venue: stringclasses (2 values)
paper_content: stringlengths (7.54k–83.7k)
prompt: stringlengths (161–2.5k)
format: stringclasses (5 values)
review: stringlengths (293–9.84k)
ICLR
Title Tackling Diverse Tasks via Cross-Modal Transfer Learning Abstract Fine-tuning large-scale pretrained models has led to remarkable progress in wellstudied modalities such as vision and NLP. However, similar gains have not been observed in many other tasks due to an assumed lack of relevant pretrained models for these diverse modalities. In this work, we revisit this assumption by studying the cross-modal transfer ability of large-scale pretrained models. We introduce ORCA, a general cross-modal fine-tuning workflow that enables fast and automatic exploitation of existing pretrained models for diverse tasks. ORCA achieves taskspecific adaptation by performing data alignment before fine-tuning: it learns an embedding network that minimizes the optimal transport dataset distance between the end-task data and the pretraining data to close the modality gap. Through extensive experiments, we show that ORCA is the first viable approach that allows practitioners to use pretrained models to outperform hand-designed, AutoMLsearched, and general-purpose architectures—ORCA obtains state-of-the-art results on 10 of 13 diverse tasks we evaluate and ranks among the top three on the others. We shed light on why cross-modal transfer works by quantifying the importance of data alignment and highlight ORCA’s utility for data-limited domains. 1 INTRODUCTION The success of machine learning (ML) in vision and natural language processing (NLP) has spurred its application beyond these traditional ML domains to diverse tasks such as solving partial differential equations (Li et al., 2021b), music modeling (Lewandowski et al., 2012), detecting cardiac disease (Hong et al., 2020), and many others. However, progress in these less-explored areas can be challenging due to (1) limited amounts of labeled data, (2) high computational cost and human effort for developing models from scratch, and (3) a lack of relevant large-scale pretrained models, which have in many cases obviated the first two issues in vision and NLP (e.g., Devlin et al., 2019; Carion et al., 2020; Dosovitskiy et al., 2021; Liu et al., 2021b; Radford et al., 2021). There are two common approaches for practitioners to handle these issues: automated machine learning (AutoML) techniques (e.g., Roberts et al., 2021; Shen et al., 2022) that focus on designing task-specific networks in a data-efficient manner; and multimodal general-purpose methods that either propose flexible architectures applicable to various tasks (Jaegle et al., 2022a) or expand the set of modalities for which pretrained models exist (e.g., Reed et al., 2022; Lu et al., 2022a). However, both classes of approaches require training from scratch when applied to a new modality and proceed under the assumption of a lack of relevant pretrained models for these diverse problems. In this work, we re-examine this assumption by considering the general problem of cross-modal transfer. Our goal is to exploit existing large-scale pretrained models in data-rich modalities for solving diverse downstream tasks. A few recent works have demonstrated the potential promise of cross-modal transfer by applying language transformers to vision (Kiela et al., 2019; Dinh et al., 2022; Lu et al., 2022b), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). 
However, many of these approaches are ad-hoc (e.g., rely on manual prompt engineering or hand-craft new architecture components to solve specific tasks), and none of them yield models competitive with those trained from scratch. We tackle both shortcomings in our work. We introduce a general-purpose, cross-modal transfer workflow called ORCA (Optimal tRansport Cross-modal Adaptation) that yields state-of-the-art results on a wide range of non-text and nonvision problems using pretrained transformers (Figure 1). Our key insight is to align the feature distribution of an unfamiliar, out-of-modality dataset with that of a familiar, in-modal dataset before fine-tuning. This data alignment process not only prevents distortion of pretrained weights but also enables cross-modal knowledge transfer, as we will show via extensive experiments in Section 4. Concretely, for any downstream task, we first generate an embedding network that maps the (potentially high-dimensional) inputs to sequence features. Then, we train it to minimize the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) between the feature-label distribution of the target data and data from the pretraining domain1. Finally, we fine-tune the pretrained model and the embedding network. Using OTDD allows us to relax many distributional assumptions required by traditional domain adaptation and perform data alignment using both the feature and label information of the target data. However, we show in an ablation study in Section 4.2.1 that substituting OTDD with other distance metrics, such as maximum mean discrepancy (MMD) (Gretton et al., 2012), can also aid cross-modal transfer, albeit to a lesser extent. This implies that it is the general idea of first-align-then-fine-tune that enables ORCA to obtain significantly better results than previous cross-modal learning methods that rely on vanilla fine-tuning (Lu et al., 2022b). We evaluate ORCA on a diverse set of 13 tasks with different input dimensions (1D and 2D), prediction types (point and dense), and modalities (vision, audio, electrocardiogram, physics, protein, genomics, cosmic-ray, and music). ORCA outperforms various competitors, including task-specific hand-designed architectures, leading AutoML methods, and general-purpose models, ranking first on 10 tasks and in the top three on all tasks. We compare ORCA with existing fine-tuning techniques and confirm that effective cross-modal transfer is only enabled by ORCA’s feature alignment process. We further reveal an empirical correlation between the alignment quality and the downstream performance. Finally, we demonstrate ORCA’s efficacy for limited-data tasks. Overall, our work not only explores the cross-modal transfer ability of pretrained models, but also establishes a practical workflow for solving diverse prediction problems efficiently and automatically. 2 RELATED WORK In this section, we review several groups of related work in the areas of AutoML, in-modal transfer learning (unimodal domain adaptation, unimodal/multimodal fine-tuning, and general purpose methods), and cross-modal transfer learning (heterogeneous domain adaptation, task-specific finetuning, and FPT). Table 1 summarizes these groups along relevant axes, and contrasts them to ORCA. 
AutoML for diverse tasks is a growing research area, as evidenced by the NAS-Bench-360 benchmark (Tu et al., 2022), along with several recent neural architecture search (NAS) methods that target this problem, e.g., AutoML-Zero (Real et al., 2020), XD (Roberts et al., 2021), and DASH (Shen et al., 2022). In contrast to these NAS methods, ORCA takes a transfer learning approach in order to leverage existing pretrained models from data-rich modalities for more esoteric tasks, rather than repeatedly incurring the overhead of designing new architectures and training them from scratch. That said, given the shared underlying motivation, our experimental evaluation makes use of the diverse tasks comprising NAS-Bench-360 and compares ORCA with its expert and AutoML baselines. We also compare against DASH, the state-of-the-art method on this benchmark. (Footnote 1: We do not assume access to the pretraining data due to practical concerns about data access and computational efficiency. We instead work with publicly available proxy data from the pretraining modality, e.g., CIFAR-10 for models pretrained on ImageNet and CoNLL-2003 for models pretrained on larger text corpora.)

Unimodal domain adaptation (DA) is a form of transductive transfer learning where the source and target tasks are the same but the domains differ (Pan & Yang, 2009; Wang & Deng, 2018). Many DA methods assume that the target data has the same input space and support as the source data, and are concerned with problems where the output spaces and the joint/marginal distributions differ, such as covariate and label shifts. Recent work considers more general settings such as different feature spaces (heterogeneous DA) or label spaces (universal DA). Our focus on cross-modal transfer goes one step further to the case where neither the input-space nor the output-space support overlaps.

Unimodal fine-tuning is a more flexible transfer approach that can be applied to downstream tasks with different label spaces or input spaces. Pretrained models are used for in-modality fine-tuning in NLP (e.g., Aghajanyan et al., 2021; Jiang et al., 2020), vision (e.g., Wei et al., 2022; Li et al., 2022), speech (e.g., Chen et al., 2022; Jiang et al., 2021), protein sequences (Jumper et al., 2021), and robotics (Ahn et al., 2022). Adapter networks (He et al., 2022) have been developed to improve the downstream performance of in-modality transfer. Multimodal fine-tuning expands the applicable modalities of a single pretrained model by learning embeddings of several data-rich modalities together (e.g., Lu et al., 2019; Radford et al., 2021; Hu & Singh, 2021; Kim et al., 2021; Alayrac et al., 2022). However, these approaches still focus on solving in-modality downstream tasks. General-purpose models propose flexible architectures applicable to various tasks such as optical flow, point clouds, and reinforcement learning (Jaegle et al., 2021; 2022a; Reed et al., 2022). These approaches train multitask transformers from scratch using a large body of data from different tasks. Though more versatile than unimodal models, they still focus on transferring to problems within the pretraining modalities considered. Nonetheless, the success of transformers for in-modality fine-tuning motivates us to focus on adapting transformer-type architectures for cross-modal transfer.

Heterogeneous DA (HDA) considers nonequivalent feature spaces between the source and target domains.
While most HDA methods are developed for same-modality-different-dimension transfer, e.g., between images of different resolutions, there are indeed a few works studying cross-modal tasks such as text-to-image (Yao et al., 2019; Li et al., 2020b). However, a crucial assumption that HDA makes is that the target and source tasks are the same. Thus, we operate in a much more flexible setting and consider knowledge transfer between drastically different domains with distinct tasks and label sets, such as applying Swin Transformers (Liu et al., 2021c) to solving partial differential equations or RoBERTa to classifying satellite images and electrocardiograms. Cross-modal, task-specific fine-tuning is a recent line of research, with most work focusing on transferring NLP models to other modalities like vision (Kiela et al., 2019), referential games (Li et al., 2020c), and reinforcement learning (Reid et al., 2022). These works provide initial evidence of the cross-modal transfer capacity of pretrained models. However, they focus on hand-tailoring to a single modality, e.g., by adding ad-hoc encoders that transform agent messages (Li et al., 2020c) or decision trajectories (Reid et al., 2022) into tokens. Even when not relying on fine-tuning, work like LIFT (Dinh et al., 2022) that attempts cross-modal learning via prompting (Liu et al., 2021a) still require ad-hoc conversion of tasks to natural text. Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a general cross-modal fine-tuning workflow that transforms input features to be compatible with the pretrained models. Although FPT and ORCA are both general-purpose workflows, FPT does not account for differences between the target and pretraining modalities, which we show is necessary to achieve accurate predictive models and outperform existing baselines. 3 ORCA WORKFLOW In this section, we first formalize the problem setup and then introduce the ORCA workflow for adapting pretrained transformers to diverse end tasks. Problem Setup. A domain D consists of a feature space X , a label space Y , and a joint probability distribution P (X ,Y). In the cross-modal setting we study, the target (end-task) domain Dt and source (pretraining) domain Ds differ not only in the feature space but also the label space and by extension have differing probability distributions, i.e., X t ̸= X s, Yt ̸= Ys, and P t(X t,Yt) ̸= P s(X s,Ys). This is in contrast to the transductive transfer learning setting addressed by domain adaptation, where source and target domains share the label space and end task (Pan & Yang, 2009). Given target data {xti, yti}i∈[nt] sampled from a joint distribution P t in domain Dt, our goal is to learn a model mt that correctly maps each input xt to its label yt. We are interested in achieving this using pretrained transformers. Thus, we assume access to a model ms that has been trained with data {xsi , ysi }i∈[ns] in the source domain Ds, where (xsi , ysi ) ∼ P s. Then, given a predefined loss function l, we aim to develop mt based on ms such that L(mt) = E(xt,yt)∼P t [l(mt(xt), yt)] is minimized. This problem formulation does not define modality explicitly and includes both inmodal and cross-modal transfer. Given the generality of the tasks we wish to explore, it is hard to provide a precise mathematical definition, so we rely on semantics to differentiate the two settings: intuitively, cross-modal domains (e.g., natural images vs. 
protein sequences) are more distinct from each other than in-modal domains (e.g., photos taken in two different geographical locations). Having defined the learning problem, we now present our three-stage cross-modal transfer workflow: (1) architecture design to support diverse input-output dimensions, (2) embedder pretraining to align the source and target feature distributions, and (3) fine-tuning to minimize the target task loss.

3.1 TASK-SPECIFIC ARCHITECTURE DESIGN
Applying pretrained models to a new downstream problem usually requires addressing mismatched dimensions. To make ORCA work for input and output tensors of different dimensions, we decompose a transformer-based learner m into three parts (Figure 1, stage 1): an embedder f that transforms the input x into a sequence of features, a model body g that applies a pretrained transformer (i.e., a series of attention layers) to the embedded features, and a predictor h that generates predictions with the desired output shape. ORCA uses the pretrained architecture and weights to initialize the model body g, but replaces f and h with layers designed to match the target data to the pretrained model's embedding dimension. Next, we describe each module in detail.

Custom Embedding Network. Denote the feature space compatible with the pretrained model body as Ẋ. For a transformer with maximum sequence length S and embedding dimension D, Ẋ = R^{S×D}. The embedding network f : X → Ẋ is designed to take in a tensor of arbitrary dimension from X and transform it to the feature space Ẋ. In ORCA, f is composed of a convolutional layer with input channel c_in, output channel c_out, kernel size k, and stride k, generalizing the patching operations used in vision transformers to 1D and higher-dimensional cases. We set c_in to the input channel of x and c_out to the embedding dimension D. To take full advantage of the representation power of the pretrained model, we choose the smallest k for which the product of the output shape, excluding the channel dimension, is at most S. That is, when we flatten the non-channel dimensions of the output tensor after the convolution, pad, and then transpose it, we obtain sequence features with shape S × D. Finally, we add a layer norm and a positional embedding to obtain ẋ. (Footnote 2: As a concrete example, consider an image tensor with shape (C_in, H_in, W_in). We first choose stride k for the convolution such that H_out × W_out ≈ S to get an output tensor with shape (D, H_out, W_out). Then, we flatten it to shape (D, H_out × W_out), pad along the last dimension to shape (D, S), and transpose.)

Pretrained Transformer Body. The model body g takes the embedding ẋ ∈ Ẋ as input and outputs features ẏ ∈ Ẏ; the dot is used to differentiate these intermediate representations from the raw inputs and labels. For transformer-based g, both the input and output feature spaces Ẋ, Ẏ are R^{S×D}.

Custom Prediction Head. Finally, the prediction head h must take ẏ ∈ Ẏ as input and return a task-dependent output tensor. Different tasks often specify different types of outputs, e.g., classification tasks require logits in R^K, where K is the number of classes, and dense prediction tasks require dense maps with the same spatial dimensions as the input and per-index logits corresponding to K classes. Thus, it is crucial to define task-specific output modules and fine-tune them when transferring to new tasks. In our workflow, we use the simplest possible instantiation of the predictor modules.
For classification, we apply average pooling along the sequence-length dimension (or take the classification token of language models) to obtain 1D tensors with length D and then use a linear layer that maps D to K. For dense prediction, we apply a linear layer to the sequence outputs so the resulting tensor has shape (S, k^{dim(Y)-1} · K), where k^{dim(Y)-1} is the downsampling factor of the embedder convolution with stride k; this upsamples by the same factor that the embedder convolution downsampled. Then, we can mold the tensor to the desired output dimension.3 With an architecture that is based on the pretrained model but is also compatible with our target task, we can now turn our attention to pretraining the embedder by matching the source and target distributions.

3.2 EMBEDDING LEARNING FOR DATA ALIGNMENT
Intuitively, transferring knowledge across similar modalities should be easier than across distant ones. Hence, given a target task in a new modality, we aim to manipulate the task data so that it becomes closer to the pretraining modality. We use the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) to measure the closeness between datasets in different domains. Unlike the classic OT distance, which operates only on the feature distributions, OTDD considers both the feature and the label information and can work even if the label sets are unrelated or disjoint. Thus, while OT is mostly used for unsupervised or semi-supervised domain adaptation (Courty et al., 2017; Yan et al., 2018), OTDD is particularly suitable for distance estimation between cross-modal labeled datasets. In the following, we briefly explain how ORCA uses OTDD and refer the reader to Alvarez-Melis & Fusi (2020) for a detailed exposition of the metric itself.

Formally, let f^s : X^s → Ẋ denote the pretrained embedder (the part of m^s that transforms the source data to sequence features) and f^t : X^t → Ẋ be a randomly initialized target embedder with the architecture discussed in the previous section. We train f^t to minimize the expected OTDD between the embedding-label distributions (f^t(x^t), y^t) and (f^s(x^s), y^s). That is, for both datasets, we first represent each class label as a distribution over the in-class features: y ↦ P(Ẋ | Y = y). This transforms the source and target label sets into the shared space of distributions over Ẋ. Then, we can define the distance d_Y(y^t, y^s) between different labels using the p-Wasserstein distance associated with a metric d_Ẋ over the feature space, e.g., the squared l2 distance ∥ẋ^t − ẋ^s∥_2^2. This allows us to measure the difference between distributions in Ẋ × Y using the following p-Wasserstein metric:

d_{Ẋ×Y}((ẋ^t, y^t), (ẋ^s, y^s)) = (d_Ẋ(ẋ^t, ẋ^s)^p + d_Y(y^t, y^s)^p)^{1/p}.   (1)

Plugging this into the OT formulation leads to the OTDD over Ẋ × Y, which we optimize to learn f^t. By leveraging the clustering structure of the datasets, OTDD provides a better distance estimate between the target and source data and demonstrates better alignment ability in practice. In Section 4.2.1, we examine several distance metrics for learning the embedder. While data alignment generally improves downstream performance for all metrics, OTDD leads to the best empirical results. As for the computational cost of embedding learning, we analyze the complexity of OTDD in Appendix A.1 and show that this stage takes much less time than the later fine-tuning stage in Appendix A.5. Thus, ORCA achieves a significant performance gain at a low cost for feature alignment. Minimal code sketches of the embedding network, the prediction head, and the embedder pretraining step are given below.
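Below is a minimal sketch of the custom embedding network for 2D inputs described in Section 3.1. The class name, the learned (rather than fixed) positional embedding, and the integer search over k are illustrative assumptions rather than the released ORCA implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEmbedder(nn.Module):
    """Maps a (C_in, H_in, W_in) input to (S, D) sequence features via a
    stride-k convolution (a generalized ViT-style patching operation)."""

    def __init__(self, c_in, d_model, seq_len, input_hw):
        super().__init__()
        h, w = input_hw
        # Smallest k such that the flattened non-channel output size is at most seq_len.
        k = 1
        while (h // k) * (w // k) > seq_len:
            k += 1
        self.proj = nn.Conv2d(c_in, d_model, kernel_size=k, stride=k)
        self.seq_len = seq_len
        self.norm = nn.LayerNorm(d_model)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))  # learned positions (assumption)

    def forward(self, x):                      # x: (B, C_in, H_in, W_in)
        z = self.proj(x)                       # (B, D, H_out, W_out)
        z = z.flatten(2)                       # (B, D, H_out * W_out)
        pad = self.seq_len - z.shape[-1]
        z = F.pad(z, (0, pad))                 # pad the sequence dimension to (B, D, S)
        z = z.transpose(1, 2)                  # (B, S, D)
        return self.norm(z) + self.pos
```

A 1D variant would replace Conv2d with Conv1d and flatten a single non-channel dimension.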
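A similar sketch of the dense-prediction head, assuming the input spatial size is exactly k times the post-convolution size and that padding bookkeeping can be ignored. The use of torch.nn.PixelShuffle follows the upsampling operation mentioned in footnote 3, but the exact reshaping here is our own.

```python
import torch.nn as nn

class DensePredictionHead(nn.Module):
    """Maps transformer outputs (B, S, D) back to a dense map (B, K, H_in, W_in)
    for 2D tasks, assuming H_in = k * H_out and W_in = k * W_out."""

    def __init__(self, d_model, num_classes, k, out_hw):
        super().__init__()
        self.k, self.num_classes = k, num_classes
        self.h_out, self.w_out = out_hw                      # spatial size after the embedder conv
        self.linear = nn.Linear(d_model, num_classes * k * k)
        self.shuffle = nn.PixelShuffle(k)                    # (B, K*k*k, H, W) -> (B, K, k*H, k*W)

    def forward(self, y_dot):                                # y_dot: (B, S, D)
        z = self.linear(y_dot)                               # (B, S, K*k*k)
        z = z[:, : self.h_out * self.w_out]                  # drop padded sequence positions
        z = z.transpose(1, 2).reshape(
            z.shape[0], self.num_classes * self.k * self.k, self.h_out, self.w_out)
        return self.shuffle(z)                               # (B, K, H_in, W_in)
```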
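Finally, a sketch of the stage-2 embedder pretraining loop. The paper minimizes OTDD over (feature, label) pairs using the microsoft/otdd package, whose API is not reproduced here; as a stand-in, this sketch plugs in a label-agnostic RBF-kernel MMD (one of the alternative metrics evaluated in Section 4.2.1), and the mean pooling, optimizer choice, and bandwidth heuristic are our own simplifications.

```python
import torch

def rbf_mmd2(x, y, bandwidth=None):
    """Squared MMD between feature batches x (n, d) and y (m, d) with an RBF kernel.
    Simple V-statistic estimator; labels are ignored in this simplified version."""
    z = torch.cat([x, y], dim=0)
    d2 = torch.cdist(z, z).pow(2)
    if bandwidth is None:
        d2_ = d2.detach()
        bandwidth = d2_[d2_ > 0].median().sqrt()   # median-distance heuristic (assumption)
    k = torch.exp(-d2 / (2 * bandwidth ** 2))
    n = x.shape[0]
    return k[:n, :n].mean() + k[n:, n:].mean() - 2 * k[:n, n:].mean()

def pretrain_embedder(embedder, target_loader, source_feats, epochs=60, lr=1e-4,
                      dataset_distance=rbf_mmd2):
    """Stage 2: train the target embedder f^t so embedded target batches move toward
    the cached source feature distribution; dataset_distance is a placeholder for the
    actual alignment objective (OTDD in the paper)."""
    opt = torch.optim.AdamW(embedder.parameters(), lr=lr)
    for _ in range(epochs):
        for x_t, _y_t in target_loader:
            z_t = embedder(x_t).mean(dim=1)                        # (B, D) pooled features
            idx = torch.randint(len(source_feats), (z_t.shape[0],))
            loss = dataset_distance(z_t, source_feats[idx])
            opt.zero_grad()
            loss.backward()
            opt.step()
    return embedder
```

The source features can be computed once with the pretrained embedder and cached, as noted in Appendix A.3.4.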
3.3 FINE-TUNING FOR DOWNSTREAM ADAPTATION
After training the embedder, we perform full fine-tuning by updating all model parameters to minimize the target loss. This step further aligns the embedder and predictor with the pretrained model to improve downstream performance. We perform an ablation study comparing ORCA to standard fine-tuning without feature matching in Section 4.2.1 and show that our approach improves prediction accuracy and reduces performance variance. There are orthogonal lines of work that study how to best fine-tune a pretrained model (e.g., Liu et al., 2022; He et al., 2022). We compare with one strategy used in FPT (Lu et al., 2022b) in Section 4.2.2 but leave further exploration for future work. (Footnote 3: As a concrete example, for an image tensor with embedding convolution kernel size k, the linear layer will yield an output of shape (S, k²K), which we transpose, pad, and reshape to (k²K, H_out, W_out). Finally, we apply pixelshuffle (Shi et al., 2016) to get an output of shape (K, H_in, W_in).)

4 EXPERIMENTS
Having introduced how ORCA addresses the dimension mismatch between the target and source datasets via architecture design and tackles the distribution mismatch via embedder learning, we now demonstrate its empirical effectiveness. In the following, we show that ORCA is the first approach that allows practitioners to obtain models better than hand-designed, AutoML-searched, and general-purpose architectures on a variety of diverse tasks. Then, we analyze key components of ORCA to better understand the mechanism underlying cross-modal transfer.

Experiment Protocol. While our workflow accepts a wide range of pretrained transformers as model bodies, we use RoBERTa (Liu et al., 2019b) and Swin Transformers (Liu et al., 2021c), which are representatives of the most studied language and vision modalities, to exemplify ORCA's efficacy. We implement the base models, which have around 100 million parameters, and use the pretrained weights available in the Hugging Face transformers library (Wolf et al., 2019). As stated in the introduction, we do not use the exact pretraining data to represent the source modalities because they are often not publicly available and can be too large to compute OTDD efficiently. We use the proxy datasets CoNLL-2003 for RoBERTa and CIFAR-10 for Swin. (CoNLL-2003 is a named entity recognition dataset that has been used to interpret language models (Jawahar et al., 2019).) For each task, we first apply the hyperparameter tuning algorithm ASHA (Li et al., 2020a) to the standard fine-tuning baseline ("Fine-tuning" in Table 3) to identify a suitable batch size, optimizer, learning rate, and weight decay. These hyperparameters are then applied to all fine-tuning baselines as well as ORCA. During embedder learning, classification tasks naturally come with the discrete labels required for computing OTDD; for dense prediction tasks, where labels are high-dimensional maps, we perform clustering on the dense maps to generate pseudo-labels, which not only preserves the intrinsic distribution of the target data but also speeds up OTDD computation. We manage our experiments using the Determined AI platform. All experiments are performed on NVIDIA V100 GPUs and results are averaged over 5 random seeds. For other experiment details, see Appendix A.3.

4.1 CAN PRETRAINED MODELS TRANSFER ACROSS MODALITY TO SOLVE DIVERSE TASKS?
In this section, we highlight the most important observation of this work: cross-modal fine-tuning with ORCA can solve a variety of tasks effectively and efficiently. To demonstrate this, we evaluate ORCA on 13 tasks, detailed below.
We first include 10 tasks from NAS-Bench-360, which cover problems such as PDE solving, protein folding, and cardiac disease detection. (NAS-Bench-360 is designed for testing how well ML algorithms can generalize and is a core component of the 2022 AutoML Decathlon competition; for a summary of the included tasks, see Table 6 in the Appendix.) This benchmark contains tasks for 1D and 2D classification and 2D dense prediction, but not 1D dense prediction, so we added JSB Chorales, a music modeling dataset widely used for evaluating recurrent networks (Chung et al., 2017; Bai et al., 2018). We also added ListOps (parsing math expressions) (Tay et al., 2021) and Homology (classifying protein structure) (Rao et al., 2019) for comparison with FPT. Together, these 13 tasks represent a wide collection of modalities for comprehensive evaluation. Following the taxonomy in Table 1, we consider three classes of baselines: (1) hand-designed expert architectures for each task, as identified by Tu et al. (2022), Rao et al. (2019), and Tay et al. (2021); (2) general-purpose models, as represented by Perceiver IO (Jaegle et al., 2022b); and (3) AutoML baselines, as represented by those evaluated in NAS-Bench-360 and DASH (Shen et al., 2022). We will compare with FPT, the only remaining approach with a general workflow from Table 1, later. In Table 2, we report the prediction error for each method on each task. ORCA achieves the lowest error rate on 10 of 13 tasks and is the most effective in terms of aggregated performance. This is also supported by the performance summary in Figure 2. More specifically, we outperform all hand-designed architectures on all tasks except ECG, where we rank second but do much better than the other methods. We also beat all AutoML baselines on all tasks except DeepSEA and NinaPro, where ORCA is second and third, respectively. The improvements from ORCA come at a small computational overhead associated with pretraining the embedder to match the source and target modalities. Table 5 in the Appendix shows the time needed for embedder learning with OTDD, which is a small portion (10.2% on average) of the fine-tuning time. ORCA's efficiency and its state-of-the-art results on 10 tasks make it a practical tool for model development in diverse areas. Our experiments further validate the findings in Lu et al. (2021) that pretrained transformers can learn knowledge transferable to seemingly unrelated tasks. In the following, we delve into the mechanism of ORCA to provide intuition for the necessary components of successful cross-modal learning.

4.2 KEY FACTORS FOR SUCCESSFUL CROSS-MODAL TRANSFER
Here, we dissect the success of cross-modal transfer with ORCA through a series of ablation studies. As a preview, we identify three aspects as key to its success: data alignment through embedding learning, full fine-tuning of all model weights, and suitable pretrained model selection.

4.2.1 MATCHING FEATURE DISTRIBUTIONS HELPS ADAPTATION
Table 2 shows that ORCA leads to effective transfer using the proposed three-stage workflow. However, as learning the embedder via OTDD is an instantiation of the general first-align-then-fine-tune paradigm, we ask: does cross-modal transfer work because of the specific application of OTDD or because of the core approach of data alignment across modalities?
To answer this question, we perform an ablation study on the embedding learning metrics and compare their performance to fine-tuning without embedder pretraining. We experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-informationbased TransRate (Huang et al., 2022), and Euclidean distance. The performance profile is in shown Figure 3, and the detailed results are shown in Appendix A.4. We highlight the following observations. First, bringing the target modality closer to the pretraining modality generally aids cross-modal transfer, regardless of which metric we minimize. This is evident from the fact that pretraining the embedder with any of the metrics can outperform vanilla fine-tuning without embedder learning on many tasks. Second, among the evaluated metrics, OTDD leads to the best overall performance. This is why we use it in our workflow. The middle rows of Table 3 demonstrate that ORCA with OTDD consistently outperforms naive fine-tuning. This supports our argument that closing the gap between a new modality and the pretraining modality can facilitate a model’s adaptation to a new task. To further isolate the impact of data alignment, we compare ORCA with a train-from-scratch baseline, which trains RoBERTa and Swin using only the target data for the same number of epochs. Table 3 shows that train-from-scratch is better than fine-tuning but worse than ORCA on many tasks like Satellite and DeepSEA, indicating that when the target modality differs significantly from the pretraining modality, naive fine-tuning may harm transfer, but aligning the feature distribution using ORCA can resolve this issue and benefit transfer. Indeed, recent work has shown that optimizing directly for the task loss may distort the pretrained weights and lead to suboptimal solutions (Kumar et al., 2022; Lee et al., 2022). By manipulating the target data distribution to look like the source distribution, we can lower the risk of catastrophic forgetting, which may explain the success of ORCA. Lastly, we perform experiments to quantify the effect of data alignment from a task-wise perspective. We train the embedder for different number of epochs before fine-tuning to see how optimizing OTDD to various levels of convergence affects downstream performance. Figure 4 plots the finetuning accuracy along with the final OTDD objective for different levels of embedder pretraining. Evidently, as the dataset distance decreases, the final fine-tuning accuracy increases. This correlation supports the effectiveness of embedder learning for cross-modal transfer. In addition, we observe that learning the embedder prior to fine-tuning can stabilize training, as the performance variance of ORCA is consistently lower than that of standard fine-tuning. 4.2.2 FINE-TUNING ALL MODEL PARAMETERS FOR CROSS-MODAL TASKS As discussed in Section 2, Frozen Pretrained Transformers (FPT) (Lu et al., 2022b) is a related work that showed pretrained language models contain knowledge relevant for out-of-modality tasks. While FPT presented a general pipeline that transfers GPT-2 to tasks like CIFAR-10, Homology, and ListOps, the resulting models were not as good as those directly trained on the target data. FPT differs from ORCA in that (1) it does not pretrain an embedder for task-specific adaptation and (2) it only fine-tunes the layer norms. We have already verified the importance of (1). Now, to isolate the impact of (2), we evaluate ORCA with fine-tuning the layer norms vs. FPT on our task set. 
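The layer-norm-only setting used in this comparison can be reproduced with a simple parameter filter, sketched below. Matching parameters by the substring "norm" in their names and always keeping the task-specific embedder and predictor trainable are assumptions that may need adjusting for a particular Hugging Face checkpoint.

```python
def freeze_all_but_layernorm(model, always_train=("embedder", "predictor")):
    """FPT-style partial fine-tuning: only layer-norm parameters (plus the
    task-specific embedder/head) receive gradients; the transformer body stays
    frozen. Matching by the substring "norm" is a naming assumption."""
    for name, param in model.named_parameters():
        trainable = "norm" in name.lower() or any(key in name for key in always_train)
        param.requires_grad = trainable
    return [p for p in model.parameters() if p.requires_grad]

# Usage sketch: pass only the trainable parameters to the optimizer, e.g.
# optimizer = torch.optim.AdamW(freeze_all_but_layernorm(model), lr=1e-4)
```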
The bottom rows of Table 3 show ORCA with fine-tuning just the layer norms outperforms FPT, indicating pretraining the embedding layers boosts the cross-modal performance of FPT. However, this performance gain is smaller than that seen in the full fine-tuning setting, which implies that full fine-tuning can take better advantage of the learned embeddings. Also, partial fine-tuning is less effective than full fine-tuning on all tasks except for DeepSEA. This exception might be due to the fact that full fine-tuning without learned embeddings is more prone to overfitting. In terms of runtime, FPT only results in less than 2x speedups compared with full fine-tuning (see Appendix A.5), despite the fact that we are updating significantly fewer parameters. This is unsurprising since gradients are still back-propagated through the entire network. Therefore, when computation allows, we recommend using ORCA with full fine-tuning for better downstream performance. 4.2.3 PRETRAINING MODALITY CAN AFFECT TRANSFER PERFORMANCE Finally, we study how the pretraining modality affects fine-tuning performance. For experiments in Table 2, we chose pretrained models for each task based on the input dimension, i.e., we use RoBERTa for all 1D tasks and Swin for all 2D tasks. Now, we can switch the model bodies and apply ORCA. This is easy to implement because ORCA is model-agnostic and the embedder architec- ture handles all necessary input transformation to obtain sequence features. As shown in Table 4, fine-tuned RoBERTa outperforms fine-tuned Swin on the 1D task, and the final OTDD objective for RoBERTa is also smaller than that of Swin. We hypothesize that this is because the considered DeepSEA data (genomics sequences) are structured more like language than images with discrete units of information and general grammatical rules. The FPT paper observes a similar trend for Homology. As for the 2D tasks, we again notice that models with better fine-tuning accuracy have smaller OTDDs. This suggests a way of selecting pretrained models from a predefined model hub for each task, e.g., by comparing the optimized OTDDs and picking the one with the smallest value. Case Study: Low-Data Regime. Now that we have a better understanding of ORCA, recall that one of our motivations for transferring pretrained models to various modalities is to help task-solving in data-limited regimes, where training models from scratch can be challenging. To this end, we investigate whether ORCA can facilitate fine-tuning large-scale models on small target datasets. Indeed, for vanilla fine-tuning, a small amount of data may not give enough signal to update the pretrained weights. However, it is possible to obtain a good feature embedder with the same amount of data using ORCA, which can then make fine-tuning easier. In Figure 5, we vary the amount of target data and plot the performance of ORCA and vanilla fine-tuning. The performance gain of ORCA increases as the amount of data used decreases. This shows that fine-tuning does suffer from limited data, but ORCA can considerably alleviate the problem and improve downstream performance. Moreover, ORCA allows us to use a third of the data to match the performance of standard fine-tuning. Thus, it can benefit model development in domains where data collection is costly. Discussion and Future Work. We identify several future directions based on our experiment results. 
First, it is worth studying the effect of the pretraining modality further and developing a systematic way of selecting pretrained models. Then, we can incorporate model selection into ORCA for a more automated transfer pipeline. Second, while ORCA leverages the simplest fine-tuning paradigm, we believe it is possible to combine it with more sophisticated transfer techniques such as adapters (He et al., 2022). We briefly study how prompting (Bahng et al., 2022; Jia et al., 2022) can be applied to diverse tasks in Appendix A.6 and find that it is in general less effective for out-of-modality problems, so we could possibly boost its performance using ORCA. Lastly, we currently evaluate ORCA on diverse 1D/2D tasks and in-modality vision tasks (Appendix A.7). It is also important to validate it in more settings, such as high-dimensional problems and reinforcement learning (Reid et al., 2022).

5 CONCLUSION
In this paper, we argue that an important step towards developing more general ML methods is to study how we can reuse existing models effectively for new and less-explored tasks. To this end, we propose a novel framework that allows transferring pretrained transformers to distinct downstream modalities. Our method, ORCA, can map target data from an arbitrary end task's modality to a model's pretraining modality to improve fine-tuning performance. We believe that this work not only signals the potential of large-scale pretraining for diverse tasks but also lays out a path for a largely uncharted data-centric paradigm in machine learning.

A APPENDIX
A.1 EMBEDDING LEARNING WITH OPTIMAL TRANSPORT DATASET DISTANCE
A.1.1 LITERATURE REVIEW
Due to limited space, we do not give a full review of the optimal transport dataset distance (OTDD) (Alvarez-Melis & Fusi, 2020) in the main text. Here, we briefly recall the optimal transport (OT) distance and explain OTDD in detail. Consider a complete and separable metric space X and let P(X) be the set of probability measures on X. For α, β ∈ P(X), let Π(α, β) be the set of joint probability distributions on X × X with marginals α and β in the first and second dimensions, respectively. Then, given a cost function c(·, ·) : X × X → R_+, the classic OT distance with cost c is defined by

OT_c(α, β) := min_{π ∈ Π(α,β)} ∫_{X×X} c(x, y) dπ(x, y).   (2)

When X is equipped with a metric d_X, we can use c(x, y) = d_X(x, y)^p for some p ≥ 1 and obtain the p-Wasserstein distance, W_p(α, β) := (OT_{d_X^p}(α, β))^{1/p}. Now consider the case of finite datasets with features in X and labels in a finite set Y. Each dataset can be considered a discrete distribution in P(X × Y). To define a distance between datasets, a natural approach is to define an appropriate cost function on Z := X × Y and consider the optimal transport distance. Indeed, for any metric d_Y on Y and any p ≥ 1, Z can be made a complete and separable metric space with the metric

d_Z((x, y), (x′, y′)) = (d_X(x, x′)^p + d_Y(y, y′)^p)^{1/p}.   (3)

It is usually not clear how to define a natural distance metric on Y, so instead we proceed by representing each class y ∈ Y by P(X | Y = y), the conditional distribution of features X given Y = y. More specifically, for a dataset D ∈ P(X × Y), denote this map from classes to conditional distributions by F(D, ·) : Y → P(X). Then we can transform any dataset over X × Y into one over X × P(X) via G(D) := (proj_X, F(D, proj_Y)).
As discussed above, W_p is a natural notion of distance on P(X), so by substituting Y ↦ P(X) and d_Y ↦ W_p in Equation 3, we can define the (p-)optimal transport dataset distance between datasets D_A and D_B by

OTDD(D_A, D_B) := OT_{(d_X^p + W_p^p)^{1/p}}(G(D_A), G(D_B)).   (4)

A.1.2 COMPUTATIONAL CONSIDERATIONS
As we aim for a practical fine-tuning workflow, computational cost is a crucial concern. While Alvarez-Melis & Fusi (2020) proposed two variants of OTDD (an exact one and a Gaussian approximation), we observe in our experiments that optimizing the exact OTDD leads to better performance. In the following, we focus on analyzing the computational cost of the exact OTDD. Given datasets with D-dimensional feature vectors, estimating vanilla OT distances can be computationally expensive and has a worst-case complexity of O(D³ log D) (Pele & Werman, 2009). However, the problem in Equation 2 with an added entropy regularization term εH(π | α ⊗ β), where H is the relative entropy and ε controls the time-accuracy trade-off, can be solved efficiently with the Sinkhorn algorithm (Cuturi, 2013). This reduces OT's empirical complexity to O(D²) and makes the time cost of computing OTDD manageable for ORCA's workflow. When implementing ORCA, we also observed memory issues when computing OTDD using the entire target and source datasets on GPUs. To alleviate this, we propose a class-wise subsampling strategy for approximating OTDD on GPUs (Algorithm 1). In short, we split the K-class target dataset into K datasets based on the labels and compute the class-wise OTDD between each single-class target dataset and the entire source dataset. Each class-wise OTDD can be approximated with the average over batch samples, similar to how stochastic gradient descent approximates gradient descent. After that, we approximate the OTDD between the target and source datasets using the weighted sum of the K class-wise OTDDs.

Algorithm 1: Efficient approximation of OTDD using class-wise subsampling. Input: target dataset {x^t, y^t}, number of target classes K^t, source dataset S = {x^s, y^s}, subsample size b, subsample rounds R. For each class i ∈ [K^t] in the target dataset, the class-wise OTDD is estimated from the subsamples and the estimates are combined as described above.

To verify that the approximation works empirically, we track the approximated OTDD (computed on GPUs) and the actual OTDD (computed on CPUs) and visualize the loss curves during ORCA's embedder learning process (Figure 6). The estimated value closely tracks the actual value. Leveraging both the Sinkhorn algorithm and the class-wise approximation, the embedder learning process takes up only a small fraction of the total fine-tuning time in practice, as shown in Table 5. Hence, we invest a modest time budget and achieve significantly improved cross-domain transfer performance using ORCA.

A.2 INFORMATION ABOUT EVALUATION TASKS
A.3 EXPERIMENT DETAILS
Below, we summarize the details for implementing ORCA and evaluating it on the selected 13 tasks. The code and configuration file for reproducing each experiment can be found in the supplementary material. We will also release ORCA's best checkpoint for each task later.

A.3.1 PRETRAINED MODELS
We evaluated ORCA with two pretrained models in our experiments. In Table 2, for all 2D tasks, including CIFAR-100, Spherical, Darcy Flow, PSICOV, Cosmic, NinaPro, and FSD50K, we use the following model. As Swin has a pretrained resolution, we reshape the inputs for our tasks to that resolution before feeding them into the model.
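A minimal sketch of the class-wise subsampling approximation described above. The otdd_fn argument is a placeholder for an exact OTDD routine (the paper uses the microsoft/otdd implementation, whose API is not reproduced here), and weighting the class-wise terms by class frequency is our own assumption.

```python
import torch

def classwise_otdd(target_x, target_y, source_x, source_y, otdd_fn,
                   subsample_size=500, rounds=3):
    """Approximates OTDD(target, source) by splitting the target set by class,
    estimating each class-wise OTDD from small random subsamples, and combining
    the estimates. otdd_fn(xa, ya, xb, yb) must return a scalar tensor."""
    classes, counts = target_y.unique(return_counts=True)
    weights = counts.float() / counts.sum()          # class-frequency weighting (assumption)
    total = 0.0
    for cls, w in zip(classes, weights):
        x_c = target_x[target_y == cls]
        y_c = target_y[target_y == cls]
        estimates = []
        for _ in range(rounds):
            idx_t = torch.randint(len(x_c), (min(subsample_size, len(x_c)),))
            idx_s = torch.randint(len(source_x), (subsample_size,))
            estimates.append(otdd_fn(x_c[idx_t], y_c[idx_t],
                                     source_x[idx_s], source_y[idx_s]))
        total = total + w * torch.stack(estimates).mean()
    return total
```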
Name: Swin-base (Liu et al., 2021c); Pretraining data: ImageNet-22K; Resolution: 224×224; Num Params: 88M; FLOPS: 15.4G; FPS: 278.
For all 1D tasks, including ECG, Satellite, DeepSEA, JSB Chorales, ListOps, and Homology, we use the following model:
Name: RoBERTa-base (Liu et al., 2019b); Pretraining data: five English-language corpora; Num Params: 125M; FLOPS: 1.64E20.
We use the Hugging Face transformers library (Wolf et al., 2019) to implement the pretrained models.

A.3.2 TASK DATA PREPARATION
For all the NAS-Bench-360 tasks, each dataset is preprocessed and split using the script available at https://github.com/rtu715/NAS-Bench-360, with the training set being used for hyperparameter tuning, embedding learning, and fine-tuning. We obtain the data processing script for the JSB data from https://github.com/locuslab/TCN, for ListOps from https://github.com/kzl/universal-computation, and for Homology from https://github.com/songlab-cal/tape.

A.3.3 HYPERPARAMETER TUNING
As ORCA is both task-agnostic and model-agnostic, it can be applied to fine-tuning a variety of pretrained transformers on drastically different end tasks with distinct datasets. Hence, it is hard to define one set of fine-tuning hyperparameters for all (model, task) pairs. At the same time, optimizing large-scale pretrained transformers can be challenging due to their large model sizes, as the downstream performance depends largely on the hyperparameters used. For instance, using a large learning rate can distort pretrained weights and lead to catastrophic forgetting. Therefore, in our experiments, given a (model, task) pair, we first apply hyperparameter tuning using the Asynchronous Successive Halving Algorithm (ASHA) (Li et al., 2020a) to the standard fine-tuning setting (i.e., after initializing the embedder and predictor architectures, directly updating all model weights to minimize the task loss) to identify a proper training configuration. Then, we use the same set of hyperparameters for all our experiments with that particular (model, task) combination. Note that even though we did not explicitly state this in the main text, the hyperparameter tuning stage can be directly integrated into the ORCA workflow between stage 1 and stage 2. In this sense, ORCA is still an automated cross-modal transfer workflow that works for diverse tasks and different pretrained models. The configuration space for ASHA is as follows:
• Batch size: 32, 128, 512, 1024 for Swin; 16, 56, 256, 512 for RoBERTa
• Optimizer: SGD, Adam, AdamW
• Learning rate: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
• Weight decay: 1E-2, 1E-3, 1E-4, 1E-5, 1E-6
Note that, to fit each experiment on a single GPU, we set a fixed batch size (32 for Swin and 16 for RoBERTa) and vary the gradient accumulation steps instead of actually varying the batch size, but the effect is the same.

A.3.4 ORCA ONLY: EMBEDDING LEARNING WITH OTDD
After initializing the embedder architecture for each task, we train it to minimize the OTDD between the embedded target features and embedded source features. For source datasets, we use CIFAR-10 for Swin and CoNLL-2003 for RoBERTa. We sample 5000 data points to compute OTDD. In practice, we can pass the source data through the pretrained embedder once and save all the embedded features, so we don't have to pay the cost of obtaining the source features each time we fine-tune a new model. For classification tasks, we directly use the labels provided by the end task to compute OTDD. For dense tasks, we perform K-Means clustering on the target data to obtain pseudo-labels for OTDD computation, as sketched below.
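A minimal sketch of this pseudo-labeling step with scikit-learn's KMeans. Flattening each dense label map to a single vector before clustering is our own simplification, and the number of clusters is left as a parameter (its value is specified next).

```python
import numpy as np
from sklearn.cluster import KMeans

def dense_to_pseudo_labels(dense_maps, n_clusters, seed=0):
    """Clusters dense target maps into discrete pseudo-labels for OTDD.
    dense_maps: array-like of shape (N, ...); each example's map is flattened
    to one vector before K-Means (our simplification)."""
    flat = np.asarray(dense_maps).reshape(len(dense_maps), -1)
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    return km.fit_predict(flat)          # (N,) integer pseudo-labels
```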
The number of clusters is set to the number of classes of the source dataset, e.g., 10 for 2D tasks that use CIFAR-10 as the source dataset. To compute the embedding learning objective, we use the OTDD implementation of the original paper provided here: https://github.com/microsoft/otdd. As for the hyperparameters, we use the batch size, learning rate, optimizer, and weight decay obtained from A.3.3. The others are fixed across different tasks: • Embedding learning epochs: 60 • Learning rate scheduler: decay by 0.2 every 20 epochs A.3.5 FINE-TUNING Besides the searched hyperparameters, we also fix the following hyperparameters for fine-tuning. • Fine-tuning epochs: 100 for Swin tasks, 60 for RoBERTa tasks • Learning rate scheduler: we use the linear decay with min lr = 0 and 5 warmup epochs A.3.6 TRAIN-FROM-SCRATCH This baseline is trained using the same hyperparameter configuration (number of epochs, batch size, learning rate, etc) as the fine-tuning baseline. A.3.7 EVALUATION When training/fine-tuning is finished, we evaluate the performance of all models following the NASBench-360 protocol. We first report results of the target metric for each task by running the model of the last epoch on the test data. Then, we report aggregate results via performance profiles (Dolan & Moré, 2002), a technique that considers both outliers and small performance differences to compare methods across multiple tasks robustly. In such plots, each curve represents one method. The τ on the x-axis denotes the fraction of tasks on which a method is no worse than a τ -factor from the best. The performance profile for our experiments is shown in Figure 2. A.4 ABLATION STUDY ON EMBEDDING LEARNING METRICS As motivated in Section 4.2.1, we present here an ablation study on the embedding learning metrics that we have considered for minimizing distribution dissimilarity. The results show that (1) performing feature alignment generally helps downstream adaptation, regardless of which metric we minimize; (2) OTDD leads to the best overall performance, so we chose it for our workflow. Our findings confirm that it is the general idea of data alignment, rather than a specific metric, that makes cross-modal transfer work. Specifically, we experiment with OTDD, maximum mean discrepancy (MMD) (Gretton et al., 2012), the mutual-information-based TransRate (Huang et al., 2022), and (pairwise) Euclidean distance. We learn the embedders to minimize these metrics and then fine-tune the pretrained models. The test errors are as follows. A.5 RUNTIME OF ORCA VS. FPT In Table 3, we compare with the FPT setting, which only fine-tunes the layer norms of the pretrained transformer models. As we have shown already, the downstream performance of fine-tuning only a subset of the parameters is less competitive than fine-tuning all parameters. Below, we show that the time saved for updating only layer norms is also not that significant. Therefore, we suggest performing full fine-tuning when time and computational resources allow. A.6 PROMPTING Apart from fine-tuning, a new paradigm of working with large-scale pretrained models is prompting, i.e., we do not update the pretrained weights but only modify the input and query the model for the desired output. Existing language prompting methods (e.g., Liu et al., 2022) are generally not suitable for cross-modal learning due to the difficulty of designing natural prompts for diverse data types. 
For the 1D tasks we study, there is even no notion of “discrete tokens.” Another line of work studies visual prompting by modifying 2D inputs for querying vision transformers. We test two such algorithms, VP (Bahng et al., 2022) and VPT (Jia et al., 2022), on three classification tasks in our task suite. They are not applicable to the remaining tasks because either the inputs cannot be reshaped to look like images or the outputs are not classification logits. We test VPT with the pretrained Swin-Base Transformer (the same model we used for ORCA) and VP with the pretrained ResNet-50 (as the official implementation does not support vision transformers). The results are shown in Table 9. In general, prompt tuning is less effective than fine-tuning, and the two baselines perform significantly worse than ORCA. This is not surprising given that prompting methods are more intuitively suited to in-modality transfer, where the target and the source data have similar structure or semantic meaning. However, when the target data (e.g., electromyography signals, as in the NinaPro dataset) is drastically different from image data, it is difficult to design prompts or expect good performance by only modifying the inputs without fine-tuning the pretrained models. A.7 COMPATIBILITY WITH IN-MODALITY TRANSFER A natural question to ask is whether ORCA can also tackle in-modality tasks. While we design ORCA to enable cross-modal transfer, we hypothesize that it should facilitate same-modality transfer if two domains have large dataset distance. To validate this, we test ORCA on DomainNet datasets, which are commonly used to evaluate homogeneous DA methods (Peng et al., 2019). From Table 10, we can see that ORCA achieves significantly better performance than the fine-tuning baseline, which shows that the feature matching of ORCA can also help in-domain generalization.
1. What is the focus of the paper regarding cross-modal transfer ability in large-scale pre-trained models? 2. What are the strengths and weaknesses of the proposed method, particularly in its application and comparisons with other works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Do you have any concerns or questions about the paper's claims and experimental results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, the authors study the cross-modal transfer ability of large-scale pre-trained models. The authors present a general workflow to fast and automatically exploit existing pre-trained models for different tasks. The authors provide some experiments to evaluate the proposed method. Strengths And Weaknesses Strengths: The authors have evaluated the proposed method on many different datasets. The motivation of this work is interesting. Weaknesses: The presentation of the paper is not clear. For example, Figure 1 cannot give us clear information (e.g., how does ORCA work? What represents cross-modal transfer?) This work lacks novelty. It utilizes the existing techniques to fine-tune models on different datasets, which is incremental. Although this paper evaluates the proposed method on many different datasets, it lacks sufficient comparison experiments with state-of-the-art baselines. The authors claim their method could achieve task-specific adaption. Why does it need fine-tune? If it could achieve task-specific adaption on the upstream pre-training, it could achieve good performance on the downstream tasks without finetuning. Clarity, Quality, Novelty And Reproducibility The paper is not clear and hard to follow. This work is incremental and lacks novelty.
ICLR
Title CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning Abstract Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets. However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state of the art deep video architectures. 1 INTRODUCTION While deep features have revolutionized static image analysis, video descriptors have struggled to outperform classic hand-crafted descriptors (Wang & Schmid, 2013). Though recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016), simpler 2D models (Wang et al., 2016b) still routinely appear among top performers in video benchmarks such as the Kinetics Challenge at CVPR’17. This raises the natural question: are videos trivially understandable by simply averaging the predictions over a sampled set of frames? At some level, the answer must be no. Reasoning about high-level cognitive concepts such as intentions, goals, and causal relations requires reasoning over long-term temporal structure and order (Shoham, 1987; Bobick, 1997). Consider, for example, the movie clip in Fig. 1 (a), where an actor leaves the table, grabs a firearm from another room, and returns. Even though no gun is visible in the final frames, an observer can easily infer that the actor is surreptitiously carrying the gun. Needless to say, any single frame from the video seems incapable of supporting that inference, and one needs to reason over space and time in order to reach that conclusion. As a simpler instance of the problem, consider the cup-and-balls magic routine1, or the gamblingbased shell game2, as shown in Fig. 1 (b). In these games, an operator puts a target object (ball) under one of multiple container objects (cups), and moves them about, possibly revealing the target at various times and recursively containing cups within other cups. The task at the end is to tell which of the cups is covering the ball. Even in its simplest instantiation, one can expect any human or computer system that solves this task to require the ability to model state of the world over long temporal horizons, reason about occlusion, understand the spatiotemporal implications of containment, etc. 
An important aspect of both our motivating examples is the adversarial nature of the task, ∗Now at Facebook AI Research 1https://en.wikipedia.org/wiki/Cups_and_balls 2https://en.wikipedia.org/wiki/Shell_game where the operator in control is trying to make the observer fail. Needless to say, a frame by frame prediction model would be incapable of solving such tasks. Given these motivating examples, why don’t spatiotemporal models dramatically outperform their static counterparts for video understanding? We posit that this is due to limitations of existing video benchmarks. Even though video datasets have evolved from the small regime with tens of labels (Soomro et al., 2012; Kuehne et al., 2011; Schuldt et al., 2004) to large with hundreds of labels (Sigurdsson et al., 2016; Kay et al., 2017), tasks have remained highly correlated to the scene and object context. For example, it is trivial to recognize a swimming action given a swimming pool in the background (He et al., 2016b). This is further reinforced by the fact that state of the art pose-based action recognition models (Yan et al., 2018) are outperformed by simpler frame-level models (Wang et al., 2016b) on the Kinetics (Kay et al., 2017) benchmark, with a difference of nearly 45% in accuracy! Sigurdsson et al. also found similar results for their Charades (Sigurdsson et al., 2016) benchmark, where adding ground truth object information gave the largest boosts to action recognition performance (Sigurdsson et al., 2017). In this work, we take an alternate approach to developing a video understanding dataset. Inspired by the recent CLEVR dataset (Johnson et al., 2017) (that explores spatial reasoning in tabletop scenes) and inspired by the adversarial parlor games above (that require temporal reasoning), we introduce CATER, a diagnostic dataset for Compositional Actions and TEmporal Reasoning in dynamic tabletop scenes. We define three tasks on the dataset, each with an increasingly higher level of complexity, but set up as classification problems in order to be comparable to existing benchmarks for easy transfer of existing models and approaches. Specifically, we consider primitive action recognition, compositional action recognition, and adversarial target tracking under occlusion and containment. However, note that this does not limit the usability of our dataset to these tasks, and we provide full metadata with the rendered videos that can be used for more complex, structured prediction tasks like detection, tracking, forecasting, and so on. Our dataset does not model an operator (or hand) moving the tabletop objects, though this could be simulated as well in future variants, as in (Rogez et al., 2015). Being synthetic, CATER can easily be scaled up in size and complexity. It also allows for detailed model diagnostics by controlling various dataset generation parameters. We use CATER to benchmark state-of-the-art video understanding models (Wang et al., 2018; 2016b; Hochreiter & Schmidhuber, 1997), and show even the best models struggle on our dataset. We also uncover some insights into the behavior of these models by changing parameters such as the temporal duration of an occlusion, the degree of camera motion, etc., which are difficult to both tune and label in real-world video data. 
2 RELATED WORK

Spatiotemporal networks: Video understanding for action recognition has evolved from iconic hand-designed models (Wang & Schmid, 2013; Laptev, 2005; Wang et al., 2011) to sophisticated spatiotemporal deep networks (Carreira & Zisserman, 2017; Simonyan & Zisserman, 2014; Girdhar et al., 2017; Wang et al., 2018; Xie et al., 2017; Tran et al., 2018; 2015). While similar developments in the image domain have led to large improvements on tasks like classification (Szegedy et al., 2016; He et al., 2016a; Huang et al., 2017) and localization (He et al., 2017; Papandreou et al., 2017), video models have struggled to out-perform previous hand-crafted descriptors (Wang & Schmid, 2013). Even within the set of deep video architectures, models capable of temporal modeling, such as RNNs (Karpathy et al., 2014) and 3D convolutions (Tran et al., 2015; Varol et al., 2017a), have not shown significantly better performance than much simpler, per-frame prediction models, such as variants of two-stream architectures (Wang et al., 2016b; Simonyan & Zisserman, 2014). Though some recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016), simple 2D models (Wang et al., 2016b) were still among the top performers in the Kinetics Challenge at CVPR’17.

Table 1: Comparison of CATER with existing video understanding datasets.

Dataset                                    Size   Len     Task   #cls    TO  STR  LTR  CSB
UCF101 (Soomro et al., 2012)               13K    7s      cls    101     ✗   ✗    ✗    ✗
HMDB51 (Kuehne et al., 2011)               5K     4s      cls    51      ✗   ✗    ✗    ✗
Kinetics (Kay et al., 2017)                300K   10s     cls    400     ✗   ✓    ✗    ✗
AVA (Gu et al., 2018)                      430    15m     det    80      ✗   ✓    ✗    ✗
VLOGs (Fouhey et al., 2018)                114K   10s     cls    30      ✗   ✓    ✗    ✗
DAHLIA (Vaquette et al., 2017)             51     39m     det    7       ✓   ✓    ✓    ✗
TACoS (Regneri et al., 2013)               127    6m      align  -       ✓   ✓    ✓    ✗
DiDeMo (Anne Hendricks et al., 2017)       10K    30s     align  -       ✓   ✓    ✓    ✗
Charades (Sigurdsson et al., 2016)         10K    30s     det    157     ✓   ✓    ✗    ✗
Something Something (Goyal et al., 2017)   108K   4s      cls    174     ✓   ✓    ✗    ✓
Diving48 (Li et al., 2018)                 18K    5s      cls    48      ✓   ✓    ✗    ✓
Cooking (Rohrbach et al., 2012a)           44     3-41m   cls    218     ✓   ✓    ✗    ✓
IKEA (Toyer et al., 2017)                  101    2-4m    gen    -       ✓   ✓    ✓    ✓
Composite (Rohrbach et al., 2012b)         212    1-23m   cls    44      ✓   ✓    ✓    ✓
TGIF-QA (Jang et al., 2017)                72K    3s      qa     -       ✓   ✓    ✗    ✗
MovieQA (Tapaswi et al., 2016)             400    200s    qa     -       ✓   ✓    ✓    ✗
Robot Pushing (Finn et al., 2016)          57K    1s      gen    -       ✓   ✓    ✗    ✓
SVQA (Song et al., 2018)                   12K    4s      qa     -       ✓   ✓    ✗    ✓
Moving MNIST (Srivastava et al., 2015)     -      2s      gen    -       ✓   ✓    ✗    ✓
Flash MNIST (Long et al., 2018)            100K   2s      cls    1024    ✗   ✓    ✗    ✓
CATER (ours)                               5.5K   10s     cls    36-301  ✓   ✓    ✓    ✓

Video action understanding datasets: There has been significant effort put into collecting video benchmarks. One line of attack employs human actors to perform scripted actions. This is typically done in controlled environments (Schuldt et al., 2004; Shahroudy et al., 2016; Ionescu et al., 2014), but recent work has pursued online crowd sourcing (Goyal et al., 2017; Sigurdsson et al., 2016). Another direction collects videos from movies and online sharing platforms. Many popular video benchmarks follow this route for diverse, in-the-wild videos, such as UCF-101 (Soomro et al., 2012), HMDB-51 (Kuehne et al., 2011) and more recently Kinetics (Kay et al., 2017) and VLOGs (Fouhey et al., 2018). As discussed earlier, such datasets struggle with the strong bias of actions with scenes and objects. Our underlying thesis is that the field of video understanding is hampered by such biases because they favor image-based baselines.
While some recent work (Goyal et al., 2017; Li et al., 2018) attempts to control for this bias, it still remains a challenge for long-term reasoning tasks. One might argue that since such biases are common in the visual world, video benchmarks should reflect them. We take the view that a diverse set of benchmarks is needed to enable comprehensive diagnostics and validation of the state-of-affairs in video understanding. Table 1 shows that CATER fills a missing gap in the benchmark landscape, most notably because of its size/video length, label distribution, relative resilience to object and scene bias, and diagnostic abilities.

Synthetic data in computer vision: Our work, being synthetically generated, is also closely related to other works using synthetic data for computer vision applications. There has been a large body of work in this direction, with the major focus on using synthetic training data for real-world applications. This includes semantic scene understanding (Dosovitskiy et al., 2017; Shah et al., 2018; Richter et al., 2017), 3D scene understanding (Girdhar et al., 2016; Su et al., 2015; Wu et al., 2016; Song et al., 2017), human understanding (Varol et al., 2017b; De Souza et al., 2017), optical flow (Butler et al., 2012; Mayer et al., 2016) and navigation, RL or embodied learning (Wu et al., 2018; Kolve et al., 2017; Kempka et al., 2016; Mnih et al., 2013). Our work, on the other hand, attempts to develop a benchmark for video-based action understanding. Similar attempts have been made for scene understanding through abstract scenes (Zitnick et al., 2016), with more recent work focusing on building a complex reasoning benchmark, CLEVR (Johnson et al., 2017). In the video domain, benchmarks such as Flash-MNIST (Long et al., 2018), Moving MNIST (Srivastava et al., 2015) and SVQA (Song et al., 2018) have been proposed. Concurrent to our work, the CLEVRER (Yi et al., 2020), PHYRE (Bakhtin et al., 2019), COPHY (Baradel et al., 2020) and IntPhys (Riochet et al., 2018) benchmarks have been proposed with a focus on causal physical reasoning through QA, RL, prediction and ranking interfaces respectively. On the other hand, CATER focuses on spatiotemporal video reasoning tasks building upon CLEVR, with a simple classification interface, making it easily amenable to existing video understanding systems.

Object tracking: Detecting and tracking objects has typically been used as an initial representation for long-term video and activity understanding (Shet et al., 2005; Hongeng et al., 2004; Lavee et al., 2009). Extensions include adversarial tracking, where the objects are designed to be hidden from plain view. It has typically been used for tasks such as determining if humans are carrying an object (Dondera et al., 2013; Ferrando et al., 2006) or detecting abandoned or exchanged objects (Tian et al., 2011; Li et al., 2006). We embrace this direction of work and include state-of-the-art deep trackers (Zhu et al., 2018) in our benchmark evaluation.

3 THE CATER DATASET

CATER provides a video understanding dataset that requires long-term temporal reasoning to be solved. Additionally, it provides diagnostic tools that can evaluate video models in specific scenarios, such as with or without camera motion, with varying numbers of objects, and so on. This control over the dataset parameters is achieved by synthetically rendering the data.
These videos come with a ground truth structure that can be used to design various video understanding tasks, including but not limited to object localization and spatiotemporal action composition. Unlike existing video understanding benchmarks, this dataset is free of object or scene bias, as the same set of simple objects is used to render the videos. Fig. 2 describes the dataset and the associated tasks. We provide sample videos from the dataset in the supplementary video.

Objects: The CATER universe is built upon CLEVR (Johnson et al., 2017), inheriting most of the standard object shapes, sizes, colors and materials present in it. This includes three object shapes (cube, sphere, cylinder), in three sizes (small, medium, large), two materials (shiny metal and matte rubber) and eight colors, as well as a large “table” plane on which all objects are placed. In addition to these objects, we add two new object shapes: inverted cones and a special object called a ‘snitch’. Cones also come in the same set of sizes, materials and colors. The ‘snitch’ is a special object shaped like three intertwined toruses in metallic gold color.

Actions: We define four atomic actions: ‘rotate’, ‘pick-place’, ‘slide’ and ‘contain’; a subset of which is afforded by each object. The ‘rotate’ action means that the object rotates by 90° about its Y (or horizontal) axis, and is afforded by cubes, cylinders and the snitch. The ‘pick-place’ action means the object is picked up into the air along the Y axis, moved to a new position, and placed down. This is afforded by all objects. The ‘slide’ action means the object is moved to a new location by sliding along the bottom surface, and is also afforded by all objects. Finally, ‘contain’ is a special operation, only afforded by the cones, in which a cone is pick-placed on top of another object, which may be a sphere, a snitch or even a smaller cone. This allows for recursive containment, as a cone can contain a smaller cone that contains another object. Once a cone ‘contains’ an object, it is constrained to only ‘slide’ actions and effectively slides all objects contained within the cone. This holds until the top-most cone is pick-placed to another location, effectively ending the containment for that top-most cone.

Animation process: We start with an initial setup similar to CLEVR. A random number (N) of objects with random parameters are spawned at random locations at the beginning of the video. They exist on a 6 × 6 portion of a 2D plane with the global origin in the center. In addition to the random objects, we ensure that every video has a snitch and a cone. For the purposes of this work, we render 300-frame 320x240px videos, at 24 FPS, making them comparable to standard benchmarks (Soomro et al., 2012; Kuehne et al., 2011; Kay et al., 2017). We split the video into 30-frame slots, and each action is contained within these slots. At the beginning of each slot, we iterate through up to K objects in a random order and attempt to add an action afforded by each object, one by one, without colliding with another object. As we describe later, we use K = 2 for our initial tasks and K = N for the final task. For each action, we pick a random start and end time from within the 30-frame slot. To further add to the diagnostic ability of this dataset, we render an additional set of videos with camera motion, with all other aspects of the data similarly distributed as in the static camera case.
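To make the slot-based scheduling concrete, here is a minimal Python sketch of the action-assignment loop described above. It is an illustration rather than the released generation code: the affordance table is simplified, minimum action lengths are arbitrary, and the collision check performed by the real renderer is stubbed out.

```python
import random

# Simplified affordance table following the 'Actions' paragraph above.
ACTIONS_BY_SHAPE = {
    "cube":     ["rotate", "pick-place", "slide"],
    "cylinder": ["rotate", "pick-place", "slide"],
    "sphere":   ["pick-place", "slide"],
    "snitch":   ["rotate", "pick-place", "slide"],
    "cone":     ["pick-place", "slide", "contain"],
}

def schedule_actions(objects, num_frames=300, slot=30, K=2, seed=0):
    """Assign up to K actions per 30-frame slot, each contained in its slot.

    `objects` is a list of dicts with a 'shape' key. The real generator also
    rejects actions that would collide with other objects; that check is
    omitted here for brevity.
    """
    rng = random.Random(seed)
    schedule = []
    for slot_start in range(0, num_frames, slot):
        for obj in rng.sample(objects, k=min(K, len(objects))):
            action = rng.choice(ACTIONS_BY_SHAPE[obj["shape"]])
            start = rng.randint(slot_start, slot_start + slot - 6)
            end = rng.randint(start + 5, slot_start + slot - 1)
            schedule.append((obj["shape"], action, start, end))
    return schedule

objs = [{"shape": s} for s in ["snitch", "cone", "cube", "sphere"]]
print(schedule_actions(objs)[:4])
```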
For this, the camera is always kept pointed towards the global origin, and moved randomly between a predefined set of 3D coordinates. These coordinates include X and Y ∈ {−10, 10} and Z ∈ {8, 10, 12}. Every 30 frames, we randomly pick a new location from the Cartesian product of X, Y, Z, and move the camera to that location over the next 30 frames. However, we do constrain the camera to not change both the X and Y coordinates at the same time, as that causes a jarring viewpoint shift as the camera passes over the (0, 0, Z) point. Also, we ensure all the camera motion videos start from the same viewpoint, to make it easy to register the axes locations for the localization task.

Spatiotemporal compositions: We wish to label our animations with the atomic actions present, as well as their compositions. Atomic actions have a well-defined spatiotemporal footprint, and so we can define composites using spatial relations (“a cylinder is rotating behind a sliding red ball”), similar to CLEVR. Unique to CATER is the ability to designate temporal relationships (“a cylinder rotates before a ball is picked-and-placed”). Because atomic actions occupy a well-defined temporal extent, we need temporal logic that reasons about relations between intervals rather than instantaneous events. While the latter can be handled with timestamps, the former can be described with Allen’s interval algebra, with thirteen basic relations (Figure 3) along with composition operations. For simplicity, we group those into three broad relations. However, our dataset contains examples of all such interval relations and can be used to explore fine-grained temporal relationships.

3.1 TASKS DEFINED ON THE DATASET

Given this CATER universe with videos, ground truth objects and their actions at any time point, we can define arbitrarily complex tasks for a video understanding system. Our choice of tasks is informed by two of the main goals of video understanding: 1) Recognizing the states of the actor, including spatiotemporal compositions of those atomic actions. For example, a spatiotemporal composition of atomic human body movements can be described as an exercise or dance routine. And 2) Recognizing the effect of those actions on the state of the world. For example, an action involving picking and placing a cup would change the position of the cup and any constituent objects contained within it, and understanding this change in the world state would implicitly require understanding the action itself. Given these two goals, we define three tasks on CATER. Each has progressively higher complexity, and tests for a higher level of reasoning ability. To be consistent with existing popular benchmarks (Soomro et al., 2012; Kuehne et al., 2011; Kay et al., 2017; Sigurdsson et al., 2016), we stick to a standard single- or multi-label classification setup, with standard evaluation metrics, as described next. For each of these tasks, we start by rendering 5500 total videos, to be comparable in size with existing popular benchmarks (Kuehne et al., 2011). Since tasks 1 and 2 (defined next) explicitly require recognizing individual actions, we use K = 2 for the videos rendered to keep the number of actions happening in any given video small. For task 3, we set K = N as the task is to recognize the end effect of actions, and not necessarily the actions themselves. We split the data randomly in a 70:30 ratio into a training and test set.
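The grouping of Allen's thirteen interval relations into the three broad classes used for the compositional labels below can be sketched in a few lines. The exact boundary handling (e.g., intervals that merely touch) is not specified in the text, so the version here is an illustrative assumption.

```python
def broad_relation(a, b):
    """Group Allen-style interval relations between actions a and b into
    three broad classes, where a and b are (start_frame, end_frame) tuples.

    Disjoint intervals map to 'before'/'after'; any overlap maps to 'during'.
    Boundary handling is an assumption, not the released annotation code.
    """
    (a_start, a_end), (b_start, b_end) = a, b
    if a_end < b_start:
        return "before"      # a finishes before b starts
    if b_end < a_start:
        return "after"       # a starts after b finishes
    return "during"          # the intervals overlap

print(broad_relation((0, 10), (15, 25)))   # before
print(broad_relation((5, 20), (10, 30)))   # during
print(broad_relation((40, 50), (0, 10)))   # after
```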
We similarly render a dataset of the same size with camera motion, and define tasks and splits in the same way as for the static camera. With the code release we also provide a further split of the train set into a validation set (80:20). While we focus on the following tasks in this paper, note that the data is amenable to many other tasks; for instance, (Malinowski et al., 2020) uses CATER for video reconstruction.

Task 1: Atomic action recognition. This first task on CATER is primarily designed as a simple debugging task, which should be easy for contemporary models to solve. Given the combinations of object shapes and actions afforded by them, we define 14 classes such as ‘slide(cone)’, ‘rotate(cube)’ and so on. Since each video can have multiple actions, we define it as a multi-label classification problem. The task is to produce 14 probability values, denoting the likelihood of each action happening in the video. The performance is evaluated using average precision per class. Final dataset-level performance is computed as the mean over all classes, to get mean average precision (mAP). This is a popular metric used in other multi-label action classification datasets (Sigurdsson et al., 2016; Gu et al., 2018).

Task 2: Compositional action recognition. While recognizing individual objects and motions is important, it is clearly not enough. Real-world actions tend to be composite in nature, and humans have no difficulty recognizing them in whole or in parts. To that end, we construct a compositional action recognition task through spatiotemporal composition of the basic actions used in Task 1. For simplicity, we limit composites to pairs of the 14 atomic actions, where the temporal relation is grouped into broad categories of ‘before’, ‘during’ and ‘after’ as shown in Figure 3. Combining all possible atomic actions with the three possible relations, we get a total of 14 × 14 × 3 = 588 classes, and removing duplicates (e.g., ‘X after Y’ is a duplicate of ‘Y before X’) leaves 301 classes. Similar to task 1, multiple compositions can be active in any given video, so we set it up as a multi-label classification problem, evaluated using mAP. If certain compositions never occur in the dataset, those are ignored for the final evaluation.

Task 3: Snitch localization. The final, and flagship, task in CATER tests models’ ability to recognize the effect of actions on the environment. Just as in the case of the cup-and-balls trick, the ability of a model to recognize the location of objects after some activity can be thought of as an implicit evaluation of its ability to understand the activity itself. The task is to predict the location of the special object introduced above, the snitch. While it may seem trivial to localize it from the last frame, it may not always be possible to do that due to occlusions and recursive containments. The snitch can be contained by other objects (cones), which can further be contained by other larger cones. All objects move together until ‘uncontained’, so predicting the final location of the snitch can require long-range reasoning about these interactions. For simplicity, we pose this as a classification problem by quantizing the 6 × 6 grid into 36 cells and asking which cell the snitch is in at the end of the video. We ablate the grid size in experiments. Since the snitch can only be at a single location at the end of the video, we set up the problem as single-label classification, and evaluate it using standard percentage accuracy metrics such as top-1 and top-5 accuracy.
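A minimal sketch of the grid quantization used for the localization labels: the snitch's final ground-plane position is mapped to one of grid × grid cells. The 6 × 6 table extent follows the description above, but the row-major indexing and exact binning convention of the released code are assumptions.

```python
def position_to_cell(x, y, grid=6, extent=6.0):
    """Quantize a ground-plane position into a cell index of a grid x grid layout.

    The table spans extent x extent units centered at the origin (6 x 6 here),
    giving 36 cells in the default setting. Cells are indexed row-major;
    boundary positions are clamped into the valid range.
    """
    def to_index(v):
        idx = int((v + extent / 2) / extent * grid)
        return min(max(idx, 0), grid - 1)
    return to_index(y) * grid + to_index(x)

print(position_to_cell(-2.9, -2.9))   # 0  (a corner cell)
print(position_to_cell(0.1, 0.1))     # 21 (near the center)
print(position_to_cell(2.9, 2.9))     # 35 (opposite corner)
```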
However, one issue with this metric is that it would penalize predictions where the snitch is slightly over the cell boundaries. While the top-5 metric is somewhat robust to this issue, we also report the mean L1 distance of the predicted grid cell from the ground truth, as a metric that is cognizant of the grid structure in this task. Hence, it penalizes confusion between adjacent cells less than confusion between distant cells. The data is also amenable to a purely regression-style evaluation, though we leave that to future work.

4 EXPERIMENTS

We now experiment with CATER using recently introduced state-of-the-art video understanding and temporal reasoning models (Carreira & Zisserman, 2017; Wang et al., 2018; 2016b; Hochreiter & Schmidhuber, 1997). I3D (Carreira & Zisserman, 2017), called R3D when implemented using a ResNet (He et al., 2016a) in (Wang et al., 2018), brings the best of image models to the video domain by inflating them into 3D for spatiotemporal feature learning. Non-local networks (Wang et al., 2018) further build upon that to add a spatiotemporal interaction layer that gives strong improvements and out-performs many multi-stream architectures (that use audio, flow, etc.) on the Kinetics and Charades benchmarks. For our main task, snitch localization, we also experiment with a 2D-conv based approach, Temporal Segment Networks (TSN) (Wang et al., 2016b), which is another top-performing method on standard benchmarks (Kay et al., 2017). This approach uses both RGB and flow modalities. All these architectures learn a model for individual frames or short clips, and at test time aggregate the predictions by averaging over those clips. While simple averaging works well enough on most recent datasets (Kay et al., 2017; Soomro et al., 2012; Kuehne et al., 2011), it clearly loses all temporal information and may not be well suited to our set of tasks. Hence, we also experiment with a learned aggregation strategy: specifically, using an LSTM (Hochreiter & Schmidhuber, 1997) for aggregation, which is the tool of choice for temporal modelling in various domains including language and audio. We use a common LSTM implementation for aggregating either (Wang et al., 2016b) or (Wang et al., 2018) that operates on the last-layer features (before logits). We extract these features for subclips from train and test videos, and train a 2-layer LSTM with 512 hidden units in each layer on the train subclips. The LSTM produces an output at each clip it sees, and we enforce a classification loss at the end, once the model has seen all the clips. At test time we take the prediction from the last clip as the aggregated prediction. We report the LSTM performance averaged over three runs to control for random variation. It is worth noting that LSTMs have been previously used for action recognition in videos (Donahue et al., 2015; Karpathy et al., 2014), though with only marginal success over simple average pooling. As we show later, LSTMs actually perform significantly better on CATER, indicating the importance of temporal reasoning.

For task 3, we also experiment with a state-of-the-art visual tracking method (Zhu et al., 2018). We start by using the GT information of the starting position of the snitch, and project it to screen coordinates using the render camera parameters. We define a fixed-size box around it to initialize the tracker, and run it until the end of the video.
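The tracker initialization described above can be sketched with a standard pinhole projection: the snitch's known 3D starting position is mapped to pixel coordinates and a fixed-size box is placed around it. The intrinsics, pose, and box size below are placeholders, not the dataset's actual render camera parameters.

```python
import numpy as np

def world_to_pixel(p_world, K, R, t):
    """Project a 3D world point into pixel coordinates with a pinhole model.

    K is the 3x3 intrinsic matrix, (R, t) the world-to-camera rotation and
    translation. In practice these come from the render camera; the values
    below are illustrative placeholders.
    """
    p_cam = R @ np.asarray(p_world, dtype=float) + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Placeholder calibration for a 320x240 render.
K = np.array([[280.0, 0.0, 160.0],
              [0.0, 280.0, 120.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, -2.0, 12.0])

u, v = world_to_pixel([1.0, 0.5, 0.0], K, R, t)
box = (u - 15, v - 15, u + 15, v + 15)   # fixed-size box used to initialize the tracker
print(box)
```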
At the last frame, we project the center point of the tracked box to the 3D plane (and eventually, the class label) by using a homography transformation between the image and the 3D plane. This provides a more traditional, symbolic reasoning baseline for our dataset, and as we show in the results, it is also not enough to solve the task. Finally, we note that many other video models have been proposed in the literature, involving 2.5D convolutions (Tran et al., 2018; Xie et al., 2017), VLAD-style aggregation (Girdhar et al., 2017; Miech et al., 2017) and other multi-modal architectures (Wang et al., 2016a; Bian et al., 2017). We focus on the most popular and best performing models, and leave a more comprehensive study to future work. A random baseline is also provided for all tasks, computed as the average performance of random scores passed into the evaluation functions. Implementation details for all baselines are provided in the supplementary, and code will be released.

Task 1: Atomic action recognition: Table 2 (a) shows the performance of R3D with and without the non-local (NL) blocks, using different numbers of frames in the clips. We use a fixed sampling rate of 8, but experiment with different clip sizes. Adding more frames helps significantly in this case. Given the ease of the task, R3D obtains fairly strong performance for the static camera, but not so much for the moving camera, suggesting potential future work in building models agnostic to camera motion.

Table 2: Performance on the (a) 14-way atomic action recognition, (b) 301-way compositional action recognition, and (c) 36-way localization task, for different methods.

(a) Task 1 (Atomic)

Camera   Model    NL   #frames   mAP
-        Random   -    -         56.2
Static   R3D      -    8         89.0
Static   R3D      ✓    8         88.8
Static   R3D      -    32        98.8
Static   R3D      ✓    32        98.9
Moving   R3D      -    8         82.4
Moving   R3D      ✓    8         82.7
Moving   R3D      -    32        90.5
Moving   R3D      ✓    32        90.2

(b) Task 2 (Compositional)

Camera   Model    NL   #frames   mAP (Avg)   mAP (LSTM)
-        Random   -    -         19.5        19.5
Static   R3D      -    8         39.5        52.1
Static   R3D      -    32        44.2        53.4
Static   R3D      ✓    32        45.9        53.1
Static   R3D      -    64        43.7        43.5
Moving   R3D      -    32        40.9        43.2
Moving   R3D      ✓    32        41.1        43.5

(c) Task 3 (Localization)

                                           Avg                    LSTM
Camera   Model        #frames   SR    Top 1   Top 5   L1     Top 1   Top 5   L1
-        Random       -         -     2.8     13.8    3.9    2.8     13.8    3.9
Static   Tracking     -         -     33.9    -       2.4    33.9    -       2.4
Static   TSN (RGB)    1         -     7.4     27.0    3.9    15.3    50.0    3.0
Static   TSN (RGB)    3         -     14.1    38.5    3.2    25.6    67.2    2.6
Static   TSN (Flow)   1         -     6.2     21.7    4.4    7.3     26.9    4.1
Static   TSN (Flow)   3         -     9.6     32.2    3.7    14.0    43.5    3.2
Static   R3D          8         8     24.0    54.8    2.7    34.2    64.6    1.8
Static   R3D          16        8     26.2    56.3    2.6    24.2    48.9    2.5
Static   R3D          32        8     28.8    68.7    2.6    45.5    67.7    1.6
Static   R3D          64        8     57.4    78.4    1.4    60.2    81.8    1.2
Static   R3D + NL     32        8     26.7    68.9    2.6    46.2    69.9    1.5
Moving   R3D          32        8     23.4    61.1    2.5    28.6    63.3    1.7
Moving   R3D + NL     32        8     27.5    68.8    2.4    38.6    70.2    1.5

Table 3: Long-term reasoning. Comparing the best reported performance of standard models on existing datasets and CATER (task 3). Unlike previous benchmarks, (1) temporal modeling using LSTM helps and (2) local temporal cues (flow) are not effective by themselves on CATER. 2S here refers to ‘Two Stream’. TSN performance from (Xiong, 2017; 2016).

Models                                  Kinetics   UCF-101   HMDB-51   CATER
1 frame (RGB) (Donahue et al., 2015)    -          67.4      -         7.4
LSTM (RGB) (Donahue et al., 2015)       -          68.2      -         15.3
TSN (RGB) (Wang et al., 2016b)          72.5       93.2      51.0      14.1
TSN (Flow) (Wang et al., 2016b)         62.8       95.3      64.2      9.6
2S I3D (Carreira & Zisserman, 2017)     75.7       98.0      80.7      -
2S R(2+1)D (Tran et al., 2018)          75.4       97.3      78.7      -
R3D(+NL) (Wang et al., 2018)            77.7       -         -         57.4
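The LSTM aggregation whose numbers appear in Tables 2 (b) and (c) can be summarized with a short PyTorch sketch: pre-logit clip features are fed as a sequence and only the output after the last clip is classified, matching the description above. The 2048-d feature size and the toy training step are assumptions for illustration, not the authors' released code.

```python
import torch
import torch.nn as nn

class ClipLSTMAggregator(nn.Module):
    """Aggregate per-clip backbone features with a 2-layer, 512-unit LSTM."""

    def __init__(self, feat_dim=2048, hidden=512, num_classes=36):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip_feats):
        # clip_feats: (batch, num_clips, feat_dim), e.g. pre-logit R3D or TSN features
        out, _ = self.lstm(clip_feats)
        return self.fc(out[:, -1])     # classify from the output after the last clip

# Toy usage: 10 clips per video, 2048-d features, snitch-localization head (36 cells).
model = ClipLSTMAggregator()
feats = torch.randn(4, 10, 2048)
labels = torch.randint(0, 36, (4,))
loss = nn.CrossEntropyLoss()(model(feats), labels)   # loss enforced only at the end
loss.backward()
```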
Task 2: Compositional action recognition: Next we experiment with the compositional action recognition task. The training and testing are done in the same way as Task 1, except the model predicts confidences over 301 classes. As evident from Table 2 (b), this task is harder for the existing models, presumably as recognizing objects and simple motions would no longer solve it, and models need to reason about spatiotemporal compositions as well. It is interesting to note that non-local blocks now add to the final performance, which was not the case for Task 1, suggesting that modeling spatiotemporal relations is more useful for this task. LSTM aggregation also helps quite a bit, as the model can learn to reason about long-range temporal compositions. As expected, a moving camera makes the problem harder.

Task 3: Snitch localization: Finally we turn to the localization task. Since this is set up as single-label classification, we use a softmax cross-entropy loss to train and classification accuracy for evaluation. For tracking, no training is required as we use the pre-trained model from (Zhu et al., 2018) and run it on the validation videos. Table 2 (c) shows the performance of various methods, evaluated at different clip lengths and frame rates. For this task we also experiment with TSN (Wang et al., 2016b), though it ends up performing significantly worse than R3D. Note that this contrasts with standard video datasets (Kay et al., 2017), where it tends to perform similarly to R3D (Xiong, 2017). We also experiment with the flow modality and observe it obtains even lower performance, which is expected, as this task requires recognizing objects, which is much harder from flow. Again, note that flow models obtain similar if not better performance than RGB on standard datasets (Kay et al., 2017; Xiong, 2017). We also note higher performance when considering longer clips with a higher sampling rate. This is not surprising, as a task like this requires long-term temporal reasoning, which is aided by looking at longer videos. This is also reinforced by the observation that using an LSTM for aggregation leads to a major improvement in performance for most models. Finally, the tracking approach also only solves about a third of the videos, as even the state-of-the-art tracker ends up drifting due to occlusions and contain operations. In Table 4, we ablate the performance with respect to the underlying grid granularity, with 6 × 6 being the default used in Table 2 (c). We observe tracking is a stronger baseline as the localization task gets more fine-grained. Finally, in Table 3 we compare the performance of some of these models on existing benchmarks and CATER.

Analysis: Having close control over the dataset generation process enables us to perform diagnostics impossible with any previous dataset. We use the R3D+NL, 32-frame, static camera model with average (or LSTM, when specified) pooling for all following visualizations. We first analyze aggregate performance of our model over multiple bins in Figure 4, and observe some interesting phenomena. (a) Performance drops if the snitch keeps moving until the end. This makes sense: if the snitch reaches its final position early in the video, models have a lot more frames to reinforce their hypothesis of its final location. Between LSTM and avg-pooling, LSTM is much better able to handle the motion of the snitch, as expected.
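The grid-aware L1 metric reported in Table 2 (c) and used across the granularity ablation can be computed directly from cell indices, as in the short sketch below (row-major cell indexing is assumed).

```python
import numpy as np

def grid_l1(pred_cell, gt_cell, grid=6):
    """L1 (Manhattan) distance between predicted and ground-truth grid cells.

    Cells are indexed row-major in a grid x grid layout, so adjacent-cell
    confusions are penalized less than distant ones. `grid` matches the
    granularity being ablated (6x6 by default).
    """
    pr, pc = divmod(pred_cell, grid)
    gr, gc = divmod(gt_cell, grid)
    return abs(pr - gr) + abs(pc - gc)

preds = np.array([14, 15, 35])
gts   = np.array([14, 20, 0])
print(np.mean([grid_l1(p, g) for p, g in zip(preds, gts)]))   # mean L1 over examples
```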
Perhaps not surprisingly, the tracker is much less affected by snitch movement, indicating the power of such classic computational pipelines for long-term spatiotemporal understanding. (b) Performance drops if the snitch is contained at the end. Being contained in the final frame makes the snitch harder to spot and track (just like the cup-and-balls game!), hence the lower performance. Next, we visualize the videos that our model gets right or wrong. We sort all validation videos based on the softmax confidence score for the ground truth class, and visualize the top and bottom six in Figure 5 (full video in supplementary). We find that the easiest videos for the avg-pooled model tend to be ones with little snitch motion, i.e., the object stays at the position it starts off in. On the other hand, the LSTM-aggregated model fares better with snitch motion, as long as it happens early in the video. The hardest videos for both tend to be ones with sudden motion of the snitch towards the end of the video, as shown by the bright golden trail denoting the motion towards the end (better viewed in the supplementary video). These observations are supported by the quantitative plots in Figure 4 (a) and (c).

5 CONCLUSION

We use CATER to analyze several leading network designs on hard spatiotemporal tasks. We find most models struggle on our proposed dataset, especially on the snitch localization task, which requires long-term reasoning. Interestingly, average pooling of clip predictions or short temporal cues (optical flow) perform rather poorly on CATER, unlike on most previous benchmarks. Such temporal reasoning challenges are common in the real world (e.g., Fig. 1 (a)), and solving them would be the cornerstone of the next improvements in machine video understanding. We believe CATER would serve as an intermediary in building systems that will reason over space and time to understand actions. That said, CATER is, by no means, a complete solution to the video understanding problem. Like any other synthetic or simulated dataset, it should be considered in addition to real-world benchmarks. While we have focused on classification tasks for simplicity, our fully-annotated dataset can be used for much richer parsing tasks such as spacetime action localization. One of our findings is that while high-level semantic tasks such as activity recognition may be addressable with current architectures given a richly labeled dataset, “mid-level” tasks such as tracking still pose tremendous challenges, particularly under long-term occlusions and containment. We believe addressing such challenges will enable broader temporal reasoning tasks that capture intentions, goals, and causal behavior.

ACKNOWLEDGMENTS

The authors would like to thank Ishan Misra for many helpful discussions and help with systems. This research is based upon work supported in part by NSF Grant 1618903.

B TRAIN/VAL DISTRIBUTIONS

Figure 6 shows the data distribution over classes for each of the tasks.

C VIDEO VISUALIZATION

The supplementary video visualizes:

1. Sample videos from the dataset (with and without camera motion).

2. Easiest and hardest videos for task 3. We rank all validation videos for task 3 based on their softmax probability for the correct class. We show the top-6 (easiest) and bottom-6 (hardest) for the 32-frame stride-8 non-local + LSTM model. We observe the hardest ones involve sudden motion towards the end of the video. This reinforces the observation made in Figure 5 (a) in the main paper, that videos where the snitch keeps moving till the end are the hardest.
If the snitch stops moving earlier, models have more evidence for the final location of the snitch, making the task easier.

3. Tracking results. We visualize the results of tracking the snitch over the video as one approach to solving task 3. We observe that while it works in simple scenarios, it fails when there is a lot of occlusion or complex contain operations.

4. Model bottom-up attention. We visualize where the model looks for Task 3. As suggested in (Malinowski et al., 2018), we visualize the l2-norm of the last-layer features from our 32-frame stride-8 non-local model on the center video crop. A deep red color denotes a large norm value at that spatiotemporal location. We find that the model automatically learns to focus on the snitch towards the end of clips, which makes sense as that is the most important object for solving the localization task.

Supplementary video: https://rohitgirdhar.github.io/CATER/assets/suppl/video.mp4
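A minimal sketch of the l2-norm visualization from item 4 above: per-location feature norms from the last convolutional block are upsampled to frame resolution and normalized for overlay. The (C, T, H, W) layout and the min-max normalization are assumptions; the original visualization follows (Malinowski et al., 2018).

```python
import torch
import torch.nn.functional as F

def feature_norm_heatmap(feats, out_size=(240, 320)):
    """Spatial 'attention' map as the l2-norm of last-layer features.

    feats: (C, T, H, W) activations from the final block of a 3D network
    (layout is an assumption). Returns a (T, out_h, out_w) map, upsampled to
    the frame resolution and normalized to [0, 1] for overlaying on frames.
    """
    norms = feats.norm(p=2, dim=0)                          # (T, H, W)
    norms = F.interpolate(norms.unsqueeze(1), size=out_size,
                          mode="bilinear", align_corners=False).squeeze(1)
    norms = (norms - norms.min()) / (norms.max() - norms.min() + 1e-6)
    return norms

heat = feature_norm_heatmap(torch.randn(2048, 4, 8, 10))
print(heat.shape)   # torch.Size([4, 240, 320])
```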
1. What are the strengths and weaknesses of the proposed dataset compared to existing ones?
2. How does the paper address the issue of scene biases in videos?
3. Can the authors provide more information on the resolution of the generated videos?
4. Did the authors consider downsampling the videos to enable processing all frames?
5. Why did the authors choose not to use more than 64 frames for Task 3, given the improvement with additional frames?
6. How do the proposed tasks account for temporal ordering, short- and long-term reasoning, and control for scene biases?
7. Are there any plans to expand the dataset to include more diverse objects and actions?
Review
Review

This paper introduces a new synthetic video understanding dataset, borrowing many ideas from the visual question answering dataset CLEVR. The new dataset is the first to account for all of the following fundamental aspects of videos: temporal ordering, short- and long-term reasoning, and control for scene biases. Due to the inherent biases in available action recognition datasets, models that simply average video frames do nearly as well as models that take temporal dependencies into account. In contrast, the authors show that with the proposed dataset, models without spatiotemporal reasoning largely fail. The paper should be accepted as it addresses a major shortcoming of all existing video understanding datasets. It does a good job of summarizing the deficiencies in existing datasets, clearly motivating the need for a new dataset. The claims are backed up with solid experiments, ablating models and data parameters adequately. It is mostly well-written (except for section 4, which would benefit from extensive proofreading) and does a good job of covering relevant work. One drawback is of course the synthetic nature and limited domain of objects and actions. On the other hand, this makes the setup highly controllable and reliable. I like the fact that each task comes with both a static and a moving camera.

Improvements and Questions: Some relevant datasets are missing. For example, the Moving MNIST and Robot Pushing datasets could be added to Table 1. I suggest having a train / validation / test split (like CLEVR), rather than just a train and validation split. In particular, for Task 3 more frames seem to give a dramatic improvement. Why did you not run with more than 64 frames? Did you consider downsampling the videos to allow running on all the frames? I’m missing details on the resolution of the generated videos.
ICLR
Title CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning Abstract Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets. However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state of the art deep video architectures. 1 INTRODUCTION While deep features have revolutionized static image analysis, video descriptors have struggled to outperform classic hand-crafted descriptors (Wang & Schmid, 2013). Though recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016), simpler 2D models (Wang et al., 2016b) still routinely appear among top performers in video benchmarks such as the Kinetics Challenge at CVPR’17. This raises the natural question: are videos trivially understandable by simply averaging the predictions over a sampled set of frames? At some level, the answer must be no. Reasoning about high-level cognitive concepts such as intentions, goals, and causal relations requires reasoning over long-term temporal structure and order (Shoham, 1987; Bobick, 1997). Consider, for example, the movie clip in Fig. 1 (a), where an actor leaves the table, grabs a firearm from another room, and returns. Even though no gun is visible in the final frames, an observer can easily infer that the actor is surreptitiously carrying the gun. Needless to say, any single frame from the video seems incapable of supporting that inference, and one needs to reason over space and time in order to reach that conclusion. As a simpler instance of the problem, consider the cup-and-balls magic routine1, or the gamblingbased shell game2, as shown in Fig. 1 (b). In these games, an operator puts a target object (ball) under one of multiple container objects (cups), and moves them about, possibly revealing the target at various times and recursively containing cups within other cups. The task at the end is to tell which of the cups is covering the ball. Even in its simplest instantiation, one can expect any human or computer system that solves this task to require the ability to model state of the world over long temporal horizons, reason about occlusion, understand the spatiotemporal implications of containment, etc. 
An important aspect of both our motivating examples is the adversarial nature of the task, ∗Now at Facebook AI Research 1https://en.wikipedia.org/wiki/Cups_and_balls 2https://en.wikipedia.org/wiki/Shell_game where the operator in control is trying to make the observer fail. Needless to say, a frame by frame prediction model would be incapable of solving such tasks. Given these motivating examples, why don’t spatiotemporal models dramatically outperform their static counterparts for video understanding? We posit that this is due to limitations of existing video benchmarks. Even though video datasets have evolved from the small regime with tens of labels (Soomro et al., 2012; Kuehne et al., 2011; Schuldt et al., 2004) to large with hundreds of labels (Sigurdsson et al., 2016; Kay et al., 2017), tasks have remained highly correlated to the scene and object context. For example, it is trivial to recognize a swimming action given a swimming pool in the background (He et al., 2016b). This is further reinforced by the fact that state of the art pose-based action recognition models (Yan et al., 2018) are outperformed by simpler frame-level models (Wang et al., 2016b) on the Kinetics (Kay et al., 2017) benchmark, with a difference of nearly 45% in accuracy! Sigurdsson et al. also found similar results for their Charades (Sigurdsson et al., 2016) benchmark, where adding ground truth object information gave the largest boosts to action recognition performance (Sigurdsson et al., 2017). In this work, we take an alternate approach to developing a video understanding dataset. Inspired by the recent CLEVR dataset (Johnson et al., 2017) (that explores spatial reasoning in tabletop scenes) and inspired by the adversarial parlor games above (that require temporal reasoning), we introduce CATER, a diagnostic dataset for Compositional Actions and TEmporal Reasoning in dynamic tabletop scenes. We define three tasks on the dataset, each with an increasingly higher level of complexity, but set up as classification problems in order to be comparable to existing benchmarks for easy transfer of existing models and approaches. Specifically, we consider primitive action recognition, compositional action recognition, and adversarial target tracking under occlusion and containment. However, note that this does not limit the usability of our dataset to these tasks, and we provide full metadata with the rendered videos that can be used for more complex, structured prediction tasks like detection, tracking, forecasting, and so on. Our dataset does not model an operator (or hand) moving the tabletop objects, though this could be simulated as well in future variants, as in (Rogez et al., 2015). Being synthetic, CATER can easily be scaled up in size and complexity. It also allows for detailed model diagnostics by controlling various dataset generation parameters. We use CATER to benchmark state-of-the-art video understanding models (Wang et al., 2018; 2016b; Hochreiter & Schmidhuber, 1997), and show even the best models struggle on our dataset. We also uncover some insights into the behavior of these models by changing parameters such as the temporal duration of an occlusion, the degree of camera motion, etc., which are difficult to both tune and label in real-world video data. 
2 RELATED WORK Spatiotemporal networks: Video understanding for action recognition has evolved from iconic hand-designed models (Wang & Schmid, 2013; Laptev, 2005; Wang et al., 2011) to sophisticated Dataset Size Len Task #cls TO STR LTR CSB UCF101 (Soomro et al., 2012) 13K 7s cls 101 7 7 7 7 HMDB51 (Kuehne et al., 2011) 5K 4s cls 51 7 7 7 7 Kinetics (Kay et al., 2017) 300K 10s cls 400 7 3 7 7 AVA (Gu et al., 2018) 430 15m det 80 7 3 7 7 VLOGs (Fouhey et al., 2018) 114K 10s cls 30 7 3 7 7 DAHLIA (Vaquette et al., 2017) 51 39m det 7 3 3 3 7 TACoS (Regneri et al., 2013) 127 6m align - 3 3 3 7 DiDeMo (Anne Hendricks et al., 2017) 10K 30s align - 3 3 3 7 Charades (Sigurdsson et al., 2016) 10K 30s det 157 3 3 7 7 Something Something (Goyal et al., 2017) 108K 4s cls 174 3 3 7 3 Diving48 (Li et al., 2018) 18K 5s cls 48 3 3 7 3 Cooking (Rohrbach et al., 2012a) 44 3-41m cls 218 3 3 7 3 IKEA (Toyer et al., 2017) 101 2-4m gen - 3 3 3 3 Composite (Rohrbach et al., 2012b) 212 1-23m cls 44 3 3 3 3 TFGIF-QA (Jang et al., 2017) 72K 3s qa - 3 3 7 7 MovieQA (Tapaswi et al., 2016) 400 200s qa - 3 3 3 7 Robot Pushing (Finn et al., 2016) 57K 1s gen - 3 3 7 3 SVQA (Song et al., 2018) 12K 4s qa - 3 3 7 3 Moving MNIST (Srivastava et al., 2015) - 2s gen - 3 3 7 3 Flash MNIST (Long et al., 2018) 100K 2s cls 1024 7 3 7 3 CATER (ours) 5.5K 10s cls 36-301 3 3 3 3 spatiotemporal deep networks (Carreira & Zisserman, 2017; Simonyan & Zisserman, 2014; Girdhar et al., 2017; Wang et al., 2018; Xie et al., 2017; Tran et al., 2018; 2015). While similar developments in the image domain have lead to large improvements on tasks like classification (Szegedy et al., 2016; He et al., 2016a; Huang et al., 2017) and localization (He et al., 2017; Papandreou et al., 2017), video models have struggled to out-perform previous hand-crafted descriptors (Wang & Schmid, 2013). Even within the set of deep video architectures, models capable of temporal modeling, such as RNNs (Karpathy et al., 2014) and 3D convolutions (Tran et al., 2015; Varol et al., 2017a) have not shown significantly better performance than much simpler, per-frame prediction models, such as variants of two-stream architectures (Wang et al., 2016b; Simonyan & Zisserman, 2014). Though some recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016), simple 2D models (Wang et al., 2016b) were still among the top performers in the Kinetics Challenge at CVPR’17. Video action understanding datasets: There has been significant effort put forth to collecting video benchmarks. One line of attack employs human actors to perform scripted actions. This is typically done in controlled environments (Schuldt et al., 2004; Shahroudy et al., 2016; Ionescu et al., 2014), but recent work has pursued online crowd sourcing (Goyal et al., 2017; Sigurdsson et al., 2016). Another direction collects videos from movies and online sharing platforms. Many popular video benchmarks follow this route for diverse, in-the-wild videos, such as UCF-101 (Soomro et al., 2012), HMDB-51 (Kuehne et al., 2011) and more recently Kinetics (Kay et al., 2017) and VLOGs (Fouhey et al., 2018). As discussed earlier, such datasets struggle with the strong bias of actions with scenes and objects. Our underlying thesis is that the field of video understanding is hampered by such biases because they favor image-based baselines. 
While some recent work (Goyal et al., 2017; Li et al., 2018) attempts to control for this bias, it still remains a challenge for long-term reasoning tasks. One might argue that since such biases are common in the visual world, video benchmarks should reflect them. We take the view that a diverse set of benchmarks are needed to enable comprehensive diagnostics and validation of the state-of-affairs in video understanding. Table 1 shows that CATER fills a missing gap in the benchmark landscape, most notably because of its size/video length, label distribution, relative resilience to object and scene bias, and diagnostic abilities. Synthetic data in computer vision: Our work, being synthetically generated, is also closely related to other works in using synthetic data for computer vision applications. There has been a large body of work in this direction, with the major focus on using synthetic training data for real world applications. This includes semantic scene understanding (Dosovitskiy et al., 2017; Shah et al., 2018; Richter et al., 2017), 3D scene understanding (Girdhar et al., 2016; Su et al., 2015; Wu et al., 2016; Song et al., 2017), human understanding (Varol et al., 2017b; De Souza et al., 2017), optical flow (Butler et al., 2012; Mayer et al., 2016) and navigation, RL or embodied learning (Wu et al., 2018; Kolve et al., 2017; Kempka et al., 2016; Mnih et al., 2013). Our work, on the other hand, attempts to develop a benchmark for video based action understanding. Similar attempts have been made for scene understanding through abstract scenes (Zitnick et al., 2016), with more recently focusing on building a complex reasoning benchmark, CLEVR (Johnson et al., 2017). In the video domain, benchmarks such as Flash-MNIST (Long et al., 2018), Moving MNIST (Srivastava et al., 2015) and SVQA (Song et al., 2018) have been proposed. Concurrent to us, CLEVRER (Yi et al., 2020), PHYRE (Bakhtin et al., 2019), COPHY (Baradel et al., 2020) and IntPhys (Riochet et al., 2018) benchmarks have been proposed with a focus on causal physical reasoning through QA, RL, prediction and ranking interfaces respectively. On the other hand, CATER focuses on spatiotemporal video reasoning tasks building upon CLEVR, with a simple classification interface, making it easily amenable for existing video understanding systems. Object tracking: Detecting and tracking objects has typically been used as an initial representation for long-term video and activity understanding (Shet et al., 2005; Hongeng et al., 2004; Lavee et al., 2009). Extensions include adversarial tracking, where the objects are designed to be hidden from plain view. It has typically been used for tasks such as determining if humans are carrying an object (Dondera et al., 2013; Ferrando et al., 2006) or abandoned / exchanging objects (Tian et al., 2011; Li et al., 2006). We embrace this direction of work and include state-of-the-art deep trackers (Zhu et al., 2018) in our benchmark evaluation. 3 THE CATER DATASET CATER provides a video understanding dataset that requires long term temporal reasoning to be solved. Additionally, it provides diagnostic tools that can evaluate video models in specific scenarios, such as with or without camera motion, with varying number of objects and so on. This control over the dataset parameters is achieved by synthetically rendering the data. 
These videos come with a ground truth structure that can be used to design various different video understanding tasks, including but not limited to object localization and spatiotemporal action composition. Unlike existing video understanding benchmarks, this dataset is free of object or scene bias, as the same set of simple objects are used to render the videos. Fig. 2 describes the dataset and the associated tasks. We provide sample videos from the dataset in the supplementary video. Objects: The CATER universe is built upon CLEVR (Johnson et al., 2017), inheriting most of the standard object shapes, sizes, colors and materials present in it. This includes three object shapes (cube, sphere, cylinder), in three sizes (small, medium, large), two materials (shiny metal and matte rubber) and eight colors, as well as a large “table” plane on which all objects are placed. In addition to these objects, we add two new object shapes: inverted cones and a special object called a ‘snitch’. Cones also come in the same set of sizes, materials and colors. The ‘snitch’ is a special object shaped like three intertwined toruses in metallic gold color. Actions: We define four atomic actions: ‘rotate’, ‘pick-place’, ‘slide’ and ‘contain’; a subset of which is afforded by each object. The ‘rotate’ action means that the object rotates by 90°about its Y (or horizontal) axis, and is afforded by cubes, cylinders and the snitch. The ‘pick-place’ action means the object is picked up into the air along the Y axis, moved to a new position, and placed down. This is afforded by all objects. The ‘slide’ action means the object is moved to a new location by sliding along the bottom surface, and is also afforded by all objects. Finally, ‘contain’ is a special operation, only afforded by the cones, in which a cone is pick-placed on top of another object, which may be a sphere, a snitch or even a smaller cone. This allows for recursive containment, as a cone can contain a smaller cone that contains another object. Once a cone ‘contains’ an object, it is constrained to only ‘slide’ actions and effectively slides all objects contained within the cone. This holds until the top-most cone is pick-placed to another location, effectively ending the containment for that top-most cone. Animation process: We start with an initial setup similar to CLEVR. A random number (N ) of objects with random parameters are spawned at random locations at the beginning of the video. They exist on a 6× 6 portion of a 2D plane with the global origin in the center. In addition to the random objects, we ensure that every video has a snitch and a cone. For the purposes of this work, we render 300-frame 320x240px videos, at 24 FPS, making it comparable to standard benchmarks (Soomro et al., 2012; Kuehne et al., 2011; Kay et al., 2017). We split the video into 30-frame slots, and each action is contained within these slots. At the beginning of each slot, we iterate through up to K objects in a random order and attempt to add an action afforded by that object one by one without colliding with another object. As we describe later, we use K = 2 for our initial tasks and K = N for the final task. For each action, we pick a random start and end time from within the 30-frame slot. To further add to the diagnostic ability of this dataset, we render an additional set of videos with camera motion, with all other aspects of the data similarly distributed as the static camera case. 
For this, the camera is always kept pointed towards the global origin, and moved randomly between a predefined set of 3D coordinates. These coordinates include X and Y ∈ {−10, 10} and Z ∈ {8, 10, 12}. Every 30 frames, we randomly pick a new location from the Cartesian product of X,Y, Z, and move the camera to that location over the next 30 frames. However, we do constrain the camera to not change both X and Y coordinates at the same time, as that causes a jarring viewpoint shift as the camera passes over the (0, 0, Z) point. Also, we ensure all the camera motion videos start from the same viewpoint, to make it easy to register the axes locations for localization task. Spatiotemporal compositions: We wish to label our animations with the atomic actions present, as well as their compositions. Atomic actions have a well-defined spatiotemporal footprint, and so we can define composites using spatial relations (“a cylinder is rotating behind a sliding red ball”), similar to CLEVR. Unique to CATER is the ability to designate temporal relationships (“a cylinder rotates before a ball is picked-and-placed”). Because atomic actions occupy a well-defined temporal extent, we need temporal logic that reasons about relations between intervals rather than instantaneous events. While the latter can be dealt with timestamps, the former can be described with Allen’s interval algebra with thirteen basic relations (Figure 3) along with composition operations. For simplicity, we group those into three broad relations. However, our dataset contains examples of all such interval relations and can be used to explore fine-grained temporal relationships. 3.1 TASKS DEFINED ON THE DATASET Given this CATER universe with videos, ground truth objects and their actions at any time point, we can define arbitrarily complex tasks for a video understanding system. Our choice of tasks is informed by two of the main goals of video understanding: 1) Recognizing the states of the actor, including spatiotemporal compositions of those atomic actions. For example, a spatiotemporal composition of atomic human body movements can be described as an exercise or dance routine. And 2) Recognizing the effect of those actions on the state of the world. For example, an action involving picking and placing a cup would change the position of the cup and any constituent objects contained within it, and understanding this change in the world state would implicitly require understanding the action itself. Given these two goals, we define three tasks on CATER. Each has progressively higher complexity, and tests for a higher level reasoning ability. To be consistent with existing popular benchmarks (Soomro et al., 2012; Kuehne et al., 2011; Kay et al., 2017; Sigurdsson et al., 2016), we stick to standard single or multi-label classification setup, with standard evaluation metrics, as described next. For each of these tasks, we start by rendering 5500 total videos, to be comparable in size with existing popular benchmarks (Kuehne et al., 2011). Since tasks 1 and 2 (defined next) explicitly require recognizing individual actions, we use K = 2 for the videos rendered to keep the number of actions happening in any given video small. For task 3, we set K = N as the task is to recognize the end effect of actions, and not necessarily the actions themselves. We split the data randomly in 70:30 ratio into a training and test set. 
We similarly render a same size dataset with camera motion, and define tasks and splits in the same way as for the static camera. With the code release we also provide a further split of train set into a validation set (80:20). While we focus on the following tasks in this paper, note that the data is amenable to many other tasks, for instance (Malinowski et al., 2020) uses CATER for video reconstruction. Task 1: Atomic action recognition. This first task on CATER is primarily designed as a simple debugging task, which should be easy for contemporary models to solve. Given the combinations of object shapes and actions afforded by them, we define 14 classes such as ‘slide(cone)’, ‘rotate(cube)’ and so on. Since each video can have multiple actions, we define it as a multi-label classification problem. The task is to produce 14 probability values, denoting the likelihood of that action happening in the video. The performance is evaluated using average precision per-class. Final dataset-level performance is computed by mean over all classes, to get mean average precision (mAP). This is a popular metric used in other multi-label action classification datasets (Sigurdsson et al., 2016; Gu et al., 2018). Task 2: Compositional action recognition. While recognizing individual objects and motions is important, it is clearly not enough. Real world actions tend to be composite in nature, and humans have no difficulty recognizing them in whole or in parts. To that end, we construct a compositional action recognition task through spatiotemporal composition of the basic actions used in Task 1. For simplicity, we limit composites to pairs of 14 atomic actions, where the temporal relation is grouped into broad categories of ‘before’, ‘during’ and ‘after’ as shown in Figure 3. Combining all possible atomic actions with the three possible relations, we get a total of 14 × 14 × 3 = 588 classes, and removing duplicates (such as ‘X after Y’ is a duplicate of ‘Y before X’), leaves 301 classes. Similar to task 1, multiple compositions can be active in any given video, so we set it up as a multi-label classification problem, evaluated using mAP. If certain compositions never occur in the dataset, those are ignored for the final evaluation. Task 3: Snitch localization. The final, and the flagship task in CATER, tests models’ ability to recognize the effect of actions on the environment. Just as in the case of cup-and-ball trick, the ability of a model to recognize location of objects after some activity can be thought of as an implicit evaluation of its ability to understand the activity itself. The task now is to predict the location of the special object introduced above, the Snitch. While it may seem trivial to localize it from the last frame, it may not always be possible to do that due to occlusions and recursive containments. The snitch can be contained by other objects (cones), which can further be contained by other larger cones. All objects move together until ‘uncontained’, so the final location of the snitch would require long range reasoning about these interactions. For simplicity, we pose this as a classification problem by quantizing the 6 × 6 grid into 36 cells and asking which cell the snitch is in, at the end of the video. We ablate the grid size in experiments. Since the snitch can only be at a single location at the end of the video, we setup the problem as a single label classification, and evaluate it using standard percentage accuracy metrics such as top-1 and top-5 accuracy. 
However, one issue with this metric is that it would penalize predictions where the snitch is slightly over the cell boundaries. While the top-5 metric is somewhat robust to this issue, we also report mean L1 distance of the predicted grid cell from the ground truth, as a metric that is cognizant of the grid structure in this task. Hence, it would penalize confusion between adjacent cells less than those between distant cells. The data is also amenable to a purely regression-style evaluation, though we leave that to future work. 4 EXPERIMENTS We now experiment with CATER using recently introduced state of the art video understanding and temporal reasoning models (Carreira & Zisserman, 2017; Wang et al., 2018; 2016b; Hochreiter & Schmidhuber, 1997). I3D (Carreira & Zisserman, 2017), called R3D when implemented using a ResNet (He et al., 2016a) in (Wang et al., 2018), brings the best of image models to the video domain by inflating them into 3D for spatiotemporal feature learning. Non-local networks (Wang et al., 2018) further build upon that to add a spatiotemporal interaction layer that gives strong improvements and out-performs many multi-stream architectures (that use audio, flow, etc.) on the Kinetics and Charades benchmarks. For our main task, snitch localization, we also experiment with a 2D-conv based approach, Temporal Segment Networks (TSN) (Wang et al., 2016b), which is another top performing method on standard benchmarks (Kay et al., 2017). This approach uses both RGB and flow modalities. All these architectures learn a model for individual frames or short clips, and at test time aggregate the predictions by averaging over those clips. While simple averaging works well enough on most recent datasets (Kay et al., 2017; Soomro et al., 2012; Kuehne et al., 2011), it clearly loses all temporal information and may not be well suited to our set of tasks. Hence, we also experiment with a learned aggregation strategy: specifically using an LSTM (Hochreiter & Schmidhuber, 1997) for aggregation, which is the tool of choice for temporal modelling in various domains including language and audio. We use a common LSTM implementation for aggregating either (Wang et al., 2016b) or (Wang et al., 2018) that operates on the last layer features (before logits). We extract these features for subclips from train and test videos, and train a 2-layer LSTM with 512 hidden units in each layer on the train subclips. The LSTM produces an output at each clip it sees, and we enforce a classification loss at the end, once the model has seen all the clips. At test time we take the prediction from the last clip as the aggregated prediction. We report the LSTM performance averaged over three runs to control for random variation. It is worth noting that LSTMs have been previously used for action recognition in videos (Donahue et al., 2015; Karpathy et al., 2014), however with only marginal success over simple average pooling. As we show later, LSTMs actually perform significantly better on CATER, indicating the importance of temporal reasoning. For task 3, we also experiment with a state-of-the-art visual tracking method (Zhu et al., 2018). We start by using the GT information of the starting position of the snitch, and project it to screen coordinates using the render camera parameters. We define a fixed-size box around it to initialize the tracker, and run it until the end of the video.
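Stepping back to the learned LSTM aggregation described above, a minimal PyTorch sketch is given below; the feature dimension, number of clips, and the single-label localization head are our assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class ClipLSTMAggregator(nn.Module):
    """Aggregate per-clip features (extracted from a video backbone) with a 2-layer LSTM."""
    def __init__(self, feat_dim=2048, hidden=512, num_classes=36):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip_feats):           # clip_feats: (batch, num_clips, feat_dim)
        outputs, _ = self.lstm(clip_feats)   # outputs: (batch, num_clips, hidden)
        return self.fc(outputs[:, -1])       # classify only after the last clip is seen

# Training sketch: the loss is applied once the model has consumed all clips of a video.
model = ClipLSTMAggregator()
feats = torch.randn(8, 10, 2048)             # 8 videos, 10 clips each (assumed shapes)
loss = nn.CrossEntropyLoss()(model(feats), torch.randint(0, 36, (8,)))
```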
At the last frame, we project the center point of the tracked box to the 3D plane (and eventually, the class label) by using a homography transformation between the image and the 3D plane. This provides a more traditional, symbolic reasoning baseline for our dataset, and as we show in the results, is also not enough to solve the task. Finally, we do note that many other video models have been proposed in the literature involving 2.5D convolutions (Tran et al., 2018; Xie et al., 2017), VLAD-style aggregation (Girdhar et al., 2017; Miech et al., 2017) and other multi-modal architectures (Wang et al., 2016a; Bian et al., 2017). We focus on the most popular and best performing models, and leave a more comprehensive study to future work. A random baseline is also provided for all tasks, computed as the average performance of random scores passed into the evaluation functions. Implementation details for all baselines are provided in the supplementary and code will be released. Task 1: Atomic action recognition: Table 2 (a) shows the performance of R3D with and without the non-local (NL) blocks, using different numbers of frames in the clips. We use a fixed sampling rate of 8, but experiment with different clip sizes. Adding more frames helps significantly in this case. Given the ease of the task, R3D obtains fairly strong performance for the static camera, but not so much for the moving camera, suggesting potential future work in building models agnostic to camera motion.
Table 2: Performance on the (a) 14-way atomic action recognition, (b) 301-way compositional action recognition, and (c) 36-way localization task, for different methods.
(a) Task 1 (Atomic). Columns: Camera, Model, NL, #frames, mAP.
-       Random    -   -    56.2
Static  R3D           8    89.0
Static  R3D       X   8    88.8
Static  R3D           32   98.8
Static  R3D       X   32   98.9
Moving  R3D           8    82.4
Moving  R3D       X   8    82.7
Moving  R3D           32   90.5
Moving  R3D       X   32   90.2
(b) Task 2 (Compositional). Columns: Camera, Model, NL, #frames, mAP (Avg), mAP (LSTM).
-       Random    -   -    19.5   19.5
Static  R3D           8    39.5   52.1
Static  R3D           32   44.2   53.4
Static  R3D       X   32   45.9   53.1
Static  R3D           64   43.7   43.5
Moving  R3D           32   40.9   43.2
Moving  R3D       X   32   41.1   43.5
(c) Task 3 (Localization). Columns: Camera, Model, #frames, SR, Avg (Top 1 / Top 5 / L1), LSTM (Top 1 / Top 5 / L1).
-       Random       -    -   2.8 / 13.8 / 3.9     2.8 / 13.8 / 3.9
Static  Tracking     -    -   33.9 / - / 2.4       33.9 / - / 2.4
Static  TSN (RGB)    1    -   7.4 / 27.0 / 3.9     15.3 / 50.0 / 3.0
Static  TSN (RGB)    3    -   14.1 / 38.5 / 3.2    25.6 / 67.2 / 2.6
Static  TSN (Flow)   1    -   6.2 / 21.7 / 4.4     7.3 / 26.9 / 4.1
Static  TSN (Flow)   3    -   9.6 / 32.2 / 3.7     14.0 / 43.5 / 3.2
Static  R3D          8    8   24.0 / 54.8 / 2.7    34.2 / 64.6 / 1.8
Static  R3D          16   8   26.2 / 56.3 / 2.6    24.2 / 48.9 / 2.5
Static  R3D          32   8   28.8 / 68.7 / 2.6    45.5 / 67.7 / 1.6
Static  R3D          64   8   57.4 / 78.4 / 1.4    60.2 / 81.8 / 1.2
Static  R3D + NL     32   8   26.7 / 68.9 / 2.6    46.2 / 69.9 / 1.5
Moving  R3D          32   8   23.4 / 61.1 / 2.5    28.6 / 63.3 / 1.7
Moving  R3D + NL     32   8   27.5 / 68.8 / 2.4    38.6 / 70.2 / 1.5
Table 3: Long term reasoning. Comparing the best reported performance of standard models on existing datasets and CATER (task 3). Unlike previous benchmarks, (1) temporal modeling using LSTM helps and (2) local temporal cues (flow) are not effective by themselves on CATER. 2S here refers to ‘Two Stream’. TSN performance from (Xiong, 2017; 2016). Columns: Models, Kinetics, UCF-101, HMDB-51, CATER.
1 frame (RGB) (Donahue et al., 2015)      -     67.4  -     7.4
LSTM (RGB) (Donahue et al., 2015)         -     68.2  -     15.3
TSN (RGB) (Wang et al., 2016b)            72.5  93.2  51.0  14.1
TSN (Flow) (Wang et al., 2016b)           62.8  95.3  64.2  9.6
2S I3D (Carreira & Zisserman, 2017)       75.7  98.0  80.7  -
2S R(2+1)D (Tran et al., 2018)            75.4  97.3  78.7  -
R3D(+NL) (Wang et al., 2018)              77.7  -     -     57.4
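Returning to the tracking baseline described at the start of this section, the back-projection at the last frame can be sketched with a homography as below (OpenCV; the choice of correspondence points and the cell convention are our assumptions, not the exact pipeline).

```python
import numpy as np
import cv2

def plane_homography(img_pts, plane_pts):
    """Estimate the homography mapping image pixels to ground-plane (x, y) coordinates,
    given four or more known correspondences (e.g. plane markers visible in the frame)."""
    H, _ = cv2.findHomography(np.float32(img_pts), np.float32(plane_pts))
    return H

def tracked_box_to_cell(box, H, grid=6, extent=6.0):
    """Project the tracked box center onto the plane and quantize it into a grid cell."""
    cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
    px, py = cv2.perspectiveTransform(np.float32([[[cx, cy]]]), H)[0, 0]
    bucket = lambda v: min(grid - 1, max(0, int((v + extent / 2.0) / (extent / grid))))
    return bucket(py) * grid + bucket(px)
```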
Task 2: Compositional action recognition: Next we experiment with the compositional action recognition task. The training and testing are done in the same way as in Task 1, except that the model now predicts confidences over 301 classes. As evident from Table 2 (b), this task is harder for the existing models, presumably as recognizing objects and simple motions would no longer solve it, and models need to reason about spatiotemporal compositions as well. It is interesting to note that non-local blocks now add to the final performance, which was not the case for Task 1, suggesting modeling spatiotemporal relations is more useful for this task. LSTM aggregation also helps quite a bit as the model can learn to reason about long-range temporal compositions. As expected, a moving camera makes the problem harder. Task 3: Snitch localization: Finally we turn to the localization task. Since this is set up as a single-label classification problem, we use softmax cross entropy loss to train and classification accuracy for evaluation. For tracking, no training is required as we use the pre-trained model from (Zhu et al., 2018) and run it on the validation videos. Table 2 (c) shows the performance of various methods, evaluated at different clip lengths and frame rates. For this task we also experiment with TSN (Wang et al., 2016b), though it ends up performing significantly worse than R3D. Note that this contrasts with standard video datasets (Kay et al., 2017), where it tends to perform similarly to R3D (Xiong, 2017). We also experiment with the flow modality and observe it obtains even lower performance, which is expected as this task requires recognizing objects, which is much harder from flow. Again, note that flow models obtain similar if not better performance than RGB on standard datasets (Kay et al., 2017; Xiong, 2017). We also note higher performance when considering longer clips with a higher sample rate. This is not surprising as a task like this would require long-term temporal reasoning, which is aided by looking at longer videos. This is also reinforced by the observation that using LSTM for aggregation leads to a major improvement in performance for most models. Finally, the tracking approach also only solves about a third of the videos, as even the state of the art tracker ends up drifting due to occlusions and ‘contain’ operations. In Table 4, we ablate the performance with respect to the underlying grid granularity, with 6 × 6 being the default used in Table 2 (c). We observe tracking is a stronger baseline as the localization task gets more fine-grained. Finally, in Table 3, we compare the performance of some of these models on existing benchmarks and CATER. Analysis: Having close control over the dataset generation process enables us to perform diagnostics impossible with any previous dataset. We use the R3D+NL, 32-frame, static camera model with average (or LSTM, when specified) pooling for all following visualizations. We first analyze the aggregate performance of our model over multiple bins in Figure 4, and observe some interesting phenomena. (a) Performance drops if the snitch keeps moving until the end. This makes sense: if the snitch reaches its final position early in the video, models have a lot more frames to reinforce their hypothesis of its final location. Between LSTM and avg-pooling, LSTM is much better able to handle the motion of the snitch, as expected.
Perhaps not surprisingly, the tracker is much less affected by snitch movement, indicating the power of such classic computational pipelines for long-term spatiotemporal understanding. (b) Performance drops if the snitch is contained at the end. Being contained in the final frame makes the snitch harder to spot and track (just like the cups and ball game!), hence the lower performance. Next, we visualize the videos that our model gets right or wrong. We sort all validation videos based on the softmax confidence score for the ground truth class, and visualize the top and bottom six in Figure 5 (full video in the supplementary). We find that the easiest videos for the avg-pooled model tend to be ones with little snitch motion, i.e., the object stays at the position it starts off in. On the other hand, the LSTM-aggregated model fares better with snitch motion, as long as it happens early in the video. The hardest videos for both tend to be ones with sudden motion of the snitch towards the end of the video, as shown by the bright golden trail denoting the motion towards the end (better viewed in the supplementary video). These observations are supported by the quantitative plots in Figure 4 (a) and (c). 5 CONCLUSION We use CATER to analyze several leading network designs on hard spatiotemporal tasks. We find most models struggle on our proposed dataset, especially on the snitch localization task, which requires long-term reasoning. Interestingly, average pooling of clip predictions and short temporal cues (optical flow) perform rather poorly on CATER, unlike most previous benchmarks. Such temporal reasoning challenges are common in the real world (e.g., Fig. 1 (a)), and solving those would be the cornerstone of the next improvements in machine video understanding. We believe CATER would serve as an intermediary in building systems that will reason over space and time to understand actions. That said, CATER is, by no means, a complete solution to the video understanding problem. Like any other synthetic or simulated dataset, it should be considered in addition to real world benchmarks. While we have focused on classification tasks for simplicity, our fully-annotated dataset can be used for much richer parsing tasks such as spacetime action localization. One of our findings is that while high-level semantic tasks such as activity recognition may be addressable with current architectures given a richly labeled dataset, “mid-level” tasks such as tracking still pose tremendous challenges, particularly under long-term occlusions and containment. We believe addressing such challenges will enable broader temporal reasoning tasks that capture intentions, goals, and causal behavior. ACKNOWLEDGMENTS The authors would like to thank Ishan Misra for many helpful discussions and help with systems. This research is based upon work supported in part by NSF Grant 1618903. B TRAIN/VAL DISTRIBUTIONS Figure 6 shows the data distribution over classes for each of the tasks. C VIDEO VISUALIZATION The supplementary video3 visualizes: 1. Sample videos from the dataset (with and without camera motion). 2. Easiest and hardest videos for task 3. We rank all validation videos for task 3 based on their softmax probability for the correct class. We show the top-6 (easiest) and bottom-6 (hardest) for the 32-frame stride-8 non-local + LSTM model. We observe the hardest ones involve sudden motion towards the end of the video. This reinforces the observation made in Figure 5(a) in the main paper, that videos where the snitch keeps moving till the end are the hardest.
If the snitch stops moving earlier, models have more evidence for the final location of the snitch, making the task easier. 3. Tracking results. We visualize the results of tracking the snitch over the video as one approach to solving task 3. We observe that while it works in simple scenarios, it fails when there is heavy occlusion or there are complex ‘contain’ operations. 4. Model bottom-up attention. We visualize where the model looks for Task 3. As suggested in (Malinowski et al., 2018), we visualize the l2-norm of the last layer features from our 32-frame stride-8 non-local model on the center video crop. The deep red color denotes a large norm value at that spatiotemporal location. We find that the model automatically learns to focus on the snitch towards the end of clips, which makes sense as that is the most important object for solving the localization task.
3 https://rohitgirdhar.github.io/CATER/assets/suppl/video.mp4
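A rough sketch of the feature-norm visualization in item 4 is given below; it assumes access to the backbone's last-layer feature map of shape (C, T, H, W), and the normalization and upsampling choices are ours rather than the authors'.

```python
import torch
import torch.nn.functional as F

def feature_norm_heatmap(feats, out_size=(240, 320)):
    """feats: (C, T, H, W) activations from the last layer of the video backbone.
    Returns a (T, out_h, out_w) heatmap of per-location l2 norms for overlaying on frames."""
    norms = feats.norm(p=2, dim=0)                                   # (T, H, W)
    norms = (norms - norms.min()) / (norms.max() - norms.min() + 1e-8)
    up = F.interpolate(norms[None, None], size=(norms.shape[0],) + out_size,
                       mode='trilinear', align_corners=False)        # upsample to frame size
    return up[0, 0]
```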
1. What is the focus of the paper, and what are the research questions or problems that it addresses? 2. What are the strengths of the proposed dataset, CATER, and how does it contribute to the field of video understanding? 3. How do the authors evaluate the performance of various models on the three tasks in CATER, and what are the findings? 4. What are some minor comments, questions, or editing notes that could improve the clarity and impact of the paper?
Review
Review The paper introduces CATER: a synthetically generated dataset for video understanding tasks. The dataset is an extension of CLEVR using simple motions of primitive 3D objects to produce videos of primitive actions (e.g. pick and place a cube), compositional actions (e.g. "cone is rotated during the sliding of the sphere"), and finally a 3D object localization tasks (i.e. where is the "snitch" object at the end of the video). The construction of the dataset focuses on demonstrating that compositional action classification and long-term temporal reasoning for action understanding and localization in videos are largely unsolved problems, and that frame aggregation-based methods on real video data in prior work datasets, have found relative success not because the tasks are easy but because of dataset bias issues. A variety of models from recent work are evaluated on the three proposed tasks, demonstrating the validity of the above motivation for the construction of the dataset. The primitive action classification task is "solved" by nearly all methods and only serves for debugging purposes. The compositional action classification task is harder and shows that incorporating LSTMs for temporal reasoning leads to non-trivial performance improvements over frame averaging. Finally, the localization task is challenging, especially when camera motion is introduced, with much space for improvement left for future work. I am positive with respect to acceptance of this paper. It is a well-argued, thoughtful dataset contribution that sets up a reasonable video understanding dataset. The authors recognize that since the dataset is synthetically generated it is not necessarily predictive of how methods would perform with real-world data, but still it can serve a useful and complementary role similar to the one CLEVR has served in image understanding. I have a few minor comments / questions / editing notes that would be good to address: - The random baseline isn't described in the main text, it would be good to briefly mention it (this will also help to clarify why the value is particularly high for tasks 1 and 2) - The grid resolution ablation results presented in the supplement are actually quite important -- they demonstrate that with a small increase in granularity of the grid the traditional tracking methods begin to be the best performers. As this direction (of increased resolution to make the problem less artificial) is likely to be important, a brief discussion of this finding from the main paper text would be appropriate - p3 resiliance -> resilience - p4 objects is moved -> object is moved - p6 actions itself -> actions themselves; builds upon -> build upon - p7 looses all -> loses all; suited our -> suited to our; render's camera parameters -> render camera parameters; to solve it -> to solve the problem - p8 (Xiong, b;a) and (Xiong, b) -> these references are missing the year; models needs to -> models need to - p9 phenomenon -> phenomena; the the videos -> the videos; these observation -> these observations; of next -> of the next; in real world -> in the real world
ICLR
Title CATER: A diagnostic dataset for Compositional Actions & TEmporal Reasoning Abstract Computer vision has undergone a dramatic revolution in performance, driven in large part through deep features trained on large-scale supervised datasets. However, much of these improvements have focused on static image analysis; video understanding has seen rather modest improvements. Even though new datasets and spatiotemporal models have been proposed, simple frame-by-frame classification methods often still remain competitive. We posit that current video datasets are plagued with implicit biases over scene and object structure that can dwarf variations in temporal structure. In this work, we build a video dataset with fully observable and controllable object and scene bias, and which truly requires spatiotemporal understanding in order to be solved. Our dataset, named CATER, is rendered synthetically using a library of standard 3D objects, and tests the ability to recognize compositions of object movements that require long-term reasoning. In addition to being a challenging dataset, CATER also provides a plethora of diagnostic tools to analyze modern spatiotemporal video architectures by being completely observable and controllable. Using CATER, we provide insights into some of the most recent state of the art deep video architectures. 1 INTRODUCTION While deep features have revolutionized static image analysis, video descriptors have struggled to outperform classic hand-crafted descriptors (Wang & Schmid, 2013). Though recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016), simpler 2D models (Wang et al., 2016b) still routinely appear among top performers in video benchmarks such as the Kinetics Challenge at CVPR’17. This raises the natural question: are videos trivially understandable by simply averaging the predictions over a sampled set of frames? At some level, the answer must be no. Reasoning about high-level cognitive concepts such as intentions, goals, and causal relations requires reasoning over long-term temporal structure and order (Shoham, 1987; Bobick, 1997). Consider, for example, the movie clip in Fig. 1 (a), where an actor leaves the table, grabs a firearm from another room, and returns. Even though no gun is visible in the final frames, an observer can easily infer that the actor is surreptitiously carrying the gun. Needless to say, any single frame from the video seems incapable of supporting that inference, and one needs to reason over space and time in order to reach that conclusion. As a simpler instance of the problem, consider the cup-and-balls magic routine1, or the gamblingbased shell game2, as shown in Fig. 1 (b). In these games, an operator puts a target object (ball) under one of multiple container objects (cups), and moves them about, possibly revealing the target at various times and recursively containing cups within other cups. The task at the end is to tell which of the cups is covering the ball. Even in its simplest instantiation, one can expect any human or computer system that solves this task to require the ability to model state of the world over long temporal horizons, reason about occlusion, understand the spatiotemporal implications of containment, etc. 
∗Now at Facebook AI Research
1 https://en.wikipedia.org/wiki/Cups_and_balls
2 https://en.wikipedia.org/wiki/Shell_game
An important aspect of both our motivating examples is the adversarial nature of the task, where the operator in control is trying to make the observer fail. Needless to say, a frame by frame prediction model would be incapable of solving such tasks. Given these motivating examples, why don’t spatiotemporal models dramatically outperform their static counterparts for video understanding? We posit that this is due to limitations of existing video benchmarks. Even though video datasets have evolved from the small regime with tens of labels (Soomro et al., 2012; Kuehne et al., 2011; Schuldt et al., 2004) to large with hundreds of labels (Sigurdsson et al., 2016; Kay et al., 2017), tasks have remained highly correlated to the scene and object context. For example, it is trivial to recognize a swimming action given a swimming pool in the background (He et al., 2016b). This is further reinforced by the fact that state of the art pose-based action recognition models (Yan et al., 2018) are outperformed by simpler frame-level models (Wang et al., 2016b) on the Kinetics (Kay et al., 2017) benchmark, with a difference of nearly 45% in accuracy! Sigurdsson et al. also found similar results for their Charades (Sigurdsson et al., 2016) benchmark, where adding ground truth object information gave the largest boosts to action recognition performance (Sigurdsson et al., 2017). In this work, we take an alternate approach to developing a video understanding dataset. Inspired by the recent CLEVR dataset (Johnson et al., 2017) (that explores spatial reasoning in tabletop scenes) and inspired by the adversarial parlor games above (that require temporal reasoning), we introduce CATER, a diagnostic dataset for Compositional Actions and TEmporal Reasoning in dynamic tabletop scenes. We define three tasks on the dataset, each with an increasingly higher level of complexity, but set up as classification problems in order to be comparable to existing benchmarks for easy transfer of existing models and approaches. Specifically, we consider primitive action recognition, compositional action recognition, and adversarial target tracking under occlusion and containment. However, note that this does not limit the usability of our dataset to these tasks, and we provide full metadata with the rendered videos that can be used for more complex, structured prediction tasks like detection, tracking, forecasting, and so on. Our dataset does not model an operator (or hand) moving the tabletop objects, though this could be simulated as well in future variants, as in (Rogez et al., 2015). Being synthetic, CATER can easily be scaled up in size and complexity. It also allows for detailed model diagnostics by controlling various dataset generation parameters. We use CATER to benchmark state-of-the-art video understanding models (Wang et al., 2018; 2016b; Hochreiter & Schmidhuber, 1997), and show even the best models struggle on our dataset. We also uncover some insights into the behavior of these models by changing parameters such as the temporal duration of an occlusion, the degree of camera motion, etc., which are difficult to both tune and label in real-world video data.
2 RELATED WORK Spatiotemporal networks: Video understanding for action recognition has evolved from iconic hand-designed models (Wang & Schmid, 2013; Laptev, 2005; Wang et al., 2011) to sophisticated spatiotemporal deep networks (Carreira & Zisserman, 2017; Simonyan & Zisserman, 2014; Girdhar et al., 2017; Wang et al., 2018; Xie et al., 2017; Tran et al., 2018; 2015). While similar developments in the image domain have led to large improvements on tasks like classification (Szegedy et al., 2016; He et al., 2016a; Huang et al., 2017) and localization (He et al., 2017; Papandreou et al., 2017), video models have struggled to out-perform previous hand-crafted descriptors (Wang & Schmid, 2013). Even within the set of deep video architectures, models capable of temporal modeling, such as RNNs (Karpathy et al., 2014) and 3D convolutions (Tran et al., 2015; Varol et al., 2017a), have not shown significantly better performance than much simpler, per-frame prediction models, such as variants of two-stream architectures (Wang et al., 2016b; Simonyan & Zisserman, 2014). Though some recent works have shown improvements by merging image and video models by inflating 2D models to 3D (Carreira & Zisserman, 2017; Feichtenhofer et al., 2016), simple 2D models (Wang et al., 2016b) were still among the top performers in the Kinetics Challenge at CVPR’17.
Table 1. Columns: Dataset, Size, Len, Task, #cls, TO, STR, LTR, CSB.
UCF101 (Soomro et al., 2012)               13K   7s      cls    101     ✗ ✗ ✗ ✗
HMDB51 (Kuehne et al., 2011)               5K    4s      cls    51      ✗ ✗ ✗ ✗
Kinetics (Kay et al., 2017)                300K  10s     cls    400     ✗ ✓ ✗ ✗
AVA (Gu et al., 2018)                      430   15m     det    80      ✗ ✓ ✗ ✗
VLOGs (Fouhey et al., 2018)                114K  10s     cls    30      ✗ ✓ ✗ ✗
DAHLIA (Vaquette et al., 2017)             51    39m     det    7       ✓ ✓ ✓ ✗
TACoS (Regneri et al., 2013)               127   6m      align  -       ✓ ✓ ✓ ✗
DiDeMo (Anne Hendricks et al., 2017)       10K   30s     align  -       ✓ ✓ ✓ ✗
Charades (Sigurdsson et al., 2016)         10K   30s     det    157     ✓ ✓ ✗ ✗
Something Something (Goyal et al., 2017)   108K  4s      cls    174     ✓ ✓ ✗ ✓
Diving48 (Li et al., 2018)                 18K   5s      cls    48      ✓ ✓ ✗ ✓
Cooking (Rohrbach et al., 2012a)           44    3-41m   cls    218     ✓ ✓ ✗ ✓
IKEA (Toyer et al., 2017)                  101   2-4m    gen    -       ✓ ✓ ✓ ✓
Composite (Rohrbach et al., 2012b)         212   1-23m   cls    44      ✓ ✓ ✓ ✓
TFGIF-QA (Jang et al., 2017)               72K   3s      qa     -       ✓ ✓ ✗ ✗
MovieQA (Tapaswi et al., 2016)             400   200s    qa     -       ✓ ✓ ✓ ✗
Robot Pushing (Finn et al., 2016)          57K   1s      gen    -       ✓ ✓ ✗ ✓
SVQA (Song et al., 2018)                   12K   4s      qa     -       ✓ ✓ ✗ ✓
Moving MNIST (Srivastava et al., 2015)     -     2s      gen    -       ✓ ✓ ✗ ✓
Flash MNIST (Long et al., 2018)            100K  2s      cls    1024    ✗ ✓ ✗ ✓
CATER (ours)                               5.5K  10s     cls    36-301  ✓ ✓ ✓ ✓
Video action understanding datasets: There has been significant effort put into collecting video benchmarks. One line of attack employs human actors to perform scripted actions. This is typically done in controlled environments (Schuldt et al., 2004; Shahroudy et al., 2016; Ionescu et al., 2014), but recent work has pursued online crowd sourcing (Goyal et al., 2017; Sigurdsson et al., 2016). Another direction collects videos from movies and online sharing platforms. Many popular video benchmarks follow this route for diverse, in-the-wild videos, such as UCF-101 (Soomro et al., 2012), HMDB-51 (Kuehne et al., 2011) and more recently Kinetics (Kay et al., 2017) and VLOGs (Fouhey et al., 2018). As discussed earlier, such datasets struggle with the strong coupling of actions with scenes and objects. Our underlying thesis is that the field of video understanding is hampered by such biases because they favor image-based baselines.
While some recent work (Goyal et al., 2017; Li et al., 2018) attempts to control for this bias, it still remains a challenge for long-term reasoning tasks. One might argue that since such biases are common in the visual world, video benchmarks should reflect them. We take the view that a diverse set of benchmarks are needed to enable comprehensive diagnostics and validation of the state-of-affairs in video understanding. Table 1 shows that CATER fills a missing gap in the benchmark landscape, most notably because of its size/video length, label distribution, relative resilience to object and scene bias, and diagnostic abilities. Synthetic data in computer vision: Our work, being synthetically generated, is also closely related to other works in using synthetic data for computer vision applications. There has been a large body of work in this direction, with the major focus on using synthetic training data for real world applications. This includes semantic scene understanding (Dosovitskiy et al., 2017; Shah et al., 2018; Richter et al., 2017), 3D scene understanding (Girdhar et al., 2016; Su et al., 2015; Wu et al., 2016; Song et al., 2017), human understanding (Varol et al., 2017b; De Souza et al., 2017), optical flow (Butler et al., 2012; Mayer et al., 2016) and navigation, RL or embodied learning (Wu et al., 2018; Kolve et al., 2017; Kempka et al., 2016; Mnih et al., 2013). Our work, on the other hand, attempts to develop a benchmark for video based action understanding. Similar attempts have been made for scene understanding through abstract scenes (Zitnick et al., 2016), with more recently focusing on building a complex reasoning benchmark, CLEVR (Johnson et al., 2017). In the video domain, benchmarks such as Flash-MNIST (Long et al., 2018), Moving MNIST (Srivastava et al., 2015) and SVQA (Song et al., 2018) have been proposed. Concurrent to us, CLEVRER (Yi et al., 2020), PHYRE (Bakhtin et al., 2019), COPHY (Baradel et al., 2020) and IntPhys (Riochet et al., 2018) benchmarks have been proposed with a focus on causal physical reasoning through QA, RL, prediction and ranking interfaces respectively. On the other hand, CATER focuses on spatiotemporal video reasoning tasks building upon CLEVR, with a simple classification interface, making it easily amenable for existing video understanding systems. Object tracking: Detecting and tracking objects has typically been used as an initial representation for long-term video and activity understanding (Shet et al., 2005; Hongeng et al., 2004; Lavee et al., 2009). Extensions include adversarial tracking, where the objects are designed to be hidden from plain view. It has typically been used for tasks such as determining if humans are carrying an object (Dondera et al., 2013; Ferrando et al., 2006) or abandoned / exchanging objects (Tian et al., 2011; Li et al., 2006). We embrace this direction of work and include state-of-the-art deep trackers (Zhu et al., 2018) in our benchmark evaluation. 3 THE CATER DATASET CATER provides a video understanding dataset that requires long term temporal reasoning to be solved. Additionally, it provides diagnostic tools that can evaluate video models in specific scenarios, such as with or without camera motion, with varying number of objects and so on. This control over the dataset parameters is achieved by synthetically rendering the data. 
These videos come with a ground truth structure that can be used to design various different video understanding tasks, including but not limited to object localization and spatiotemporal action composition. Unlike existing video understanding benchmarks, this dataset is free of object or scene bias, as the same set of simple objects are used to render the videos. Fig. 2 describes the dataset and the associated tasks. We provide sample videos from the dataset in the supplementary video. Objects: The CATER universe is built upon CLEVR (Johnson et al., 2017), inheriting most of the standard object shapes, sizes, colors and materials present in it. This includes three object shapes (cube, sphere, cylinder), in three sizes (small, medium, large), two materials (shiny metal and matte rubber) and eight colors, as well as a large “table” plane on which all objects are placed. In addition to these objects, we add two new object shapes: inverted cones and a special object called a ‘snitch’. Cones also come in the same set of sizes, materials and colors. The ‘snitch’ is a special object shaped like three intertwined toruses in metallic gold color. Actions: We define four atomic actions: ‘rotate’, ‘pick-place’, ‘slide’ and ‘contain’; a subset of which is afforded by each object. The ‘rotate’ action means that the object rotates by 90°about its Y (or horizontal) axis, and is afforded by cubes, cylinders and the snitch. The ‘pick-place’ action means the object is picked up into the air along the Y axis, moved to a new position, and placed down. This is afforded by all objects. The ‘slide’ action means the object is moved to a new location by sliding along the bottom surface, and is also afforded by all objects. Finally, ‘contain’ is a special operation, only afforded by the cones, in which a cone is pick-placed on top of another object, which may be a sphere, a snitch or even a smaller cone. This allows for recursive containment, as a cone can contain a smaller cone that contains another object. Once a cone ‘contains’ an object, it is constrained to only ‘slide’ actions and effectively slides all objects contained within the cone. This holds until the top-most cone is pick-placed to another location, effectively ending the containment for that top-most cone. Animation process: We start with an initial setup similar to CLEVR. A random number (N ) of objects with random parameters are spawned at random locations at the beginning of the video. They exist on a 6× 6 portion of a 2D plane with the global origin in the center. In addition to the random objects, we ensure that every video has a snitch and a cone. For the purposes of this work, we render 300-frame 320x240px videos, at 24 FPS, making it comparable to standard benchmarks (Soomro et al., 2012; Kuehne et al., 2011; Kay et al., 2017). We split the video into 30-frame slots, and each action is contained within these slots. At the beginning of each slot, we iterate through up to K objects in a random order and attempt to add an action afforded by that object one by one without colliding with another object. As we describe later, we use K = 2 for our initial tasks and K = N for the final task. For each action, we pick a random start and end time from within the 30-frame slot. To further add to the diagnostic ability of this dataset, we render an additional set of videos with camera motion, with all other aspects of the data similarly distributed as the static camera case. 
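To make the animation procedure above concrete, here is a highly simplified sketch of the per-slot action scheduling; the object representation, the collision check, and the omission of containment constraints are placeholders of ours, and the released generation code is more involved.

```python
import random

# Actions afforded by each shape, as described in the text.
ACTIONS = {'cube': ['rotate', 'pick-place', 'slide'],
           'sphere': ['pick-place', 'slide'],
           'cylinder': ['rotate', 'pick-place', 'slide'],
           'cone': ['pick-place', 'slide', 'contain'],
           'snitch': ['rotate', 'pick-place', 'slide']}

def collides(obj, action, start, end, events):
    """Placeholder: the real generator checks for object collisions and containment rules."""
    return False

def schedule_actions(objects, num_frames=300, slot=30, K=2):
    """For each 30-frame slot, try to add one action for up to K randomly ordered objects."""
    events = []
    for slot_start in range(0, num_frames, slot):
        objs = list(objects)
        random.shuffle(objs)
        for obj in objs[:K]:
            action = random.choice(ACTIONS[obj['shape']])
            start = random.randint(slot_start, slot_start + slot - 2)
            end = random.randint(start + 1, slot_start + slot - 1)
            if not collides(obj, action, start, end, events):
                events.append((obj['name'], action, start, end))
    return events
```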
For this, the camera is always kept pointed towards the global origin, and moved randomly between a predefined set of 3D coordinates. These coordinates include X and Y ∈ {−10, 10} and Z ∈ {8, 10, 12}. Every 30 frames, we randomly pick a new location from the Cartesian product of X,Y, Z, and move the camera to that location over the next 30 frames. However, we do constrain the camera to not change both X and Y coordinates at the same time, as that causes a jarring viewpoint shift as the camera passes over the (0, 0, Z) point. Also, we ensure all the camera motion videos start from the same viewpoint, to make it easy to register the axes locations for localization task. Spatiotemporal compositions: We wish to label our animations with the atomic actions present, as well as their compositions. Atomic actions have a well-defined spatiotemporal footprint, and so we can define composites using spatial relations (“a cylinder is rotating behind a sliding red ball”), similar to CLEVR. Unique to CATER is the ability to designate temporal relationships (“a cylinder rotates before a ball is picked-and-placed”). Because atomic actions occupy a well-defined temporal extent, we need temporal logic that reasons about relations between intervals rather than instantaneous events. While the latter can be dealt with timestamps, the former can be described with Allen’s interval algebra with thirteen basic relations (Figure 3) along with composition operations. For simplicity, we group those into three broad relations. However, our dataset contains examples of all such interval relations and can be used to explore fine-grained temporal relationships. 3.1 TASKS DEFINED ON THE DATASET Given this CATER universe with videos, ground truth objects and their actions at any time point, we can define arbitrarily complex tasks for a video understanding system. Our choice of tasks is informed by two of the main goals of video understanding: 1) Recognizing the states of the actor, including spatiotemporal compositions of those atomic actions. For example, a spatiotemporal composition of atomic human body movements can be described as an exercise or dance routine. And 2) Recognizing the effect of those actions on the state of the world. For example, an action involving picking and placing a cup would change the position of the cup and any constituent objects contained within it, and understanding this change in the world state would implicitly require understanding the action itself. Given these two goals, we define three tasks on CATER. Each has progressively higher complexity, and tests for a higher level reasoning ability. To be consistent with existing popular benchmarks (Soomro et al., 2012; Kuehne et al., 2011; Kay et al., 2017; Sigurdsson et al., 2016), we stick to standard single or multi-label classification setup, with standard evaluation metrics, as described next. For each of these tasks, we start by rendering 5500 total videos, to be comparable in size with existing popular benchmarks (Kuehne et al., 2011). Since tasks 1 and 2 (defined next) explicitly require recognizing individual actions, we use K = 2 for the videos rendered to keep the number of actions happening in any given video small. For task 3, we set K = N as the task is to recognize the end effect of actions, and not necessarily the actions themselves. We split the data randomly in 70:30 ratio into a training and test set. 
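As a concrete illustration of the camera-motion protocol just described, the sketch below (not the actual rendering code; the uniform waypoint choice, the fixed starting viewpoint, and the linear interpolation are assumptions) samples a new camera waypoint every 30 frames from the Cartesian product of the listed coordinates, while rejecting moves that change both X and Y at once.

```python
import itertools
import random

# Candidate camera positions: Cartesian product of the coordinates given in the text.
WAYPOINTS = list(itertools.product([-10, 10], [-10, 10], [8, 10, 12]))

def next_waypoint(current):
    """Pick a new waypoint that does not change both X and Y at the same time."""
    candidates = [p for p in WAYPOINTS
                  if p != current and not (p[0] != current[0] and p[1] != current[1])]
    return random.choice(candidates)

def camera_trajectory(num_frames=300, slot=30, start=(-10, -10, 8)):
    """Move the camera to a new waypoint over each 30-frame slot (start point assumed)."""
    positions, current = [], start
    for _ in range(num_frames // slot):
        target = next_waypoint(current)
        for t in range(slot):
            alpha = (t + 1) / slot
            positions.append(tuple(c + alpha * (g - c) for c, g in zip(current, target)))
        current = target
    return positions  # the camera is kept pointed at the global origin during rendering
```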
We similarly render a same size dataset with camera motion, and define tasks and splits in the same way as for the static camera. With the code release we also provide a further split of train set into a validation set (80:20). While we focus on the following tasks in this paper, note that the data is amenable to many other tasks, for instance (Malinowski et al., 2020) uses CATER for video reconstruction. Task 1: Atomic action recognition. This first task on CATER is primarily designed as a simple debugging task, which should be easy for contemporary models to solve. Given the combinations of object shapes and actions afforded by them, we define 14 classes such as ‘slide(cone)’, ‘rotate(cube)’ and so on. Since each video can have multiple actions, we define it as a multi-label classification problem. The task is to produce 14 probability values, denoting the likelihood of that action happening in the video. The performance is evaluated using average precision per-class. Final dataset-level performance is computed by mean over all classes, to get mean average precision (mAP). This is a popular metric used in other multi-label action classification datasets (Sigurdsson et al., 2016; Gu et al., 2018). Task 2: Compositional action recognition. While recognizing individual objects and motions is important, it is clearly not enough. Real world actions tend to be composite in nature, and humans have no difficulty recognizing them in whole or in parts. To that end, we construct a compositional action recognition task through spatiotemporal composition of the basic actions used in Task 1. For simplicity, we limit composites to pairs of 14 atomic actions, where the temporal relation is grouped into broad categories of ‘before’, ‘during’ and ‘after’ as shown in Figure 3. Combining all possible atomic actions with the three possible relations, we get a total of 14 × 14 × 3 = 588 classes, and removing duplicates (such as ‘X after Y’ is a duplicate of ‘Y before X’), leaves 301 classes. Similar to task 1, multiple compositions can be active in any given video, so we set it up as a multi-label classification problem, evaluated using mAP. If certain compositions never occur in the dataset, those are ignored for the final evaluation. Task 3: Snitch localization. The final, and the flagship task in CATER, tests models’ ability to recognize the effect of actions on the environment. Just as in the case of cup-and-ball trick, the ability of a model to recognize location of objects after some activity can be thought of as an implicit evaluation of its ability to understand the activity itself. The task now is to predict the location of the special object introduced above, the Snitch. While it may seem trivial to localize it from the last frame, it may not always be possible to do that due to occlusions and recursive containments. The snitch can be contained by other objects (cones), which can further be contained by other larger cones. All objects move together until ‘uncontained’, so the final location of the snitch would require long range reasoning about these interactions. For simplicity, we pose this as a classification problem by quantizing the 6 × 6 grid into 36 cells and asking which cell the snitch is in, at the end of the video. We ablate the grid size in experiments. Since the snitch can only be at a single location at the end of the video, we setup the problem as a single label classification, and evaluate it using standard percentage accuracy metrics such as top-1 and top-5 accuracy. 
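As an aside on the Task 2 label space defined above, the composite classes can be enumerated directly. The sketch below is an illustration rather than the released generation code; treating 'X after Y' as identical to 'Y before X', and assuming the broad 'during' relation is symmetric, recovers the 588 raw triples and the 301 deduplicated classes quoted above.

```python
ATOMIC = (['rotate(%s)' % s for s in ['cube', 'cylinder', 'snitch']] +
          ['pick-place(%s)' % s for s in ['cube', 'sphere', 'cylinder', 'cone', 'snitch']] +
          ['slide(%s)' % s for s in ['cube', 'sphere', 'cylinder', 'cone', 'snitch']] +
          ['contain(cone)'])
assert len(ATOMIC) == 14

def canonical(a, rel, b):
    """Canonical form of a composite: 'X after Y' is the same class as 'Y before X'."""
    if rel == 'after':
        a, rel, b = b, 'before', a
    if rel == 'during':          # assumption: the broad 'during' bucket is symmetric
        a, b = sorted([a, b])
    return (a, rel, b)

composites = {canonical(a, r, b)
              for a in ATOMIC for b in ATOMIC for r in ['before', 'during', 'after']}
print(len(ATOMIC) ** 2 * 3, len(composites))   # 588 raw triples, 301 unique classes
```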
However, one issue with this metric is that it would penalize predictions where the snitch is slightly over the cell boundaries. While the top-5 metric is somewhat robust to this issue, we also report mean L1 distance of the predicted grid cell from the ground truth, as a metric that is cognizant of the grid structure in this task. Hence, it would penalize confusion between adjacent cells less than those between distant cells. The data is also amenable to a purely regression-style evaluation, though we leave that to future work. 4 EXPERIMENTS We now experiment with CATER using recently introduced state of the art video understanding and temporal reasoning models (Carreira & Zisserman, 2017; Wang et al., 2018; 2016b; Hochreiter & Schmidhuber, 1997). I3D (Carreira & Zisserman, 2017), called R3D when implemented using a ResNet (He et al., 2016a) in (Wang et al., 2018), brings the best of image models to the video domain by inflating them into 3D for spatiotemporal feature learning. Non-local networks (Wang et al., 2018) further build upon that to add a spatiotemporal interaction layer that gives strong improvements and out-performs many multi-stream architectures (that use audio, flow, etc.) on the Kinetics and Charades benchmarks. For our main task, snitch localization, we also experiment with a 2D-conv based approach, Temporal Segment Networks (TSN) (Wang et al., 2016b), which is another top performing method on standard benchmarks (Kay et al., 2017). This approach uses both RGB and flow modalities. All these architectures learn a model for individual frames or short clips, and at test time aggregate the predictions by averaging over those clips. While simple averaging works well enough on most recent datasets (Kay et al., 2017; Soomro et al., 2012; Kuehne et al., 2011), it clearly loses all temporal information and may not be well suited to our set of tasks. Hence, we also experiment with a learned aggregation strategy: specifically using an LSTM (Hochreiter & Schmidhuber, 1997) for aggregation, which is the tool of choice for temporal modelling in various domains including language and audio. We use a common LSTM implementation for aggregating either (Wang et al., 2016b) or (Wang et al., 2018) that operates on the last layer features (before logits). We extract these features for subclips from train and test videos, and train a 2-layer LSTM with 512 hidden units in each layer on the train subclips. The LSTM produces an output at each clip it sees, and we enforce a classification loss at the end, once the model has seen all the clips. At test time we take the prediction from the last clip as the aggregated prediction. We report the LSTM performance averaged over three runs to control for random variation. It is worth noting that LSTMs have been previously used for action recognition in videos (Donahue et al., 2015; Karpathy et al., 2014), however with only marginal success over simple average pooling. As we show later, LSTMs actually perform significantly better on CATER, indicating the importance of temporal reasoning. For task 3, we also experiment with a state-of-the-art visual tracking method (Zhu et al., 2018). We start by using the GT information of the starting position of the snitch, and project it to screen coordinates using the render camera parameters. We define a fixed-size box around it to initialize the tracker, and run it until the end of the video.
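A rough sketch of this tracker initialization step is given below; it uses a generic pinhole projection, and the camera-matrix interfaces and the box size in pixels are assumptions rather than the actual pipeline.

```python
import numpy as np

def project_to_screen(point_3d, K, R, t):
    """Project a 3D world point into pixel coordinates with a pinhole camera model.
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation from the renderer."""
    p_cam = R @ np.asarray(point_3d, dtype=float) + t
    u, v, w = K @ p_cam
    return u / w, v / w

def init_tracker_box(center, size=30):
    """Fixed-size box (x1, y1, x2, y2) around the projected snitch start position.
    The size in pixels is an arbitrary choice here."""
    cx, cy = center
    return (cx - size / 2, cy - size / 2, cx + size / 2, cy + size / 2)
```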
At the last frame, we project the center point of the tracked box to the 3D plane (and eventually, the class label) by using a homography transformation between the image and the 3D plane. This provides a more traditional, symbolic reasoning baseline for our dataset, and as we show in the results, is also not enough to solve the task. Finally, we do note that many other video models have been proposed in the literature involving 2.5D convolutions (Tran et al., 2018; Xie et al., 2017), VLAD-style aggregation (Girdhar et al., 2017; Miech et al., 2017) and other multi-modal architectures (Wang et al., 2016a; Bian et al., 2017). We focus on the most popular and best performing models, and leave a more comprehensive study to future work. A random baseline is also provided for all tasks, computed as the average performance of random scores passed into the evaluation functions. Implementation details for all baselines are provided in the supplementary and code will be released. Task 1: Atomic action recognition: Table 2 (a) shows the performance of R3D with and without the non-local (NL) blocks, using different numbers of frames in the clips. We use a fixed sampling rate of 8, but experiment with different clip sizes. Adding more frames helps significantly in this case. Given the ease of the task, R3D obtains fairly strong performance for the static camera, but not so much for the moving camera, suggesting potential future work in building models agnostic to camera motion.
Table 2: Performance on the (a) 14-way atomic action recognition, (b) 301-way compositional action recognition, and (c) 36-way localization task, for different methods.
(a) Task 1 (Atomic). Columns: Camera, Model, NL, #frames, mAP.
-       Random    -   -    56.2
Static  R3D           8    89.0
Static  R3D       X   8    88.8
Static  R3D           32   98.8
Static  R3D       X   32   98.9
Moving  R3D           8    82.4
Moving  R3D       X   8    82.7
Moving  R3D           32   90.5
Moving  R3D       X   32   90.2
(b) Task 2 (Compositional). Columns: Camera, Model, NL, #frames, mAP (Avg), mAP (LSTM).
-       Random    -   -    19.5   19.5
Static  R3D           8    39.5   52.1
Static  R3D           32   44.2   53.4
Static  R3D       X   32   45.9   53.1
Static  R3D           64   43.7   43.5
Moving  R3D           32   40.9   43.2
Moving  R3D       X   32   41.1   43.5
(c) Task 3 (Localization). Columns: Camera, Model, #frames, SR, Avg (Top 1 / Top 5 / L1), LSTM (Top 1 / Top 5 / L1).
-       Random       -    -   2.8 / 13.8 / 3.9     2.8 / 13.8 / 3.9
Static  Tracking     -    -   33.9 / - / 2.4       33.9 / - / 2.4
Static  TSN (RGB)    1    -   7.4 / 27.0 / 3.9     15.3 / 50.0 / 3.0
Static  TSN (RGB)    3    -   14.1 / 38.5 / 3.2    25.6 / 67.2 / 2.6
Static  TSN (Flow)   1    -   6.2 / 21.7 / 4.4     7.3 / 26.9 / 4.1
Static  TSN (Flow)   3    -   9.6 / 32.2 / 3.7     14.0 / 43.5 / 3.2
Static  R3D          8    8   24.0 / 54.8 / 2.7    34.2 / 64.6 / 1.8
Static  R3D          16   8   26.2 / 56.3 / 2.6    24.2 / 48.9 / 2.5
Static  R3D          32   8   28.8 / 68.7 / 2.6    45.5 / 67.7 / 1.6
Static  R3D          64   8   57.4 / 78.4 / 1.4    60.2 / 81.8 / 1.2
Static  R3D + NL     32   8   26.7 / 68.9 / 2.6    46.2 / 69.9 / 1.5
Moving  R3D          32   8   23.4 / 61.1 / 2.5    28.6 / 63.3 / 1.7
Moving  R3D + NL     32   8   27.5 / 68.8 / 2.4    38.6 / 70.2 / 1.5
Table 3: Long term reasoning. Comparing the best reported performance of standard models on existing datasets and CATER (task 3). Unlike previous benchmarks, (1) temporal modeling using LSTM helps and (2) local temporal cues (flow) are not effective by themselves on CATER. 2S here refers to ‘Two Stream’. TSN performance from (Xiong, 2017; 2016). Columns: Models, Kinetics, UCF-101, HMDB-51, CATER.
1 frame (RGB) (Donahue et al., 2015)      -     67.4  -     7.4
LSTM (RGB) (Donahue et al., 2015)         -     68.2  -     15.3
TSN (RGB) (Wang et al., 2016b)            72.5  93.2  51.0  14.1
TSN (Flow) (Wang et al., 2016b)           62.8  95.3  64.2  9.6
2S I3D (Carreira & Zisserman, 2017)       75.7  98.0  80.7  -
2S R(2+1)D (Tran et al., 2018)            75.4  97.3  78.7  -
R3D(+NL) (Wang et al., 2018)              77.7  -     -     57.4
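The L1 numbers in Table 2 (c) above are grid-aware distances; a small sketch of how such metrics can be computed is given below (the row-major cell indexing and averaging details are our assumptions).

```python
import numpy as np

def grid_l1(pred_cell, gt_cell, grid=6):
    """L1 distance between two cells of a grid x grid board (row-major indices)."""
    pr, pc = divmod(int(pred_cell), grid)
    gr, gc = divmod(int(gt_cell), grid)
    return abs(pr - gr) + abs(pc - gc)

def localization_metrics(pred_cells, gt_cells, grid=6):
    """Top-1 accuracy and mean grid-L1 over a set of videos."""
    pred_cells, gt_cells = np.asarray(pred_cells), np.asarray(gt_cells)
    top1 = float((pred_cells == gt_cells).mean())
    mean_l1 = float(np.mean([grid_l1(p, g, grid) for p, g in zip(pred_cells, gt_cells)]))
    return top1, mean_l1

# Adjacent-cell confusions cost 1; the farthest confusion on a 6 x 6 grid costs 10.
```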
Task 2: Compositional action recognition: Next we experiment with the compositional action recognition task. The training and testing are done in the same way as in Task 1, except that the model now predicts confidences over 301 classes. As evident from Table 2 (b), this task is harder for the existing models, presumably as recognizing objects and simple motions would no longer solve it, and models need to reason about spatiotemporal compositions as well. It is interesting to note that non-local blocks now add to the final performance, which was not the case for Task 1, suggesting modeling spatiotemporal relations is more useful for this task. LSTM aggregation also helps quite a bit as the model can learn to reason about long-range temporal compositions. As expected, a moving camera makes the problem harder. Task 3: Snitch localization: Finally we turn to the localization task. Since this is set up as a single-label classification problem, we use softmax cross entropy loss to train and classification accuracy for evaluation. For tracking, no training is required as we use the pre-trained model from (Zhu et al., 2018) and run it on the validation videos. Table 2 (c) shows the performance of various methods, evaluated at different clip lengths and frame rates. For this task we also experiment with TSN (Wang et al., 2016b), though it ends up performing significantly worse than R3D. Note that this contrasts with standard video datasets (Kay et al., 2017), where it tends to perform similarly to R3D (Xiong, 2017). We also experiment with the flow modality and observe it obtains even lower performance, which is expected as this task requires recognizing objects, which is much harder from flow. Again, note that flow models obtain similar if not better performance than RGB on standard datasets (Kay et al., 2017; Xiong, 2017). We also note higher performance when considering longer clips with a higher sample rate. This is not surprising as a task like this would require long-term temporal reasoning, which is aided by looking at longer videos. This is also reinforced by the observation that using LSTM for aggregation leads to a major improvement in performance for most models. Finally, the tracking approach also only solves about a third of the videos, as even the state of the art tracker ends up drifting due to occlusions and ‘contain’ operations. In Table 4, we ablate the performance with respect to the underlying grid granularity, with 6 × 6 being the default used in Table 2 (c). We observe tracking is a stronger baseline as the localization task gets more fine-grained. Finally, in Table 3, we compare the performance of some of these models on existing benchmarks and CATER. Analysis: Having close control over the dataset generation process enables us to perform diagnostics impossible with any previous dataset. We use the R3D+NL, 32-frame, static camera model with average (or LSTM, when specified) pooling for all following visualizations. We first analyze the aggregate performance of our model over multiple bins in Figure 4, and observe some interesting phenomena. (a) Performance drops if the snitch keeps moving until the end. This makes sense: if the snitch reaches its final position early in the video, models have a lot more frames to reinforce their hypothesis of its final location. Between LSTM and avg-pooling, LSTM is much better able to handle the motion of the snitch, as expected.
Perhaps not surprisingly, the tracker is much less affected by snitch movement, indicating the power of such classic computational pipelines for long-term spatiotemporal understanding. (b) Performance drops if the snitch is contained at the end. Being contained in the final frame makes the snitch harder to spot and track (just like the cups and ball game!), hence the lower performance. Next, we visualize the videos that our model gets right or wrong. We sort all validation videos based on the softmax confidence score for the ground truth class, and visualize the top and bottom six in Figure 5 (full video in the supplementary). We find that the easiest videos for the avg-pooled model tend to be ones with little snitch motion, i.e., the object stays at the position it starts off in. On the other hand, the LSTM-aggregated model fares better with snitch motion, as long as it happens early in the video. The hardest videos for both tend to be ones with sudden motion of the snitch towards the end of the video, as shown by the bright golden trail denoting the motion towards the end (better viewed in the supplementary video). These observations are supported by the quantitative plots in Figure 4 (a) and (c). 5 CONCLUSION We use CATER to analyze several leading network designs on hard spatiotemporal tasks. We find most models struggle on our proposed dataset, especially on the snitch localization task, which requires long-term reasoning. Interestingly, average pooling of clip predictions and short temporal cues (optical flow) perform rather poorly on CATER, unlike most previous benchmarks. Such temporal reasoning challenges are common in the real world (e.g., Fig. 1 (a)), and solving those would be the cornerstone of the next improvements in machine video understanding. We believe CATER would serve as an intermediary in building systems that will reason over space and time to understand actions. That said, CATER is, by no means, a complete solution to the video understanding problem. Like any other synthetic or simulated dataset, it should be considered in addition to real world benchmarks. While we have focused on classification tasks for simplicity, our fully-annotated dataset can be used for much richer parsing tasks such as spacetime action localization. One of our findings is that while high-level semantic tasks such as activity recognition may be addressable with current architectures given a richly labeled dataset, “mid-level” tasks such as tracking still pose tremendous challenges, particularly under long-term occlusions and containment. We believe addressing such challenges will enable broader temporal reasoning tasks that capture intentions, goals, and causal behavior. ACKNOWLEDGMENTS The authors would like to thank Ishan Misra for many helpful discussions and help with systems. This research is based upon work supported in part by NSF Grant 1618903. B TRAIN/VAL DISTRIBUTIONS Figure 6 shows the data distribution over classes for each of the tasks. C VIDEO VISUALIZATION The supplementary video3 visualizes: 1. Sample videos from the dataset (with and without camera motion). 2. Easiest and hardest videos for task 3. We rank all validation videos for task 3 based on their softmax probability for the correct class. We show the top-6 (easiest) and bottom-6 (hardest) for the 32-frame stride-8 non-local + LSTM model. We observe the hardest ones involve sudden motion towards the end of the video. This reinforces the observation made in Figure 5(a) in the main paper, that videos where the snitch keeps moving till the end are the hardest.
If the snitch stops moving earlier, models have more evidence for the final location of the snitch, making the task easier. 3. Tracking results. We visualize the results of tracking the snitch over the video as one approach to solving task 3. We observe that while it works in simple scenarios, it fails when there is heavy occlusion or there are complex ‘contain’ operations. 4. Model bottom-up attention. We visualize where the model looks for Task 3. As suggested in (Malinowski et al., 2018), we visualize the l2-norm of the last layer features from our 32-frame stride-8 non-local model on the center video crop. The deep red color denotes a large norm value at that spatiotemporal location. We find that the model automatically learns to focus on the snitch towards the end of clips, which makes sense as that is the most important object for solving the localization task.
3 https://rohitgirdhar.github.io/CATER/assets/suppl/video.mp4
1. What is the focus and contribution of the paper on video understanding? 2. What are the strengths of the proposed synthetic dataset (CATER) compared to other existing video datasets? 3. How does the reviewer assess the effectiveness of the three tasks customized for temporal understanding? 4. What are the weaknesses or limitations of the paper regarding its claims and comparisons with other works? 5. Are there any concerns about the generalizability of the results achieved by the state-of-the-art video understanding models on CATER?
Review
This paper proposes a new synthetic dataset (CATER) for video understanding. The authors argue that since current video datasets are heavily biased towards static scenes and object structures, it is unclear whether modern spatiotemporal video models can learn to reason over the temporal dimension. In order to address this problem, they design this fully observable synthetic dataset, which is built upon CLEVR, along with three tasks that are customized for temporal understanding. They further conduct a variety of experiments to benchmark state-of-the-art video understanding models and show how those models more or less struggle on temporal reasoning. Overall this paper is well-written and easy to follow. The problem is well-motivated, and the claims are mostly supported. The diagnosis in this paper provides useful insights that could be contributive to both the vision and learning communities. My primary concern is to what extent the new dataset (CATER) can add to existing video datasets that are also explicitly designed for long-term spatiotemporal reasoning, such as the video VQA datasets TGIF-QA [1] and SVQA [2]. In addition to the comparison between CATER and three action recognition datasets (Kinetics/UCF101/HMDB51) as presented in Table 3, it would be more interesting to see how video understanding models that are specifically designed for those video VQA datasets would perform on CATER. [1] Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2758–2766, 2017. [2] Xiaomeng Song, Yucheng Shi, Xin Chen, and Yahong Han. Explore multi-step reasoning in video question answering. In 2018 ACM Multimedia Conference on Multimedia Conference, pages 239–247. ACM, 2018.
ICLR
Title Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations Abstract Spurious correlations pose a fundamental challenge for building robust machine learning models. For example, models trained with empirical risk minimization (ERM) may depend on correlations between class labels and spurious features to classify data, even if these relations only hold for certain data groups. This can result in poor performance on other groups that do not exhibit such relations. When group information is available during training, Sagawa et al. (2019) have shown how to improve worst-group performance by optimizing the worst-group loss (GDRO). However, when group information is unavailable, improving worst-group performance is more challenging. For this latter setting, we propose Correct-NContrast (CNC), a contrastive learning method to train models more robust to spurious correlations. Our motivating observation is that worst-group performance is related to a representation alignment loss, which measures the distance in feature space between different groups within each class. We prove that the gap between worst-group and average loss for each class is upper bounded by the alignment loss for that class. Thus, CNC aims to improve representation alignment via contrastive learning. First, CNC uses an ERM model to infer the group information. Second, with a careful sampling scheme, CNC trains a contrastive model to encourage similar representations for groups in the same class. We show that CNC significantly improves worst-group accuracy over existing state-of-the-art methods on popular benchmarks, e.g., achieving 7.7% absolute lift in worst-group accuracy on the CelebA data set, and performs almost as well as GDRO trained with group labels. CNC also learns better-aligned representations between different groups in each class, reducing the alignment loss substantially compared to prior methods. 1 INTRODUCTION For many tasks, deep neural networks are negatively affected by spurious correlations—dependencies between observed features and class labels that only hold for certain groups of the data. For example, consider classifying images of cows or camels, where 90% of cow images depict grassy backgrounds. A model may learn to predict the “cow” class based on the background, and incorrectly classify cow images with non-grass backgrounds as camels (Ribeiro et al., 2016; Beery et al., 2018; Kaufman et al., 2012). This illustrates a widespread issue where neural networks can achieve low test error on certain groups, yet high error on others (Blodgett et al., 2016; Buolamwini & Gebru, 2018; Hashimoto et al., 2018; Sagawa et al., 2019). Prior works have shown that this problem is increasingly aggravated as the correlations between class labels and spurious features become stronger (Sagawa et al., 2020) and easier to learn (Arpit et al., 2017; Hermann & Lampinen, 2020). Since spurious correlations arise in many settings, we wish to design robust methods that perform well on all groups. How can we obtain neural networks robust to spurious correlations? If group-defining information (i.e. spurious attributes) is known, a common solution is to minimize the worst-group loss, e.g., with group DRO (GDRO) (Sagawa et al., 2019). However, such information may be expensive to collect, and we may not know the spurious attributes a priori in a given data set (Oakden-Rayner et al., 2020). When group information is unavailable, prior works typically take a two-stage approach. 
They first train an ERM model, and then use this model to infer groups and train a more robust model. For example, Sohoni et al. (2020) find that ERM models still learn group-specific features when trained to predict class labels. After first training an ERM model, they infer groups by clustering the ERM model's representations, and train a new model with GDRO using these inferred groups. [Figure 1: GradCAM visualizations and sampled contrastive batches for ERM vs. our model across the four Waterbirds groups (landbird/waterbird × land/water background).] Creager et al. (2021) identify groups under which an initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective (Arjovsky et al., 2019). With these groups they train a new model with GDRO or IRM. Nam et al. (2020); Liu et al. (2021) observe that ERM models often misclassify data points in minority groups, and thus train another model with re-weighted or upsampled points misclassified by an initial ERM model. While these methods promisingly leverage ERM-learned biases to significantly improve worst-group error without training group labels, there is still a gap between their robust performance and that of methods such as GDRO that use group labels. In this work, we ask how else we can improve model robustness using a trained ERM model, and aim to close this gap by focusing on improving the learned representations of the robust model in the second stage. We support this direction with two key motivations. First, we find that higher worst-group performance consistently correlates with hidden-layer representations exhibiting higher dependence on class labels than spurious attributes. We quantify this correlation using geometric representation alignment (Wang & Isola, 2020), which measures the closeness of samples with the same class but different spurious attributes in the model feature space, and mutual information. This relation consistently holds across various data sets, and explains when prior upweighting methods improve worst-group error over ERM (Fig. 4). Second, we theoretically show that a model's representation alignment for a given class can be used to upper bound the gap between its worst-group and average loss for that class. Thus, if we can improve representation alignment for a class, we can reduce the gap between worst-group and average loss for that class. We thus propose Correct-N-Contrast (CNC), a two-stage procedure using contrastive learning to encourage better representation alignment within each class. In the first stage, we train a regularized ERM model similar to prior work (Liu et al., 2021; Creager et al., 2021), under the premise that ERM predictions help infer group information (i.e., spurious attributes). In the second stage, we wish to improve representation alignment by "pulling together" same-class datapoints and "pushing apart" different-class datapoints, regardless of their individual groups or spurious features. To do so via supervised contrastive learning, we use the heuristic that samples with the same ERM predictions exhibit similar spurious features (and vice versa).
With a randomly sampled anchor, we select samples with the same class but different ERM predictions as "positives" we want to pull together, and samples from different classes but the same ERM prediction as hard "negatives" we want to push apart. Training a second model with this sampling scheme and supervised contrastive learning encourages this model to ignore spurious correlations that the initial ERM model learned, and improves representation alignment between same-class data points. Thus, CNC corrects for the ERM model's mistakes with contrastive learning in the second model. We evaluate CNC on four popular and diverse spurious correlation benchmarks. Among methods that similarly do not assume training group labels, CNC substantially improves worst-group accuracy, obtaining up to 7.7% absolute lift (from 81.1% to 88.8% on CelebA) over the prior state-of-the-art JTT (Liu et al., 2021), and averaging 3.4% lift across the four tasks. We also find that CNC nearly closes the gap in worst-group accuracy with robust training methods that assume training group labels, only falling short of GDRO's worst-group accuracy by 0.8% absolute. Finally, we validate that CNC indeed reduces the alignment loss compared to prior methods. This corresponds to an up to 71.1% smaller gap between worst-group versus average accuracy for data points in the same class.

Contributions. We summarize our contributions as follows:
1. We empirically show that a model's worst-group performance correlates with the model's alignment loss between different groups within a class, and analyze this connection theoretically.
2. We propose CNC, a two-stage contrastive approach to improve representation alignment and thereby learn representations robust to spurious correlations.
3. We validate that CNC significantly improves worst-group accuracy over existing methods on various benchmarks, and learns better-aligned representations less reliant on spurious features.

2 PRELIMINARIES

Problem setup. We present our setting and the loss objectives following Sagawa et al. (2019). Let X = {x1, . . . , xn} and Y = {y1, . . . , yn} be a training data set of size n. Each data point has an observed feature vector xi ∈ X, label yi ∈ Y, and unobserved spurious attribute ai ∈ A. The set of groups G is defined as the set of all combinations of class label and spurious attribute pairs, i.e. G = Y × A. Let C = |Y| be the number of classes and K = |G| be the number of groups. Following the classical supervised learning setting, we assume that each example (xi, yi, ai) is drawn from an unknown joint distribution P. We assume that at least one sample from each group is observed in the training data. Let Pg be the distribution conditioning on (y, a) = g, for any g ∈ G. Given a model fθ : X → R^C and a convex loss ℓ : X × Y → R, let the worst-group loss be:

Lwg(fθ) := max_{g∈G} E_{(x,y,a)∼Pg} [ℓ(fθ(x), y)].  (1)

ERM minimizes the training loss as a surrogate for the expected population loss Lavg:

Lavg(fθ) := E_{(x,y,a)∼P} [ℓ(fθ(x), y)].  (2)

While ERM is the standard way to train neural nets, spurious correlations often cause ERM to obtain high error on minority groups even when average error is low. Group DRO, which minimizes the empirical version of (1), is recognized as a strong baseline for improving worst-group error when the group labels {a1, . . . , an} are available during training (Sagawa et al., 2019). In contrast, we focus on the more challenging setting in which the group labels are not available during training.

Contrastive learning.
We briefly describe contrastive learning (Chen et al., 2020), a central component of our approach. Let fθ be a neural network model with parameters θ. Let the encoder fenc : X → R^d be the feature representation layers of fθ. Let fcls : R^d → R^C be the classification layer of fθ, which maps encoder representations to one-hot label vectors. We learn fenc with the supervised contrastive loss Lsupcon proposed in Khosla et al. (2020). For each anchor x, we sample M positives {x+_i}_{i=1}^{M} and N negatives {x−_i}_{i=1}^{N}. Let y, {y+_i}_{i=1}^{M}, {y−_i}_{i=1}^{N} be the labels and z, {z+_i}_{i=1}^{M}, {z−_i}_{i=1}^{N} be the normalized outputs of fenc for the anchor, positives, and negatives, respectively. With input x mapped to z, the training objective for the encoder is to minimize:

Lsupcon(x; fenc) = E_{x, {x+_i}_{i=1}^{M}, {x−_i}_{i=1}^{N}} [ − (1/M) Σ_{i=1}^{M} log ( exp(z⊤ z+_i / τ) / ( Σ_{m=1}^{M} exp(z⊤ z+_m / τ) + Σ_{n=1}^{N} exp(z⊤ z−_n / τ) ) ) ],  (3)

where τ > 0 is a scalar temperature hyperparameter. Minimizing Eq. 3 leads to z being closer to z+ than z− in feature space. See Sec. 6 for further references related to contrastive learning.
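A minimal sketch of the loss in Eq. (3) for a single anchor is given below; the tensor shapes and helper name are assumptions made for illustration, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def supcon_loss(anchor, positives, negatives, tau=0.1):
    """Supervised contrastive loss for one anchor (cf. Eq. 3).

    anchor:    (d,)   encoder output for the anchor.
    positives: (M, d) encoder outputs for the M positives.
    negatives: (N, d) encoder outputs for the N negatives.
    """
    # Normalize so that dot products become cosine similarities.
    z = F.normalize(anchor, dim=0)
    z_pos = F.normalize(positives, dim=1)
    z_neg = F.normalize(negatives, dim=1)

    pos_logits = z_pos @ z / tau                 # (M,) similarities to positives
    neg_logits = z_neg @ z / tau                 # (N,) similarities to negatives
    log_denom = torch.logsumexp(torch.cat([pos_logits, neg_logits]), dim=0)
    # Average of -log softmax over the M positives.
    return (log_denom - pos_logits).mean()
```

Minimizing this quantity pulls the anchor toward its positives and pushes it away from its negatives in the normalized feature space.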
3 MOTIVATIONS FOR REPRESENTATION ALIGNMENT

To motivate our method, we present our core observation that a model's worst-group accuracy correlates with how well its learned representations depend on the class labels, but not the spurious attributes. First, we empirically observe that ERM learns spurious correlations by inspecting hidden-layer representations on several spuriously correlated data sets. We find that ERM's worst-group performance is inversely related to a cross-group alignment loss (cf. Eq. (4) below) and mutual information metrics. Second, we theoretically prove that this alignment loss serves as an upper bound on the gap between the average-group loss and the worst-group loss (cf. Theorem 3.1).

3.1 RELATING WORST-GROUP PERFORMANCE TO REPRESENTATION ALIGNMENT

We first show that when neural networks are trained with standard ERM on spuriously correlated data, their hidden-layer representations exhibit high dependence on the spurious attribute. We quantify this behavior using representation alignment (cf. Eq. (4) below) and mutual information metrics. We observe that these metrics explain trends in ERM's worst-group accuracy on various spuriously correlated data sets. This relationship is also consistent and applies to upsampling methods (JTT) that mitigate the impact of spurious features (Liu et al., 2021). We model spurious correlations with CMNIST∗, a colored MNIST data set inspired by Arjovsky et al. (2019). There are 5 digit classes and 5 colors. We color a fraction pcorr of the training samples with a color a associated with each class y, and color the test samples uniform-randomly. To analyze learned representations, we train a LeNet-5 CNN (LeCun et al., 1989) with ERM to predict digit classes, and inspect the outputs of the last hidden layer z = fenc(x). As shown in Fig. 2, with low pcorr, models learn representations with high dependence on the actual digit classes. However, with high pcorr we learn z highly dependent on a, despite only training to predict y.

Representation metrics. To quantify this behavior, we use two metrics designed to capture how well the learned representations exhibit dependence on the class label vs. the spurious attributes. First, we compute an alignment loss L̂align(fenc; g, g′) between two groups g = (y, a) and g′ = (y, a′) where a ≠ a′. This measures how well fenc maps samples with the same class, but different spurious attributes, to nearby vectors via Euclidean distance. Letting G and G′ be the subsets of training data in groups g and g′ respectively, and x and x′ be any two samples in G and G′, we define:

L̂align(fenc; g, g′) := (1 / (|G| |G′|)) Σ_{(x,y,a)∈G} Σ_{(x′,y,a′)∈G′} ‖fenc(x) − fenc(x′)‖_2.  (4)

Thus, lower L̂align means better alignment. We also quantify representation dependence by estimating the mutual information (MI) of a model's learned representations with the class label, i.e. Î(Y;Z), and the spurious attributes, Î(A;Z). We defer computational details to Appendix E.

Results for ERM. In Fig. 3 we show a strong association between worst-group error and both alignment and mutual information metrics. As pcorr increases, ERM models not only drop in worst-group accuracy, but also incur higher alignment loss (Fig. 3a,b). Fig. 3c further illustrates this with mutual information. We plot the estimated mutual information and worst-group accuracy for models at each epoch. A substantial drop in worst-group accuracy occurs with high Î(A;Z) (especially when Î(A;Z) > Î(Y;Z), even with high Î(Y;Z)). Fig. 3d also captures this trend with a trade-off between high Î(Y;Z) and Î(A;Z) as pcorr increases (Fig. 3a).

Results for JTT. In Fig. 4, we also show that this relation holds when training with another recent (upsampling) approach, JTT (Liu et al., 2021). With high pcorr, models now achieve higher worst-group accuracy, and this corresponds to learning representations with high class-label and low spurious-attribute dependence. We note however that previous approaches do not explicitly optimize for these representation metrics, suggesting a new direction to improve worst-group performance.

3.2 RELATING ALIGNMENT LOSS TO WORST-GROUP LOSS

The empirical observations in Fig. 3 suggest that lower alignment loss correlates with lower worst-group error. Next, we show that this connection applies much more generally. We show that the maximum of L̂align(fenc; g, g′), over any two groups g, g′ within the same class, can be used to upper bound the gap between the worst-group loss and average loss for that class. We set up several notations before stating the result. For any class label y ∈ Y, let Gy be the set of groups with label y in G. Let Lwg(fθ; y) be the worst-group loss among groups in Gy:

Lwg(fθ; y) := max_{g∈Gy} E_{(x,ỹ,a)∼Pg} [ℓ(fθ(x), ỹ)].

Let Lavg(fθ; y) be the average loss among groups in Gy:

Lavg(fθ; y) := E_{(x,ỹ,a)∼P : ỹ=y} [ℓ(fθ(x), ỹ)].

Additionally, we define a class-specific alignment loss L̂align(fθ; y) among groups in Gy. Recall that fθ involves an encoding function fenc and a linear classification layer fcls. We define L̂align(fθ; y) as the largest cross-group alignment loss among groups in Gy:

L̂align(fθ; y) := max_{g,g′∈Gy : g≠g′} L̂align(fenc; g, g′),  (5)

where L̂align(fenc; g, g′) is the alignment loss between g and g′ defined in Eq. (4). Our main result is that L̂align(fθ; y) is an upper bound on the gap between Lwg(fθ; y) and Lavg(fθ; y) (up to a norm multiplier and a concentration error), for any y ∈ Y.
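The alignment loss in Eq. (4) is straightforward to estimate from a model's features; a minimal sketch is below (the variable names and shapes are assumptions made for illustration).

```python
import torch

def alignment_loss(feats_g, feats_g_prime):
    """Empirical cross-group alignment loss of Eq. (4).

    feats_g:       (n, d) encoder outputs for samples in group g = (y, a).
    feats_g_prime: (m, d) encoder outputs for samples in group g' = (y, a').
    Returns the mean pairwise Euclidean distance between the two groups;
    lower values indicate better alignment.
    """
    dists = torch.cdist(feats_g, feats_g_prime, p=2)  # (n, m) pairwise distances
    return dists.mean()
```

The class-level quantity in Eq. (5) is then the maximum of this value over all pairs of distinct groups that share the class label.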
Theorem 3.1 (Alignment loss upper bounds the gap between worst-group and average-group loss). In the setting described above, let fθ be any neural network such that the weight matrix W of the linear classification layer fcls satisfies ‖W‖_2 ≤ B, for some constant B. Let ng be the size of any group g ∈ G in the training data set. Assume that the loss function ℓ(x, y) is C1-Lipschitz in x and bounded from above by C2, for some positive constants C1, C2. Then, with probability at least 1 − δ over the randomness of the training data set samples, for any class y ∈ Y, the following holds:

Lwg(fθ; y) ≤ Lavg(fθ; y) + B · C1 · L̂align(fθ; y) + max_{g∈Gy} C2 √( 8 log(|Gy|/δ) / ng ).  (6)

The proof of Theorem 3.1 is deferred to Sec. B. Since we also know that Lavg(fθ; y) ≤ Lwg(fθ; y), the above result implies that in order to reduce the gap between the worst-group loss and the average loss for class y, it suffices to reduce the alignment loss L̂align(fθ; y).

Broader algorithmic implications. We summarize Section 3 with two takeaways: (1) When trained on spuriously correlated data sets, ERM networks learn data representations highly dependent on spurious attributes. Clusters of these representations (Sohoni et al., 2020) or the ERM model's outputs (Liu et al., 2021; Nam et al., 2020) can thus serve as (noisy) pseudolabels for spurious attributes. (2) Both representation metrics correlate with worst-group error, such that a viable way to improve worst-group performance is to improve representation alignment within each class.

4 CORRECT-N-CONTRAST (CNC)

We now present CNC, a two-stage method to improve worst-group performance and robustness to spurious correlations, without requiring training group labels. Similar to prior works (Sohoni et al., 2020; Liu et al., 2021), our first stage trains an ERM model (with proper regularization¹) on the training set, ultimately to infer group labels based on samples' spurious attributes.

¹ As we train on the same data set we infer the groups on, regularization (via high weight decay or early stopping) is purely to prevent the ERM model from memorizing the class labels. This is standard practice also discussed in Sohoni et al. (2020); Liu et al. (2021). We show in Sec. 5.3 that we do not require the ERM model to perfectly learn the spurious attributes for CNC to substantially improve robustness in practice.

Algorithm 1 Correct-N-Contrast (CNC)
Input: Training data set (X, Y); # positives M; # negatives N; learning rate η; # epochs K.
Stage 1: ERM Training
1: Train a regularized ERM model fθ̂ on (X, Y); save the predictions ŷi := fθ̂(xi).
Stage 2: Supervised contrastive learning
2: for each epoch 1, . . . , K do
3:   for each anchor (x, y) ∈ (X, Y) do
4:     Let ŷ be the predicted (group) label of x from Stage 1's ERM model.
5:     Get M positives {(x+_m, y+_m)} where y+_m = y but ŷ+_m ≠ ŷ, for m = 1, . . . , M.
6:     Get N negatives {(x−_q, y−_q)} where y−_q ≠ y but ŷ−_q = ŷ, for q = 1, . . . , N.
7:     Update fθ by θ ← θ − η · ∇L̂(fθ; x, y) (cf. Eq. (7)) with the anchor, M positives, and N negatives.
return the final model fθ from Stage 2, and discard the ERM model from Stage 1.

The key difference is our second stage: we aim to train a more robust model by learning representations such that samples in the same class but different groups are close to each other. We use contrastive learning, as intuitively, by treating samples with the same class but different spurious attributes as distinct "views" of the same class, we train the second-stage model to "pull together" these samples' representations and ignore the different spurious features. This is also inspired by Wang & Isola (2020); Robinson et al. (2021), who show that minimizing the contrastive loss improves representation alignment between distinct "views". Later in Sec. 5.2, we verify that CNC indeed reduces L̂align(fθ; y) substantially. We include further details on both stages below, and summarize CNC in Algorithm 1.
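The positive and negative selection in lines 5–6 of Algorithm 1 amounts to simple index filtering over the Stage 1 predictions. The sketch below shows one illustrative way to draw such a batch; the data structures and function name are assumptions for this example, not the released code.

```python
import random

def sample_cnc_batch(labels, erm_preds, anchor_idx, M, N):
    """Sample indices for one CNC contrastive batch (cf. Algorithm 1, lines 5-6).

    labels:     list of class labels y_i for the training set.
    erm_preds:  list of Stage 1 ERM predictions yhat_i for the training set.
    anchor_idx: index of the anchor point.
    """
    y_a, yhat_a = labels[anchor_idx], erm_preds[anchor_idx]

    # Positives: same class as the anchor, different ERM prediction.
    pos_pool = [i for i in range(len(labels))
                if labels[i] == y_a and erm_preds[i] != yhat_a]
    # Negatives: different class, same ERM prediction as the anchor.
    neg_pool = [i for i in range(len(labels))
                if labels[i] != y_a and erm_preds[i] == yhat_a]

    positives = random.sample(pos_pool, min(M, len(pos_pool)))
    negatives = random.sample(neg_pool, min(N, len(neg_pool)))
    return positives, negatives
```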
Stage 1: ERM training. We train an initial model fθ̂ on the training data set {(xi, yi)}_{i=1}^{n} with ERM and regularization, and save its predictions {ŷi}_{i=1}^{n} on the training data points. We consider two ways to source predictions: using the ERM model's outputs, and clustering its last hidden-layer representations. Both approaches aim to accomplish the same goal of exploiting the ERM model's learned spurious correlations; further details are in Appendix E.2.

Stage 2: Contrastive learning (CL). Next, we train a robust model with supervised contrastive learning using the ERM predictions. While CNC is inspired by recent CL works (Chen et al., 2020; Khosla et al., 2020), we introduce new "contrastive batch" sampling and optimization objectives.

Contrastive batch sampling. As described in Sec. 2, contrastive learning requires sampling anchors, positives, and negatives with the general form {x}, {x+}, {x−}. Here, we wish to sample points such that by maximizing the similarity between anchors and positives (and keeping anchors and negatives apart), the Stage 2 model "ignores" spurious similarities while learning class-consistent dependencies. With prediction set {ŷi}_{i=1}^{n}, for each batch we randomly sample an anchor xi ∈ X (with label yi and ERM prediction ŷi), M positives with the same class as yi but a different ERM model prediction than ŷi, and N negatives with a different class than yi but the same ERM model prediction as ŷi. For more signal per batch, we double the pairwise comparisons by switching the anchor and positive roles.

Optimization objective and updating procedure. While our core objective is to learn aligned representations via contrastive learning, we also wish to train the full model to classify datapoints correctly. As we have the training class labels, we jointly update both the model's encoder layers fenc with a standard contrastive loss, and the full model fθ with a cross-entropy loss:

L̂(fθ; x, y) = λ L̂supcon(fenc; x, y) + (1 − λ) L̂cross(fθ; x, y).  (7)

In the above, L̂supcon(fenc; x, y) is the supervised contrastive loss of x along with its positive and negative samples, similar to Eq. (3) (see Eq. (16) in Sec. C.2 for the full equation); L̂cross(fθ; x, y) is the averaged cross-entropy loss over x, the M positives, and the N negatives; λ ∈ [0, 1] is a balancing hyperparameter. As a remark, the loss objective (7) uses a single anchor in each batch in our setting. To calculate the loss, we first forward propagate one batch (xi, {x+_m}_{m=1}^{M}, {x−_q}_{q=1}^{N}) through fenc and normalize the outputs to obtain representation vectors (zi, {z+_m}_{m=1}^{M}, {z−_q}_{q=1}^{N}). To learn closely aligned zi and z+ for all {z+_m}_{m=1}^{M}, we update fenc with the supervised contrastive loss L̂supcon. Finally, we also pass the unnormalized outputs of the encoder fenc to the classifier layers fcls, and compute a batch-wise cross-entropy loss L̂cross(fθ) using each batch sample's class labels and fθ's outputs. Due to space constraints, we include further implementation details and sampling considerations in Appendix C.
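A minimal sketch of the joint update in Eq. (7) is shown below; the module names (f_enc, f_cls), the shapes, and the default λ value are assumptions made for illustration rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def cnc_loss(f_enc, f_cls, x_anchor, y_anchor, x_pos, y_pos, x_neg, y_neg,
             lam=0.5, tau=0.1):
    """One CNC Stage 2 loss computation (cf. Eq. 7).

    x_anchor: (1, ...) anchor input with label y_anchor of shape (1,).
    x_pos:    (M, ...) positives with labels y_pos; x_neg: (N, ...) negatives.
    """
    z_a, z_pos, z_neg = f_enc(x_anchor), f_enc(x_pos), f_enc(x_neg)

    # Supervised contrastive term on normalized encoder outputs (cf. Eq. 3).
    za_n = F.normalize(z_a, dim=-1).squeeze(0)                # (d,)
    pos_logits = F.normalize(z_pos, dim=-1) @ za_n / tau      # (M,)
    neg_logits = F.normalize(z_neg, dim=-1) @ za_n / tau      # (N,)
    log_denom = torch.logsumexp(torch.cat([pos_logits, neg_logits]), dim=0)
    l_con = (log_denom - pos_logits).mean()

    # Cross-entropy over the anchor, positives, and negatives,
    # computed on the unnormalized encoder outputs.
    logits = f_cls(torch.cat([z_a, z_pos, z_neg], dim=0))
    labels = torch.cat([y_anchor, y_pos, y_neg], dim=0)
    l_ce = F.cross_entropy(logits, labels)

    return lam * l_con + (1 - lam) * l_ce
```

Since the classifier layers only appear in the cross-entropy term, they are trained by that term alone, while the encoder receives gradients from both terms.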
5 EXPERIMENTAL RESULTS

We conduct experiments to answer the following questions: (1) Does CNC improve worst-group performance over prior state-of-the-art methods on data sets with spurious correlations? (2) Does CNC actually encourage learning hidden-layer representations with greater alignment and class-label-only dependence? How is this impacted by the strength of a spurious correlation in the data? (3) Does CNC require perfectly predicting the spurious attribute to work well in practice? Our results for each question follow in the next three subsections (5.1, 5.2, and 5.3). Due to space constraints, we defer ablations on CNC's design choices, including the representation-learning objective and sampling procedure, to Appendix A. Additional comparisons to alignment methods proposed for domain adaptation but adjusted for our setting are in Appendix A.2. Below, we briefly describe the benchmark data sets used in this section. We run CMNIST∗ with pcorr = 0.995. Further details on data sets, models, and experimental hyperparameters are deferred to Appendix E. Waterbirds (Sagawa et al., 2019): We classify Y = {waterbird, landbird}, where 95% of images have the same bird type and background A = {water background, land background}. CelebA (Liu et al., 2015): We classify celebrities' hair color Y = {blond, not blond} with A = {male, female}. Only 6% of blond celebrities in the data set are male. CivilComments-WILDS (Borkan et al., 2019; Koh et al., 2021): We classify Y = {toxic, not toxic} comments. A denotes whether the comment mentions one of eight demographic identities.

5.1 CNC IMPROVES WORST-GROUP PERFORMANCE

To study (1), we evaluate CNC on image classification and NLP data sets with spurious correlations. As baselines, we compare against standard ERM and an oracle GDRO approach that assumes access to the group labels. We also compare against recent methods that tackle spurious correlations without requiring group labels: CVaR DRO (Levy et al., 2020), GEORGE (Sohoni et al., 2020), Learning from Failure (LfF) (Nam et al., 2020), Predictive Group Invariance (PGI) (Ahmed et al., 2021), Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021), Contrastive Input Morphing (CIM) (Taghanaki et al., 2021), and Just Train Twice (JTT) (Liu et al., 2021). We also compare against a CNC version without the Stage 1 ERM model, instead only sampling positives and negatives based on class (denoted SupCon*). Results are reported in Table 1. CNC achieves the highest worst-group accuracy among all methods without training group labels on the CMNIST∗, Waterbirds, and CelebA data sets, while also obtaining near-SoTA worst-group accuracy on CivilComments. While LfF, GEORGE, PGI, EIIL, and JTT similarly use a trained ERM model to estimate groups, CNC uniquely uses ERM predictions to encourage the robust model to learn desirable representations via contrastive learning. We reason that with this approach, by sampling positives and negatives from the ERM predictions, CNC more directly encourages the robust model to ignore learnable spurious correlations compared to previous invariant learning, input transformation, or upweighting approaches. We include additional evidence of this via GradCAM visualizations in Appendix G.

5.2 CNC LEARNS REPRESENTATIONS LESS RELIANT ON SPURIOUS FEATURES

To shed light on CNC's worst-group accuracy gains, we investigate if models trained with CNC actually learn representations with higher alignment. Compared to ERM and JTT (the next-best performing method that does not require subgroup labels), CNC learns representations with significantly higher alignment (lower alignment loss) and lower mutual information with spurious attributes (while having comparable mutual information with class labels) (Fig. 5 and Fig. 7). We find that CNC representations exhibit the lowest alignment loss consistently for these data sets; this also corresponds to CNC models achieving the highest worst-group accuracy.
Furthermore, while all methods result in representations that exhibit high mutual information with the class label (Fig. 5b), only CNC results in representations that drastically reduce mutual information with spurious attributes (Fig. 5c). In Fig. 6, we also illustrate this result on the Waterbirds data set via UMAP visualizations of the learned representations. Notably, all training methods result in representations separable by class label. Yet ERM models exhibit strong separability by spurious attributes, and JTT models interestingly also still depict some learned dependency on the spurious attribute. However, CNC uniquely learns representations that strongly depict class-label-only dependence. In addition, to study how this relation between representation metrics and worst-group accuracy scales with the strength of the spurious correlation, we compute representation metrics with CNC, ERM, and JTT models trained on increasingly spurious (↑ pcorr) CMNIST∗ data sets in Fig. 7. We observe that with high spurious correlations, ERM fails to classify digits in the minority classes, while CNC and JTT comparably maintain high worst-group accuracy. CNC also performs better in more spurious settings (pcorr > 0.95). These improvements over ERM are reflected by drops in alignment loss (averaged over classes); CNC consistently achieves lowest such loss. Fig. 7c shows that CNC’s learned representations maintain a more favorable balance of mutual information between the class label and spurious attribute than JTT. While JTT models exhibit slightly higher estimated I(Y ;Z) than CNC models, CNC models exhibit much lower dependence on the spurious attribute. 5.3 UNDERSTANDING CNC’S SENSITIVITY TO STAGE 1 PREDICTIONS Finally, we study how sensitive CNC is to how closely the Stage 1 ERM model actually predicts the spurious attribute. As JTT also relies on an initial ERM model’s predictions, we compare CNC to JTT in this regard. We find that CNC is more robust to noisy ERM predictions than JTT, and that CNC does not require perfectly inferred groups to perform well. We first conduct an ablation on CNC and JTT’s worst-group and average performance in Fig. 7d with the following synthetic experiment. On CMNIST∗, we start with the true spurious attribute labels as the Stage 1 “predictions". We then gradually degrade their quality as follows: for each point, with 6 RELATED WORK We build on prior work in group robustness and contrastive learning. Further discussion is in App. D. Robustness to group shift. A variety of approaches aim to improve performance on minority data groups. If group labels are known, many works minimize a rebalanced error similar in motivation to correcting class imbalance (He & Garcia, 2009; Cui et al., 2019) or importance weighting (Shimodaira, 2000; Byrd & Lipton, 2019). More recently, Sagawa et al. (2019) minimize worst-group loss during training. Goel et al. (2020) achieve further lift by synthetically generating additional minority group points. Cao et al. (2019) regularize updates on minority groups to improve their generalization. Another line of work aims to improve group robustness without assuming group labels for the training data. The most similar methods to CNC first train an initial ERM model with class labels as a way to infer groups, and then use these groups to train a second model with better worst-group performance. GEORGE (Sohoni et al., 2020) clusters ERM representations, and runs GDRO with these clusters as inferred groups. 
EIIL (Creager et al., 2021) and PGI (Ahmed et al., 2021) infer groups that maximally violate an invariance objective for the ERM model. With these groups EIIL uses either GDRO or Invariant Risk Minimization (Arjovsky et al., 2019) to train a second robust model, while PGI minimizes the KL divergence of the softmaxed logits for samples in the same class but different groups. LfF (Nam et al., 2020) use a generalized cross-entropy loss to encourage misclassifying minority groups, concurrently training a second model with these datapoints upweighted. JTT (Liu et al., 2021) trains via ERM for a few epochs, before training a second ERM model with incorrect datapoints upsampled. For image data sets, CIM (Taghanaki et al., 2021) trains a transformation network to remove potentially spurious attributes from input features. Contrastive learning (CL). CL works by predicting whether two inputs are “similar” or “dissimilar” (Le-Khac et al., 2020). This involves specifying batches of anchor and positive datapoints similar to each other (as different “views” of the same source or input), and negatives depicting dissimilar points. An encoder is trained to simultaneously maximize the similarity between the feature representations of anchors and positives, and minimize similarity between anchor and negative representations. In unsupervised CL, “negatives” are often sampled uniformly (Bachman et al., 2019), while “positives” are different views of the same object, e.g. via data augmentation (Chen et al., 2020). In supervised CL, negatives are different-class points and positives are same-class points (Khosla et al., 2020). In CNC, we instead treat same-class points with different ERM predictions as positives, and differentclass points with the same ERM prediction as negatives. This naturally provides “hard negative mining,” a challenge for standard CL (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). 7 CONCLUSION We present CNC, a two-stage CL approach to learn representations robust to spurious correlations. We theoretically analyze the connection between alignment and worst-group vs. average-group losses, and show that CNC achieves SOTA or near-SOTA worst-group accuracy across several benchmarks. ETHICS STATEMENT We hope that our work is another step towards the important goal of making machine learning models more fair and robust. However, while our work successfully improves worst-group accuracy, this is not necessarily an end-all be-all metric - other fairness-based metrics may be more suitable in certain settings. Also, misuse of metrics could lead to potential harm. To avoid these pitfalls, it is important for practitioners to understand the limitations and tradeoffs of different metrics, including when applying methods such as ours. REPRODUCIBILITY STATEMENT We have submitted our code as part of the supplementary materials. The datasets we use are publicly available (with the exception of CMNIST∗ which is a modification of the standard MNIST dataset (LeCun et al., 2010); our code to generate this modified dataset is also included). In addition to the details provided in Section 5, further implementation, dataset, and experimental details can be found in Appendix E. For the theory, we include complete proofs of all claims in Appendix B. A ADDITIONAL BENCHMARK COMPARISONS AND ABLATIONS In this section, we include further experiments comparing CNC against additional related methods. We also include additional ablations to study the importance of CNC’s presented design choices. 
A.1 COMPARISON TO MINIMIZING THE ALIGNMENT LOSS DIRECTLY In Sec. 5.1 and Sec. 5.2, we empirically showed that CNC’s contrastive loss and hard positive and negative sampling lead to improved worst-group accuracy and greater representation alignment. We now study how CNC performs if instead of the contrastive loss, we train the Stage 2 model to minimize Lalign directly. With this objective, we aim to minimize the Euclidean distance between samples in different inferred groups but the same class. We keep all other components of CNC consistent, and apply Lalign to the anchor and positive samples in each contrastive batch. We report results on CMNIST∗, Waterbirds, and CelebA in Table A.1. We find that CNC with the default contrastive loss outperforms CNC with the alignment loss. We reason that an advantage of the contrastive loss (and specifically the “hard” positive and negative samples), is that it encourages aligning samples with the same class label but different spurious features, and pushes apart hard negative samples with different class labels but similar spurious features. This provides additional signal for improving separation between the different classes, so the robust model only learns to rely on ground-truth-specific features for discriminating between datapoints. On the other hand, the Lalignment objective does not incorporate these hard negatives. A.2 COMPARISON TO REPRESENTATION ALIGNMENT METHODS FOR DOMAIN GENERALIZATION AND ADAPTATION While our main results in Table 1 compare against methods designed to tackle the spurious correlations setting presented in Section 5.1, we now study how CNC fares against existing representation alignment methods proposed in the domain generalization (DG) and unsupervised domain adaptation (UDA) literature. At a high level, a popular idea in DG and UDA is to learn similar representations for datapoints with the same class but sampled from different domains, e.g. via adversarial training to prevent another model from classifying representations’ source domains correctly (Ganin et al., 2016), or minimizing representation differences via metrics such as maximum mean discrepancy (MMD) (Li et al., 2018). While DG and UDA carry distinct problem settings and assumptions from our spurious correlations setting (c.f. Appendix D.4), we aim to understand if existing representation alignment methods can train models robust to spurious correlations, and compare their performance with CNC. We first explain our protocol for evaluating these methods, and then discuss results. We carry out our evaluation with domain-adversarial neural networks (DANN) Ganin et al. (2016), a seminal UDA method that aims to learn aligned representations across two domains. To do so, DANN jointly trains a model to classify samples from a “source” domain while preventing a separate “domain classifier” module from correctly classifying the domain for datapoints sampled from both domains. For fair comparison, we use the same ResNet-50 backbone as in CNC, and make several adjustments to the typical DANN and UDA procedure: 1. While UDA assumes that the data is organized into “source” and “target” domains, we do not have domain labels. We thus infer domains using the predictions of an initial ERM model as in CNC. 2. The notion of a domain may also be ambiguous with respect to the groups defined in Section 2. For example, domains may be defined by spurious attributes (e.g., for the Waterbirds dataset, we may consider the “water background” domain and the “land background” domain). 
Domains may alternatively be defined by whether samples carry dominant spurious correlations or not (e.g., the “majority group” domain and the “minority group” domain). We train and evaluate separate DANN models for both interpretations. We infer the former by the predicted class of the initial ERM model. We infer the latter by whether the initial ERM model is correct or not. 3. Finally, UDA aims to train with a class-labeled “source” domain and an unlabeled “target” domain such that a model performs well on unseen samples from the specified “target” domain (Ganin et al., 2016). However, our benchmarks have class labels for all training points, and do not have a notion of “source” and “target” domains (we aim to obtain high worst-group accuracy, which could fall under any domain). We thus assume access to labels for all domains. During training, the goal for our DANN models is to correctly classify samples from both domains, while learning representations such that a jointly trained domain classifier module cannot determine the samples’ domains from their representations alone. At test-time, we evaluate the DANN model on the entire test set for each benchmark, and report the worst-group and average accuracies. In Table A.2, we report the worst-group and average accuracies of DANN on the Waterbirds and CelebA datasets across three seeds along with the CNC results. Our results suggest that the domain alignment in DANN is not sufficient to improve worst-group accuracy. We hypothesize this is due to adversarial training with the domain classifier aligning representations without regard to different classes within each domain. Due to the propensity of samples exhibiting spurious correlations, DANN models may thus still learn to rely on these correlations. A.3 IMPORTANCE OF ERM-GUIDED CONTRASTIVE SAMPLING In this section we conduct additional ablations on the sampling procedure in CNC. Although CNC relies on an initial trained ERM model’s predictions, can we still improve worst-group accuracy without this step and with supervised contrastive learning alone, i.e. by sampling positives uniform randomly from all datapoints with the same label as the anchor? In Table 1, we showed that this approach (denoted SupCon∗) led to a drop in worst-group accuracy. Taking this question further, while we use the Stage 1 ERM model’s predictions to sample “hard” negatives with different groundtruth classes and the same ERM predictions as their anchors—such that to reduce the contrastive loss and learn dissimilar representations for anchors and negatives, the Stage 2 contrastive model must thus learn to ignore spurious features that the initial ERM model learns to depend on—how does CNC’s performance fare with alternative negative sampling procedures? Keeping the anchor and positive sampling consistent, we perform additional ablations where we either sample negatives only by having different classes as their anchors, or sample negatives only be having the same ERM model prediction as their anchors. We report these results in Table A.3 below. We find that the default CNC sampling procedure obtains highest worst-group accuracy and highest or near-highest average accuracy compared to alternative strategies across the CMNIST∗, Waterbirds, and CelebA datasets. The results suggests that inferring the spurious attributes (e.g. via an initial ERM model) is important for CNC, and that CNC benefits from using these predictions for sampling both negatives and positives. 
We reason this is because without this sampling, we can actually encourage the Stage 2 model to rely on spurious correlations. For example, if we just ensure that the anchor and negative samples have different classes, then the contrastive model may just rely on the different spurious features of the anchors and negatives to learn dissimilar representations. However, by ensuring that the anchors and negatives have similar spurious features (via the same trained ERM model prediction), the contrastive model is forced to rely on non-spurious features to learn dissimilar representations for the samples. The same logic applies for learning similar representations for anchor and positive samples.

Table A.3: Negative-sampling ablations. Worst-group (WG) and average (Avg) accuracy (%), with standard deviations in parentheses.
                               CMNIST∗ WG   CMNIST∗ Avg   Waterbirds WG   Waterbirds Avg   CelebA WG   CelebA Avg
Negatives by different class   66.4 (5.1)   86.0 (1.6)    82.2 (0.8)      88.9 (0.3)       79.2 (0.3)  88.0 (0.1)
Negatives by same prediction   70.0 (5.1)   87.1 (1.1)    85.7 (1.3)      90.3 (0.2)       81.1 (1.4)  88.5 (0.3)
SupCon∗                        0.0 (0.0)    22.4 (1.2)    71.0 (1.9)      85.9 (0.8)       62.2 (1.1)  90.0 (0.1)
CNC (default)                  77.4 (3.0)   90.9 (0.6)    89.7 (0.2)      90.8 (0.1)       88.8 (0.9)  89.9 (0.5)

We suspect that choosing negatives from all samples with the same ERM prediction as their anchors performs better than the other ablations as it alone does not encourage learning spurious correlations: the model is asked to "pull apart" samples with the same spurious features, and so must ignore spurious similarities to recognize something different between anchors and negatives. However, this ablation does not ensure that anchor-negative pairs consist of different classes (which our full method does), so the model gets less signal to separate samples by class.

A.4 ADDITIONAL DESIGN CHOICE ABLATIONS

We first summarize CNC's design choices and differences from standard supervised contrastive learning in Appendix A.4.1. We then empirically validate each component in Appendix A.4.2.

A.4.1 SUMMARY OF CNC DESIGN CHOICES AND PROPERTIES

No projection network. As we wish to learn data representations that maximize the alignment between anchor and positive datapoints, we do not compute the contrastive loss with the outputs of an additional nonlinear projection network. This is inspired by the logic justifying a projection head in prior contrastive learning, e.g. SimCLR (Chen et al., 2020), where the head is included because the contrastive loss trains representations to be "invariant to data transformation" and may encourage removing information "such as the color or orientation of objects". In our case, we view inferred datapoints with the same class but different spurious attributes as "transformations" of each other, and we hypothesize that removing these differences can help us improve worst-group performance.

Two-sided contrastive sampling. To incorporate additional comparisons between datapoints that only differ in spurious attribute during training, we employ "two-sided" contrastive batch sampling. This lets us equally incorporate instances where the second contrastive model in CNC treats datapoints that the initial ERM model got correct and datapoints that it got incorrect as anchors.

Additional intrinsic hard positive/negative mining. Because the new model corrects for potentially learned spurious correlations by only comparing and contrasting datapoints that differ in class label or spurious attribute, but not both (as dictated by the initial ERM model's outputs), the contrastive batches naturally carry "hard" positives and negatives.
Thus, our approach provides a natural form of hard negative mining (in addition to the intrinsic hard positive / negative mining at the gradient level with InfoNCE-style contrastive losses (Chen et al., 2020; Khosla et al., 2020)) while avoiding class collisions, two nontrivial challenges in standard self-supervised contrastive learning (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). Joint training of encoder and classifier layers. CNC can train any standard classification model architecture; for any given neural network we just apply different optimization objectives to the encoder and classifier layers. We train both the encoder and classifier layers with a cross-entropy loss, and jointly train the encoder layer with a supervised contrastive loss. For the encoder layers, we balance the two objectives with a hyperparameter λ (c.f. Eq. 7). A.4.2 EMPIRICAL VALIDATION OF CNC COMPONENTS To validate the additional algorithmic components of CNC, we report how CNC performs on the Waterbirds dataset when modifying the individual design components. We use the same hyperpa- rameters as in the main results, and report accuracies as the average over three training runs for the following ablations. Table A.4 summarizes that across these design ablations, default CNC as presented consistently outperforms these alternative implementations. No projection head. We incorporate a nonlinear projection head as is typical in prior contrastive learning works (Chen et al., 2020), that maps the encoder output to lower-dimensional representations (from 2048 to 128 in our case). We then update the encoder layers and the projection head jointly by computing the contrastive loss on the projection head’s output, still passing the encoder layer’s direct outputs to the classifier to compute the cross-entropy loss. We note that using the projection head decreases worst-group accuracy substantially. We reason that as previously discussed, while using the projection head in prior work can allow the model to retain more information in its actual hidden layers (Chen et al., 2020), in our case to remove dependencies on spurious attributes we actually want to encourage learning invariant representations when we model the differences between anchor and positive datapoints as due to spurious attributes. Two-sided contrastive batches. Instead of “two-sided” contrasting where we allow both sampled anchors and positives to take on the anchor role, for each batch we only compute contrastive updates by comparing original positives and negatives with the original anchor. When keeping everything else the same, we find that just doing these one-sided comparisons also leads to a drop in performance for worst-group accuracy. This suggests that the increased number of comparisons and training setup where we swap the roles of anchors and positives of the two-sided batches introduces greater contrastive learning signal. Additional intrinsic hard positive/negative mining. We discuss this ablation in Section A.3. Joint training of encoder and classifier layers. Instead of training the full model jointly, we first only train the encoder layers with the contrastive loss in CNC, before freezing these layers and finetuning the classifier layers with the cross-entropy loss. With this implementation, we also obtain noticeable drop in performance. 
While we leave further analysis of the joint cross-entropy and contrastive optimization for future work, one conjecture is that the cross-entropy loss may aid in learning separable representations while also training the full model to keep the average error small. From our theory, the contrastive loss can help bound the gap between worst-group and average error. Thus we try to minimize average error in the same parameter update. This also follows prior work, where updating the entire model and fine-tuning all model parameters instead of freezing the encoder layers leads to higher accuracy (Chen et al., 2020). However, we found that with an initial encoder-only training stage, if we did not freeze the trained layers, fine-tuning on a dataset with spurious correlations would "revert" the contrastive training, resulting in a large gap between worst-group and average error similar to ERM. We also ablate the balancing hyperparameter λ of CNC on CMNIST∗. In Table A.5 we find that CNC consistently achieves high worst-group accuracy across a wide range of λ ∈ [0.4, 0.9]. For reference, the next best methods GEORGE and JTT obtain 76.4% and 74.5% worst-group accuracy.

B OMITTED PROOFS FROM SECTION 3.2

In this section, we prove that within any class, the gap between the worst-group error and the average error can be upper bounded by the alignment loss times the Lipschitz constant, plus another concentration error term.

Proof of Theorem 3.1. Consider two arbitrary groups, denoted by g1 = (y, a1) and g2 = (y, a2), whose class labels are both y ∈ Y, and whose spurious attributes are a1 ∈ A and a2 ∈ A such that a1 ≠ a2. Let G1 and G2 be the subsets of training data that belong to groups g1 and g2, respectively. We note that both G1 and G2 are non-empty since we have assumed (in Section 2) that there is at least one sample from each group in the training data set. Let ng1 = |G1| and ng2 = |G2| be the sizes of these two groups, respectively. Recall that fenc denotes the mapping of the encoder layers of the full neural network model fθ. Since the classification layer fcls is a linear layer, we have used W to denote the weight matrix of this layer. Our definition of the cross-group alignment loss in equation (5), denoted as L̂align(fθ; y), implies that for g1 and g2,

(1 / (ng1 ng2)) Σ_{(x,y,a1)∈G1} Σ_{(x′,y,a2)∈G2} ‖fenc(x) − fenc(x′)‖_2 ≤ L̂align(fθ; y).  (8)

Next, let E_{(x,y,a1)∼Pg1}[ℓ(W fenc(x), y)] be the average loss conditioned on a data point being sampled from group g1 (and similarly for group g2). Let ∆(g1, g2) be the difference between the population average losses:

∆(g1, g2) = | E_{(x,y,a1)∼Pg1}[ℓ(W fenc(x), y)] − E_{(x,y,a2)∼Pg2}[ℓ(W fenc(x), y)] |.

Recall that Gy ⊆ G is the set of groups that have class label y. Since the loss ℓ(·) is bounded above by some fixed constant C2 according to our assumption, and is at least zero, by Hoeffding's inequality the following holds with probability at least 1 − δ, for all |Gy| groups g ∈ Gy:

| E_{(x,y,a)∼Pg}[ℓ(W fenc(x), y)] − (1/ng) Σ_{(x,y,a)∈g} ℓ(W fenc(x), y) | ≤ C2 √( 2 log(|Gy|/δ) / ng ),  (9)

where the empirical average is taken over the ng training samples in group g. Thus, with probability at least 1 − δ, the following holds for any g1 and g2 in class y (but having different spurious attributes):

∆(g1, g2) ≤ | (1/ng1) Σ_{(x,y,a1)∈G1} ℓ(W fenc(x), y) − (1/ng2) Σ_{(x′,y,a2)∈G2} ℓ(W fenc(x′), y) | + C2 ( √( 2 log(|Gy|/δ) / ng1 ) + √( 2 log(|Gy|/δ) / ng2 ) ).  (10)

Next, we focus on the RHS of equation (10).
First, the first term on the right-hand side of equation (10) is equal to the following:

| (1 / (ng1 ng2)) Σ_{(x,y,a1)∈G1} Σ_{(x′,y,a2)∈G2} ℓ(W fenc(x), y) − (1 / (ng1 ng2)) Σ_{(x,y,a1)∈G1} Σ_{(x′,y,a2)∈G2} ℓ(W fenc(x′), y) |.

Since we have also assumed that the loss function ℓ(x, y) is C1-Lipschitz in x², the above is at most:

(1 / (ng1 ng2)) Σ_{(x,y,a1)∈G1} Σ_{(x′,y,a2)∈G2} | ℓ(W fenc(x), y) − ℓ(W fenc(x′), y) |
≤ (1 / (ng1 ng2)) Σ_{(x,y,a1)∈G1} Σ_{(x′,y,a2)∈G2} C1 · ‖W fenc(x) − W fenc(x′)‖_2    (since y is the same for x and x′)
≤ (B / (ng1 ng2)) Σ_{(x,y,a1)∈G1} Σ_{(x′,y,a2)∈G2} C1 · ‖fenc(x) − fenc(x′)‖_2        (because ‖W‖_2 ≤ B as assumed)
≤ B · C1 · L̂align(fθ; y).    (because of equation (8))

² In other words, we assume that |ℓ(z, y) − ℓ(z′, y)| ≤ C1 · ‖z − z′‖_2, for any z, z′ and y.

Thus, we have shown that for any g1 and g2 within class y,

∆(g1, g2) ≤ B · C1 · L̂align(fθ; y) + C2 ( √( 2 log(|Gy|/δ) / ng1 ) + √( 2 log(|Gy|/δ) / ng2 ) )
         ≤ B · C1 · L̂align(fθ; y) + max_{g∈Gy} C2 · √( 8 log(|Gy|/δ) / ng ).  (11)

Finally, we use the above result to bound the gap between the worst-group loss and the average loss. For every group g ∈ G, let pg denote the prior probability of observing a sample from P in this group. Let qy = Σ_{g′∈Gy} pg′. Let h(g) be a shorthand notation for

h(g) = E_{(x,y,a)∼Pg}[ℓ(W fenc(x), y)].

The average loss among the groups with class label y is Lavg(fθ; y) = Σ_{g∈Gy} (pg / qy) h(g). The worst-group loss among the groups with class label y is Lwg(fθ; y) = max_{g∈Gy} h(g). Let g⋆ be a group that incurs the highest loss among groups in Gy. We have that Lwg(fθ; y) − Lavg(fθ; y) is equal to

h(g⋆) − Σ_{g∈Gy} (pg / qy) h(g) = Σ_{g∈Gy} (pg / qy) (h(g⋆) − h(g))  (12)
≤ Σ_{g∈Gy} (pg / qy) ∆(g⋆, g)  (13)
≤ B · C1 · L̂align(fθ; y) + max_{g∈Gy} C2 · √( 8 log(|Gy|/δ) / ng ).  (14)

The last step uses equation (11) on ∆(g⋆, g) and the fact that qy = Σ_{g′∈Gy} pg′. Thus, we have shown that the gap between the worst-group loss and the average loss among the groups with the same class label is bounded by the above equation. The proof is now complete.

The astute reader will note that Theorem 3.1 focuses on comparing groups within the same class y, for any y ∈ Y. A natural follow-up question is what happens when comparing across groups with different labels. Let Lwg(fθ) = max_{y∈Y} Lwg(fθ; y) be the worst-group loss across all the labels. Recall that Lavg(fθ) is the average loss for the entire population of data. We generalize Theorem 3.1 to this setting in the following result.

Corollary B.1 (Extension of Theorem 3.1 to compare across different classes). In the setting of Theorem 3.1, let qy = Σ_{g∈Gy} pg be the prior probability of observing a sample drawn from P with label y, for any y ∈ Y. We have that with probability at least 1 − δ, the following holds:

Lwg(fθ) ≤ ( min_{y∈Y} qy )^{-1} Lavg(fθ) + B · C1 · max_{y∈Y} L̂align(fθ; y) + max_{g∈G} C2 · √( 8 log(|G|/δ) / ng ).  (15)

Proof. We generalize the argument in the previous result to compare across different labels. The worst-group loss across different labels is

max_{y∈Y} max_{g∈Gy} h(g) ≤ max_{y∈Y} [ Σ_{g∈Gy} (pg / qy) h(g) + B · C1 · L̂align(fθ; y) + max_{g∈Gy} C2 √( 8 log(|Gy|/δ) / ng ) ]    (because of equation (14))
≤ (1 / min_{y∈Y} qy) Σ_{g∈G} pg h(g) + B · C1 · max_{y∈Y} L̂align(fθ; y) + max_{g∈G} C2 √( 8 log(|G|/δ) / ng ).

Since Σ_{g∈G} pg h(g) = Lavg(fθ), we thus conclude that

Lwg(fθ) ≤ ( min_{y∈Y} qy )^{-1} Lavg(fθ) + B · C1 · max_{y∈Y} L̂align(fθ; y) + max_{g∈G} C2 √( 8 log(|G|/δ) / ng ).

The proof is now complete.

An example showing that Corollary B.1 is tight. We describe a simple example in which the factor (min_{y∈Y} qy)^{-1} in equation (15) is tight (asymptotically). Suppose there are k perfectly balanced classes so that qy = 1/k, for every y ∈ Y.
There is one data point from each class, with loss equal to 0 for all except one of them. The worst-group loss is 1 whereas the average loss is 1/k. Thus, there is a factor of k between the worst-group loss and the average loss. For equation (15), the factor (min_{y∈Y} q_y)^{-1} = k, since q_y = 1/k for every y ∈ Y in this example. Thus, this factor matches the (multiplicative) factor between the worst-group loss and the average loss in this example.

C CONTRASTIVE ALGORITHM DESIGN DETAILS

In this section, we provide further details on the training setup and contrastive batch sampling, pseudocode, and additional properties related to CNC's implementation.

C.1 TRAINING SETUP

In Fig. 8, we illustrate the two training stages of Correct-N-Contrast described in Sec. 4. In Stage 1, we first train an ERM model with a cross-entropy loss. For consistency with Stage 2, we depict the output as a composition of the encoder and linear classifier layers. Then in Stage 2, we train a new model with the same architecture using contrastive batches sampled with the Stage 1 ERM model and a supervised contrastive loss (3) (which we compute after the depicted representations are first normalized) to update the encoder layers. Note that unlike prior work in contrastive learning (Chen et al., 2020; Khosla et al., 2020), because we have the class labels of the anchors, positives, and negatives, we also continue forward-passing the unnormalized representations (encoder layer outputs) and compute a cross-entropy loss to update the classifier layers while jointly training the encoder.

We also note that unlike prior work, we wish to learn invariances between anchors and positives that maximally reduce the presence of features not needed for classification. We thus do not pass the representations through an additional projection network (Chen et al., 2020). Instead, we use Eq. 3 to compute the supervised contrastive loss directly on the encoder outputs z = f_enc(x). In Appendix A.4.2, we studied ablations with both design choices.

C.2 TWO-SIDED CONTRASTIVE BATCH IMPLEMENTATION

We provide more details on our default contrastive batch sampling approach described in Sec. 4. To recall, for additional contrastive signal per batch, we can double the pairwise comparisons in a training batch by switching the anchor and positive roles. This is similar to the NT-Xent loss in prior contrastive learning work (Chen et al., 2020). We switch the roles of the anchor and the first positive sampled in a contrastive batch, and sample additional positives and negatives using the same guidelines but adjusting for the "new" anchor. We denote this as "two-sided" sampling, in contrast with the "one-sided" comparisons we get with just the original anchor, positives, and negatives.

Implementing this sampling procedure in practice is simple. First, recall our initial setup with the trained ERM model f_θ̂, its predictions {ŷ_i}_{i=1}^n on training data {(x_i, y_i)}_{i=1}^n (where ŷ_i = f_θ̂(x_i)), and the numbers of positives and negatives to sample, M and N. We then sample batches with Algorithm 2. Because the initial anchors are datapoints that the ERM model gets correct, under our heuristic we infer {x_i}_{i=1}^M as samples from the majority group. Similarly, the M positives {x^+_m}_{m=1}^M and N negatives {x^-_n}_{n=1}^N that it gets incorrect are inferred to belong to minority groups.
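To make this sampling concrete, the following is a minimal Python sketch of two-sided batch construction from the Stage 1 ERM predictions, mirroring the procedure described above and formalized in Algorithm 2 below. This is our own illustrative code, not the released implementation; names such as `sample_two_sided_batch` are hypothetical, and it assumes each candidate pool contains enough samples.

```python
import numpy as np

def sample_two_sided_batch(y, y_hat, i, M, N, rng=None):
    """Return index sets (anchors, positives, negatives, negatives_for_swapped_anchor)
    for one contrastive batch.

    y     : (n,) true class labels
    y_hat : (n,) Stage-1 ERM predictions
    i     : index of an anchor the ERM model classifies correctly (y_hat[i] == y[i])
    M, N  : number of positives / negatives to sample
    """
    rng = np.random.default_rng(0) if rng is None else rng
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    idx = np.arange(len(y))

    # M anchors: same class as x_i and predicted correctly (inferred majority group).
    anchor_pool = idx[(y == y[i]) & (y_hat == y[i]) & (idx != i)]
    anchors = np.concatenate([[i], rng.choice(anchor_pool, M - 1, replace=False)])

    # M positives: same class as the anchor but a different ERM prediction
    # (inferred minority group).
    pos_pool = idx[(y == y[i]) & (y_hat != y_hat[i])]
    positives = rng.choice(pos_pool, M, replace=False)

    # N negatives: different class but the same ERM prediction as the anchor.
    neg_pool = idx[(y != y[i]) & (y_hat == y_hat[i])]
    negatives = rng.choice(neg_pool, N, replace=False)

    # Two-sided part: treat the first positive as a new anchor and sample its
    # negatives analogously (different class, same ERM prediction as that positive).
    p = positives[0]
    neg_pool_swapped = idx[(y != y[p]) & (y_hat == y_hat[p])]
    negatives_swapped = rng.choice(neg_pool_swapped, N, replace=False)

    return anchors, positives, negatives, negatives_swapped
```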
For one batch, we then compute the full contrastive loss with

L̂_supcon(f_enc) = L̂_supcon(x_1, {x^+_m}_{m=1}^M, {x^-_n}_{n=1}^N; f_enc) + L̂_supcon(x^+_1, {x_i}_{i=1}^M, {x'^-_n}_{n=1}^N; f_enc),    (16)

where L̂_supcon(x_1, {x^+_m}_{m=1}^M, {x^-_n}_{n=1}^N; f_enc) is given by:

−(1/M) Σ_{m=1}^{M} log [ exp(z_1^⊤ z^+_m / τ) / ( Σ_{m'=1}^{M} exp(z_1^⊤ z^+_{m'} / τ) + Σ_{n=1}^{N} exp(z_1^⊤ z^-_n / τ) ) ].    (17)

Here, as before, z denotes the normalized output f_enc(x) for the corresponding x. We compute the cross-entropy component of the full loss for each x in the two-sided batch with its corresponding label y.

Algorithm 2: Sampling two-sided contrastive batches
Require: Number of positives M and number of negatives N to sample for each batch.
1: Initialize the set of contrastive batches B = {}
2: for each x_i ∈ {x_i ∈ X : ŷ_i = y_i} do
3:   Sample M − 1 additional "anchors" to obtain {x_i}_{i=1}^M from {x_i ∈ X : ŷ_i = y_i}
4:   Sample M positives {x^+_m}_{m=1}^M from {x^+_m ∈ X : ŷ^+_m ≠ ŷ_i, y^+_m = y_i}
5:   Sample N negatives {x^-_n}_{n=1}^N from {x^-_n ∈ X : ŷ^-_n = ŷ_i, y^-_n ≠ y_i}
6:   Sample N negatives {x'^-_n}_{n=1}^N from {x'^-_n ∈ X : ŷ'^-_n = ŷ^+_1, y'^-_n ≠ y^+_1}
7:   Update the contrastive batch set: B ← B ∪ ({x_i}_{i=1}^M, {x^+_m}_{m=1}^M, {x^-_n}_{n=1}^N, {x'^-_n}_{n=1}^N)

D FURTHER RELATED WORK DISCUSSION

We provide additional discussion of related work and connections to our work below.

D.1 IMPROVING ROBUSTNESS TO SPURIOUS CORRELATIONS

Our core objective is to improve model robustness to group or subpopulation distribution shifts that arise from the presence of spurious correlations, specifically for classification tasks. Because these learnable correlations hold for some but not all samples in a dataset, standard training with ERM may result in highly variable performance: a model that classifies datapoints based on spurious correlations does well for some subsets or "groups" of the data but not others. To improve model robustness and avoid learning spurious correlations, prior work introduces the goal of maximizing worst-group accuracy (Sagawa et al., 2019). Related works broadly fall under two categories:

Improving robustness with group information. If information such as spurious attribute labels is provided, one can divide the data into explicit groups as defined in Sec. 2, and then train to directly minimize the worst group-level error among these groups. This is done in group DRO (GDRO) (Sagawa et al., 2019), where the authors propose an online training algorithm that focuses training updates on datapoints from higher-loss groups. Goel et al. (2020) also adopt this approach with their method CycleGAN Augmented Model Patching (CAMEL). However, similar to our motivation, they argue that a stronger modeling goal should be placed on preventing a model from learning group-specific features. Their approach involves first training a CycleGAN (Zhu et al., 2017) to learn the data transformations from datapoints in one group to another that share the same class label. They then apply these transformations as data augmentations to different samples, intuitively generating new versions of the original samples that take on group-specific features. Finally, they train a new model with a consistency regularization objective to learn invariant features between transformed samples and their sources. Unlike their consistency loss, we accomplish a similar objective to learn group-invariant features with contrastive learning. Our first training stage is also less expensive.
Instead of training a CycleGAN and then using it to augment datapoints, we train a relatively simple standard ERM classification model, sometimes with only a few number of epochs, and use its predictions to identify pairs of datapoints to serve a similar purpose. Finally, unlike both CAMEL and GDRO, we do not require spurious attribute or group labels for each training datapoints. We can then apply CNC in less restrictive settings where such information is not known. Related to GDRO are methods that aim to optimize a "Pareto-fair" objective, more general than simply the worst-case group performance. Notable examples are the works of Balashankar et al. (2019) and Martinez et al. (2020). However, these approaches similarly do not directly optimize for good representation alignment (unlike our work). Improving robustness without training group information. More similar to our approach are methods that do not assume group information at training time, and only require validation set spurious attribute labels for fine-tuning. As validation sets are typically much smaller in size than training sets, an advantage of CNC and comparable methods is that we can improve the accessibility of robust training methods to a wider set of problems. One popular line of work is distributionally robust optimization (DRO), which trains models to minimize the worst loss within a ball centered around the observed distribution (Ben-Tal et al., 2013; Wiesemann et al., 2014; Duchi & Namkoong, 2019; Levy et al., 2020; Curi et al., 2020; Oren et al., 2019). This includes the CVaR DRO (Levy et al., 2020) method we evaluate against. However, prior work has shown that these approaches may be too pessimistic, optimizing not just for worst-group accuracy but worst possible accuracy within the distribution balls (Sagawa et al., 2019), or too undirected, optimizing for too many subpopulations, e.g. by first upweighting minority points but then upweighting majority points in later stages of training (Liu et al., 2021). Pezeshki et al. (2020) instead suggest that gradient starvation (GS), where neural networks only learn to capture statistically dominant features in the data (Combes et al., 2018), is the main culprit behind learning spurious correlations, and introduce a “spectral decoupling” regularizer to alleviate GS. However this does not directly prevent models from learning dependencies on spurious attributes. Similar to CAMEL, Taghanaki et al. (2021) propose Contrastive Input Morphing (CIM), an image dataset-specific method that aims to learn input feature transformations that remove the effects of spurious or task-irrelevant attributes. They do so without group labels, training a transformation network with a triplet loss to transform input images such that a given transformed image’s structural similarity metric (based on luminance, contrast, and structure (Wang et al., 2003)) is more similar to a “positive” image from the same class than a “negative” image from a different class. They then train a classifier on top of these representations. Instead of pixel-level similarity metrics, CNC enforces similarity in a neural network’s hidden-layer representations, allowing CNC to apply to non-image modalities. Additionally, we sample positives and negatives not just based on class label, but also the learned spurious correlations of an ERM model (via its trained predictions). 
We hypothesize that our sampling scheme, which intuitively provides "harder" positive and negative examples, allows CNC to more strongly overcome spurious correlations. Most similar to our approach are methods that first train an initial ERM model with the class labels as a way to identify data points belonging to minority groups, and subsequently train an additional model with greater emphasis on the estimated minority groups. Sohoni et al. (2020) demonstrate that even when only trained on the class labels, neural networks learn feature representations that can be clustered into groups of data exhibiting different spurious attributes. They use the resulting cluster labels as estimated group labels before running GDRO on these estimated groups. Meanwhile, Nam et al. (2020) train a pair of models, where one model minimizes a generalized cross-entropy loss (Zhang & Sabuncu, 2018), such that the datapoints this model classifies incorrectly largely correspond to those in the minority group. They then train the other model on the same data but upweight the minority-group-estimated points. While they interweave training of the biased and robust model, Liu et al. (2021) instead train one model first with a shortened training time (but the standard cross-entropy objective), and show that then upsampling the incorrect data points and training another model with ERM can yield higher worst-group accuracy. Creager et al. (2021) first train an ERM model, and then softly assign the training data into groups under which the initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective. In particular, the IRM objective is maximally satisfied if a model’s optimal classifier is the same across groups (Arjovsky et al., 2019), and EIIL groups are inferred such that the initial ERM model’s representations exhibit maximum variance within each group. Finally, Nagarajan et al. (2020) provides a theoretical understanding of how ERM picks up spurious features under data set imbalance. They consider a setting involve a single spurious feature that is correlated with the class label and analyze the max-margin classifier in the presence of this spurious feature. In our work, we demonstrate that the ERM model’s predictions can be leveraged to not only estimate groups and train a new model with supervised learning but with different weightings. Instead, we can specifically identify pairs of points that a contrastive model can then learn invariant features between. Our core contribution comes from rethinking the objective with a contrastive loss that more directly reduces the model’s ability to learning spurious correlations. D.2 CONTRASTIVE LEARNING Our method also uses contrastive learning, a simple yet powerful framework for both self-supervised (Chen et al., 2020; Oord et al., 2018; Tian et al., 2019; Song & Ermon, 2020; Sermanet et al., 2018; Hassani & Khasahmadi, 2020; Robinson et al., 2021) and supervised (Khosla et al., 2020; Gunel et al., 2021) representation learning. The core idea is to learn data representations that maximize the similarity between a given input “anchor” and distinct different views of the same input (“positives”). Frequently this also involves contrasting positives with “negative” data samples without any assumed relation to the anchor (Bachman et al., 2019). Core components then include some way to source multiple views, e.g. 
with data transformations (Chen et al., 2020), and training objectives similar to noise contrastive estimation (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013). An important component of contrastive learning is the method by which appropriate positives and negatives are gathered. For sampling positives, Chen et al. (2020) show that certain data augmentations (e.g. crops and cutouts) may be more beneficial than others (e.g. Gaussian noise and Sobel filtering) when generating anchors and positives for unsupervised contrastive learning. von Kügelgen et al. (2021) theoretically study how data augmentations help contrastive models learn core content attributes which are invariant to different observed “style changes”. They propose a latent variable model for self-supervised learning. Tian et al. (2020) further study what makes good views for contrastive learning. They propose an “InfoMin principle”, where anchors and positives should share the least information necessary for the contrastive model to do well on the downstream task. For sampling negatives, Robinson et al. (2021) show that contrastive learning also benefits from using “hard” negatives, which (1) are actually a different class from the anchor (which they approximate in the unsupervised setting) and (2) embed closest to the anchor under the encoder’s current data representation. Both of these approaches capture the principle that if positives are always too similar to the anchor and negatives are always too different, then contrastive learning may be inefficient at learning generalizable representations of the underlying classes. In our work, we incorporate this principle by sampling data points with the same class label but different ERM predictions–presumably because of spurious attribute differences–as anchor and positive views, while sampling negatives from data points with different class labels but the same ERM prediction as the anchor. The anchors and positives are different enough that a trained ERM model predicted them differently, while the anchors and negatives are similar enough that the trained ERM model predicted them the same. Contrasting the above then allows us to exploit both “hard” positive and negative criteria for our downstream classification task. In Appendix A.3, we show that removing this ERM-guided sampling (i.e. only sampling positives and negatives based on class information), as well as trying different negative sampling procedures, leads to substantially lower worst-group accuracy with CNC. One limitation of our current theoretical analysis regarding the alignment loss (cf. Section 3.2) is that we require knowing the group labels to compute the RHS of equation (6) (in particular, the alignment loss). An interesting question for future work is to provide a better theoretical understanding of the alignment induced by CNC in the context of spurious correlations. D.3 LEARNING INVARIANT REPRESENTATIONS Our work is also similar in motivation to Invariant Risk Minimization (IRM) (Arjovsky et al., 2019), Predictive Group Invariance (PGI) (Ahmed et al., 2021), and other related works in domain-invariant learning (Krueger et al., 2020; Parascandolo et al., 2020; Ahuja et al., 2020; Creager et al., 2021). These methods aim to train models that learn a single invariant representation that is consistently optimal (e.g. with respect to classifying data) across different domains or environments. 
These environments can be thought of as data groups, and while traditionally methods such as IRM require that environment labels are known, recent approaches such as Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021) and Predictive Group Invariance (PGI) (Ahmed et al., 2021) similarly aim to infer environments with an initial ERM model. EIIL then trains a more robust model with an invariant learning objective, similarly selecting models based on the worst-group error on the validation set. However, it trains this model using IRM or Group DRO with the inferred environments as group labels.
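As noted at the end of D.2, computing the empirical alignment loss requires group labels. For reference, here is a minimal NumPy sketch of the class-level alignment loss in Eq. (5), built from the pairwise group term in Eq. (4). This is our own illustrative code under that group-label assumption; the function name is ours.

```python
import numpy as np

def alignment_loss(z, y, a, y0):
    """Empirical cross-group alignment loss for class y0 (Eq. 5): the largest
    average pairwise distance between encoder outputs of any two groups that
    share class y0 but differ in the spurious attribute.

    z : (n, d) encoder outputs f_enc(x)
    y : (n,) class labels
    a : (n,) spurious attributes (assumed known here)
    """
    z, y, a = np.asarray(z), np.asarray(y), np.asarray(a)
    attrs = np.unique(a[y == y0])
    worst = 0.0
    for i, a1 in enumerate(attrs):
        for a2 in attrs[i + 1:]:
            Z1 = z[(y == y0) & (a == a1)]   # group g  = (y0, a1)
            Z2 = z[(y == y0) & (a == a2)]   # group g' = (y0, a2)
            # Average Euclidean distance over all cross-group pairs (Eq. 4).
            d = np.linalg.norm(Z1[:, None, :] - Z2[None, :, :], axis=-1).mean()
            worst = max(worst, d)
    return worst
```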
1. What is the main contribution of the paper regarding training classifiers with strong worst-case performance across groups?
2. What are the strengths of the proposed method, particularly its ease of implementation and effectiveness?
3. How does the reviewer assess the novelty of the method compared to prior work?
4. What are the weaknesses or limitations of the paper, such as the lack of comparison with alternative approaches?
5. Are there any questions or concerns regarding the method's ability to handle spurious attributes or incorporate them into the positive and negative sampling methods?
Summary Of The Paper Review
Summary Of The Paper
This paper considers the goal of training classifiers that achieve strong worst-case performance across groups with different spurious features, without assuming access to supervision based on the spurious features. The paper proposes a two-step method, which first trains a model with ERM, then re-trains the model using a supervised contrastive approach where positive and negative samples are selected on the basis of misclassification characteristics of the ERM model. The method addresses an important problem, and is well motivated & evaluated. In all this is a nice piece of work and I am cautiously happy to recommend its acceptance at this point. I do, however, have a couple of questions I would like to see some answers to (see below). Depending on the answers to these questions, and discussion with other reviewers, I am happy to consider raising my score.

Review
This paper takes a representation-focused perspective on avoiding learning spurious correlations. The main motivation (well backed by experiments on toy datasets) is this: there is a positive correlation between within-class alignment loss and worst-group error. On its own this observation isn't hugely surprising, and arguably overlaps with observations in prior work. The novelty of this work comes from using this observation as the inspiration for CnC, a supervised contrastive approach to re-training the ERM model to reduce the within-class alignment loss.

Some positives:
- The paper is well written.
- The method is easy to implement, intuitive, and seems to work pretty well.
- In section 3 the combination of empirical observation and theoretical bound makes the conclusion quite convincing.
- The method specifically samples hard positive/negative samples. This is potentially an even more important point on the novelty of the method than is currently emphasized [see the second main question below, about SupCon as a baseline].
- The logic and ideas in this paper are very linear, making it easy to quickly grasp the main takeaways.
- The worst-group performance seems to decay slightly better than JTT as the level of spurious correlation increases (Fig. 7).

(For weaknesses, see the questions below.)

Questions: I have two major questions that I am hoping to see answers to:

First: The motivation for CnC is centered on the class-conditional alignment loss. There is even a bound on the worst-group loss in terms of the average-group and class-conditional alignment loss. So why not replace step 2, and instead fine-tune the model using L_avg_group + L_alignment? Or even just train models from scratch with this loss. It would be good to compare to these. If CnC is simply more empirically successful than this alternative, then it would be good to see this.

Second: In a related vein to the previous question, how much is CnC buying us as compared to the usual supervised contrastive training? It would be good to see SupCon as a baseline in Table 1. This seems an important baseline, since the main idea of CnC is to pull items from the same class together in feature space, which is also done using SupCon. The main (even only?) difference is the hard positive/negative sampling approach of CnC.

Miscellaneous comments and questions: These are just a few things I was curious about. I am not per se asking for any response from the authors, but offer them up in the spirit of constructive feedback: What if you iterate your method? That is, CnC samples positives and negatives according to the fixed ERM model from step 1.
What if you repeat step 2 again using the new and improved model obtained from the first step 2 run? Maybe it would just immediately saturate in performance, but I am curious. Is there a way to incorporate spurious attribute information into the positive and negative sampling methods if it were available? Do you have a rationale as to why CnC did worse than JTT for CivilComments? I'm not bothered at all by this result, since I would never ask for across-the-board improved empirical results. But I am curious as to whether any lessons can be learned about the relative strengths and weaknesses of the two methods. I notice that CivilComments has more (8) spurious feature values than the other datasets - could this be related? Perhaps consider using more divergent colors in Fig 6. The different shades appear fine on my computer screen, but are hard to distinguish on a printout (maybe my printer is just bad…).
ICLR
Title Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations Abstract Spurious correlations pose a fundamental challenge for building robust machine learning models. For example, models trained with empirical risk minimization (ERM) may depend on correlations between class labels and spurious features to classify data, even if these relations only hold for certain data groups. This can result in poor performance on other groups that do not exhibit such relations. When group information is available during training, Sagawa et al. (2019) have shown how to improve worst-group performance by optimizing the worst-group loss (GDRO). However, when group information is unavailable, improving worst-group performance is more challenging. For this latter setting, we propose Correct-NContrast (CNC), a contrastive learning method to train models more robust to spurious correlations. Our motivating observation is that worst-group performance is related to a representation alignment loss, which measures the distance in feature space between different groups within each class. We prove that the gap between worst-group and average loss for each class is upper bounded by the alignment loss for that class. Thus, CNC aims to improve representation alignment via contrastive learning. First, CNC uses an ERM model to infer the group information. Second, with a careful sampling scheme, CNC trains a contrastive model to encourage similar representations for groups in the same class. We show that CNC significantly improves worst-group accuracy over existing state-of-the-art methods on popular benchmarks, e.g., achieving 7.7% absolute lift in worst-group accuracy on the CelebA data set, and performs almost as well as GDRO trained with group labels. CNC also learns better-aligned representations between different groups in each class, reducing the alignment loss substantially compared to prior methods. 1 INTRODUCTION For many tasks, deep neural networks are negatively affected by spurious correlations—dependencies between observed features and class labels that only hold for certain groups of the data. For example, consider classifying images of cows or camels, where 90% of cow images depict grassy backgrounds. A model may learn to predict the “cow” class based on the background, and incorrectly classify cow images with non-grass backgrounds as camels (Ribeiro et al., 2016; Beery et al., 2018; Kaufman et al., 2012). This illustrates a widespread issue where neural networks can achieve low test error on certain groups, yet high error on others (Blodgett et al., 2016; Buolamwini & Gebru, 2018; Hashimoto et al., 2018; Sagawa et al., 2019). Prior works have shown that this problem is increasingly aggravated as the correlations between class labels and spurious features become stronger (Sagawa et al., 2020) and easier to learn (Arpit et al., 2017; Hermann & Lampinen, 2020). Since spurious correlations arise in many settings, we wish to design robust methods that perform well on all groups. How can we obtain neural networks robust to spurious correlations? If group-defining information (i.e. spurious attributes) is known, a common solution is to minimize the worst-group loss, e.g., with group DRO (GDRO) (Sagawa et al., 2019). However, such information may be expensive to collect, and we may not know the spurious attributes a priori in a given data set (Oakden-Rayner et al., 2020). When group information is unavailable, prior works typically take a two-stage approach. 
They first train an ERM model, and then use this model to infer groups and train a more robust model. For example, Sohoni et al. (2020) find that ERM models still learn group-specific features when trained to predict class labels. After first training an ERM model, they infer groups by clustering the ERM model's representations, and train a new model with GDRO using these inferred groups.

[Figure: On the Waterbirds groups (landbird/waterbird × land/water background), GradCAM visualizations and per-group predictions for ERM vs. our method, and an illustration of sampling contrastive batches from the ERM model's correct and incorrect predictions.]

Creager et al. (2021) identify groups under which an initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective (Arjovsky et al., 2019). With these groups they train a new model with GDRO or IRM. Nam et al. (2020) and Liu et al. (2021) observe that ERM models often misclassify data points in minority groups, and thus train another model with re-weighted or upsampled points misclassified by an initial ERM model. While these methods promisingly leverage ERM-learned biases to significantly improve worst-group error without training group labels, there is still a gap between their robust performance and that of methods such as GDRO that use group labels.

In this work, we ask how else we can improve model robustness using a trained ERM model, and aim to close this gap by focusing on improving the learned representations of the robust model in the second stage. We support this direction with two key motivations. First, we find that higher worst-group performance consistently correlates with hidden-layer representations exhibiting higher dependence on class labels than spurious attributes. We quantify this correlation using geometric representation alignment (Wang & Isola, 2020), which measures the closeness of samples with the same class but different spurious attributes in the model feature space, and mutual information. This relation consistently holds across various data sets, and explains when prior upweighting methods improve worst-group error over ERM (Fig. 4). Second, we theoretically show that a model's representation alignment for a given class can be used to upper bound the gap between its worst-group and average loss for that class. Thus, if we can improve representation alignment for a class, we can reduce the gap between worst-group and average loss for that class.

We thus propose Correct-N-Contrast (CNC), a two-stage procedure using contrastive learning to encourage better representation alignment within each class. In the first stage, we train a regularized ERM model similar to prior work (Liu et al., 2021; Creager et al., 2021), under the premise that ERM predictions help infer group information (i.e., spurious attributes). In the second stage, we wish to improve representation alignment by "pulling together" same-class datapoints and "pushing apart" different-class datapoints, regardless of their individual groups or spurious features. To do so via supervised contrastive learning, we use the heuristic that samples with the same ERM predictions exhibit similar spurious features (and vice versa).
With a randomly sampled anchor, we select samples with the same class but different ERM predictions as “positives” we want to pull together, and samples from different classes but the same ERM prediction as hard “negatives” we want to push apart. Training a second model with this sampling scheme and supervised contrastive learning encourages this model to ignore spurious correlations that the initial ERM model learned, and improves representation alignment between same-class data points. Thus, CNC corrects for the ERM model’s mistakes with contrastive learning in the second model. We evaluate CNC on four popular and diverse spurious correlation benchmarks. Among methods that similarly do not assume training group labels, CNC substantially improves worst-group accuracy, obtaining up to 7.7% absolute lift (from 81.1% to 88.8% on CelebA) over the prior state-of-the-art JTT (Liu et al., 2021), and averaging 3.4% lift across the four tasks. We also find that CNC nearly closes the gap in worst-group accuracy with robust training methods that assume training group labels, only falling short of GDRO’s worst-group accuracy by 0.8% absolute. Finally, we validate that CNC indeed reduces the alignment loss compared to prior methods. This corresponds to an up to 71.1% smaller gap between worst-group versus average accuracy for data points in the same class. Contributions. We summarize our contributions as follows: 1. We empirically show that a model’s worst-group performance correlates with the model’s alignment loss between different groups within a class, and analyze this connection theoretically. 2. We propose CNC, a two-stage contrastive approach to improve representation alignment and thereby learn representations robust to spurious correlations. 3. We validate that CNC significantly improves worst-group accuracy over existing methods on various benchmarks, and learns better-aligned representations less reliant on spurious features. 2 PRELIMINARIES Problem setup. We present our setting and the loss objectives following Sagawa et al. (2019). Let X = {x1, . . . , xn} and Y = {y1, . . . , yn} be a training data set of size n. Each data point has an observed feature vector xi ∈ X , label yi ∈ Y , and unobserved spurious attribute ai ∈ A. The set of groups G is defined as the set of all combinations of class label and spurious attribute pairs, i.e. G = Y ×A. Let C = |Y| be the number of classes and K = |G| be the number of groups. Following the classical supervised learning setting, we assume that each example (xi, yi, ai) is drawn from an unknown joint distribution P . We assume that at least one sample from each group is observed in the training data. Let Pg be the distribution conditioning on (y, a) = g, for any g ∈ G. Given a model fθ : X 7→ RC and a convex loss ` : X × Y 7→ R, let the worst-group loss be: Lwg(fθ) := max g∈G E(x,y,a)∼Pg [`(fθ(x), y)]. (1) ERM minimizes the training loss as a surrogate for the expected population loss Lavg: Lavg(fθ) := E(x,y,a)∼P [`(fθ(x), y)] (2) While ERM is the standard way to train neural nets, spurious correlations often cause ERM to obtain high error on minority groups even when average error is low. Group DRO, which minimizes the empirical version of (1), is recognized as a strong baseline for improving worst-group error when the group labels {a1, . . . , an} are available during training (Sagawa et al., 2019). In contrast, we focus on the more challenging setting in which the group labels are not available during training. Contrastive learning. 
We briefly describe contrastive learning (Chen et al., 2020), a central component of our approach. Let fθ be a neural network model with parameters θ. Let the encoder fenc : X 7→ Rd be the feature representation layers of fθ. Let fcls : Rd 7→ RC be the classification layer of fθ, which maps encoder representations to one-hot label vectors. We learn fenc with the supervised contrastive loss Lsupcon proposed in Khosla et al. (2020). For each anchor x, we sample M positives {x+i }Mi=1 and N negatives {x − i }Ni=1. Let y, {y + i }Mi=1, {y − i }Ni=1 be the labels and z, {z+i }Mi=1, {z − i }Ni=1 be the normalized outputs of fenc(x) for the anchor, positives, and negatives respectively. With input x mapped to z, the training objective for the encoder is to minimize: Lsupcon(x; fenc) = E x,{x+i }Mi=1,{x − i }Nj=1 [ − log exp(z >z+i /τ)∑M m=1 exp(z >z+m/τ) + ∑N n=1 exp(z >z−n /τ) ] (3) where τ > 0 is a scalar temperature hyperparameter. Minimizing Eq. 3 leads to z being closer to z+ than z− in feature space. See Sec. 6 for further references related to contrastive learning. 3 MOTIVATIONS FOR REPRESENTATION ALIGNMENT To motivate our method, we present our core observation that a model’s worst-group accuracy correlates with how well its learned representations depends on the class labels, but not the spurious attributes. First, we empirically observe that ERM learns spurious correlations by inspecting their hidden layer representations on several spuriously correlated data sets. We find that ERM’s worstgroup performance is inversely related to a cross-group alignment loss (cf. Eq. (4) below) and mutual information metrics. Second, we theoretically prove that this alignment loss serves as an upper bound on the gap between the average-group loss and the worst-group loss (cf. Theorem 3.1). 3.1 RELATING WORST-GROUP PERFORMANCE TO REPRESENTATION ALIGNMENT We first show that when neural networks are trained with standard ERM on spuriously correlated data, their hidden layer representations exhibit high dependence on the spurious attribute. We quantify this behavior using representation alignment (cf. Eq. (4) below) and mutual information metrics. We observe that these metrics explain trends in ERM’s worst-group accuracy on various spuriously correlated data sets. This relationship is also consistent and applies to upsampling methods (JTT) that mitigate the impact of spurious features (Liu et al., 2021). We model spurious correlations with CMNIST∗, a colored MNIST data set inspired by Arjovsky et al. (2019). There are 5 digit classes and 5 colors. We color a fraction pcorr of the training samples with a color a associated with each class y, and color the test samples uniform-randomly. To analyze learned representations, we train a LeNet-5 CNN (LeCun et al., 1989) with ERM to predict digit classes, and inspect the outputs of the last hidden layer z = fenc(x). As shown in Fig. 2, with low pcorr, models learn representations with high dependence on the actual digit classes. However, with high pcorr we learn z highly dependent on a, despite only training to predict y. Representation metrics. To quantify this behavior, we use two metrics designed to capture how well the learned representations exhibit dependence on the class label vs. the spurious attributes. First, we compute an alignment loss L̂align(fenc; g, g′) between two groups g = (y, a) and g′ = (y, a′) where a 6= a′. This measures how well fenc maps samples with the same class, but different spurious attributes, to nearby vectors via Euclidean distance. 
Letting G and G′ be the subsets of training data in groups g and g′ respectively, and x and x′ be any two samples in G and G′, we define: L̂align(fenc; g, g′) := 1 |G| 1 |G′| ∑ (x,y,a)∈G ∑ (x′,y,a′)∈G′ ‖fenc(x)− fenc(x′)‖2. (4) Thus, lower L̂align means better alignment. We also quantify representation dependence by estimating the mutual information (MI) of a model’s learned representations with the class label, i.e. Î(Y ;Z) and the spurious attributes Î(A;Z). We defer computational details to Appendix E. (b) (e) (f) (g) (h) Results for ERM. In Fig. 3 we show a strong association between worst-group error and both alignment and mutual information metrics. As pcorr increases, ERM models not only drop in worst-group accuracy, but also incur higher alignment loss (Fig. 3ab). Fig. 3c further illustrates this with mutual information. We plot the estimated mutual information and worst-group accuracy for models at each epoch. A substantial drop in worst-group accuracy occurs with high Î(A;Z) (especially when Î(A;Z) > Î(Y ;Z), even with high Î(Y ;Z)). Fig. 3d also captures this trend with a trade off between high Î(Y ;Z) with Î(A;Z) as pcorr increases (Fig. 3a). 4 Results for JTT. In Fig. 4, we also show that this relation holds when training with another recent (upsampling) approach, JTT (Liu et al., 2021). With high pcorr, models now achieve higher worstgroup accuracy, and this corresponds to learning representations with high class label and low spurious attribute dependence. We note however that previous approaches do not explicitly optimize for these representation metrics, suggesting a new direction to improve worst-group performance. 3.2 RELATING ALIGNMENT LOSS TO WORST-GROUP LOSS The empirical observations in Fig. 3 suggest that lower alignment loss correlates with lower worstgroup error. Next, we show that this connection applies much more generally. We show that the maximum of L̂align(fenc; g, g′), over any two groups g, g′ within the same class, can be used to upper bound the gap between the worst-group loss and average loss for that class. We set up several notations before stating the result. For any class label y ∈ Y , let Gy be the set of groups with label y in G. Let Lwg(fθ; y) be the worst-group loss among groups in Gy: Lwg(fθ; y) := max g∈Gy E (x,ỹ,a)∼Pg [`(fθ(x), ỹ)] . Let Lavg(fθ; y) be the average loss among groups in Gy: Lavg(fθ; y) := E (x,ỹ,a)∼P :∀a∈A [`(fθ(x), ỹ)] . Additionally, we define a class-specific alignment loss L̂align(fenc; y) among groups in Gy . Recall that fθ involves an encoding function fenc and a linear classification layer fcls. We define L̂align(fenc; y) as the largest cross-group alignment loss among groups in Gy: L̂align(fθ; y) := max g∈Gy,g′∈Gy : g 6=g′ L̂align(fenc; g, g′). (5) where L̂align(fenc; g, g′) is the alignment loss between g and g′ defined in Eq. (4). Our main result is that L̂align(fθ; y) is an upper bound on the gap between Lwg(fθ; y) and Lavg(fθ; y) (up to a norm multiplier and a concentration error), for any y ∈ Y . Theorem 3.1 (Alignment loss upper bounds the gap between worst-group and average-group loss). In the setting described above, let fθ be any neural network satisfying that the weight matrix of the linear classification layer W in fcls satisfies that ‖W‖2 ≤ B, for some constant B. Let ng be the size of any group g ∈ G in the training data set. Assume that the loss function `(x, y) is C1-Lipschitz in x and bounded from above by C2, for some positive constants C1, C2. 
Then, with probability at least 1− δ over the randomness of the training data set samples, for any class y ∈ Y , the following holds: Lwg(fθ; y) ≤ Lavg(fθ; y) +B · C1 · L̂align(fθ; y) + max g∈Gy C2 √ 8 log(|Gy|/δ) ng . (6) The proof of Theorem 3.1 is deferred to Sec. B. Since we also know that Lavg(fθ; y) ≤ Lwg(fθ; y), the above result implies that in order to reduce the gap between the worst-group loss and the average loss for class y, it suffices to reduce the alignment loss L̂align(fθ; y). Broader algorithmic implications. We summarize Section 3 with two takeaways: (1) When trained on spuriously correlated data sets, ERM networks learn data representations highly dependent on spurious attributes. Clusters of these representations (Sohoni et al., 2020) or the ERM model’s outputs (Liu et al., 2021; Nam et al., 2020) can thus serve as (noisy) pseudolabels for spurious attributes. (2) Both representation metrics correlate with worst-group error, such that a viable way to improve worst-group performance is to improve representation alignment within each class. 4 CORRECT-N-CONTRAST (CNC) We now present CNC, a two-stage method to improve worst-group performance and robustness to spurious correlations, without requiring training group labels. Similar to prior works (Sohoni et al., 2020; Liu et al., 2021), our first stage trains an ERM model (with proper regularization1) on the training set, ultimately to infer group labels based on samples’ spurious attributes. 1As we train on the same data set we infer the groups on, regularization (via high weight decay or early stopping) is purely to prevent the ERM model from memorizing the class labels. This is standard practice also discussed in Sohoni et al. (2020); Liu et al. (2021). We show in Sec. 5.3 that we do not require the ERM model to perfectly learn the spurious attributes for CNC to substantially improve robustness in practice. Algorithm 1 Correct-N-Contrast (CNC) Input: Training data set (X,Y ); # positives M ; # negatives N ; learning rate η, # epochs K. Stage 1: ERM Training 1: Train a regularized ERM model fθ̂ on (X,Y ); save the predictions ŷi := fθ̂(xi). Stage 2: Supervised contrastive learning 2: for each epoch 1, . . . ,K do 3: for each anchor (x, y) ∈ (X,Y ) do 4: Let ŷ be the predicted (group) label of x from Stage 1’s ERM model. 5: Get M positives {(x+m, y+m)} where y+m = y but ŷ+m 6= ŷ, for m = 1, . . . ,M . 6: Get N negatives {(x−q , y−q )} where y−q 6= y but ŷ−q = ŷ, for q = 1, . . . , N . 7: Update fθ by θ ← θ − η · ∇L̂(fθ;x, y) (cf. Eq. (7)) with anchor, M positives, and N negatives. return final model fθ from Stage 2, and throw away the ERM model from Stage 1. The key difference is our second stage: we aim to train a more robust model by learning representations such that samples in the same class but different groups are close to each other. We use contrastive learning, as intuitively by treating samples with the same class but different spurious attributes as distinct “views” of the same class, we train the second stage model to “pull together” these samples’ representations and ignore the different spurious features. This is also inspired by Wang & Isola (2020); Robinson et al. (2021), who show that minimizing the contrastive loss improves representation alignment between distinct “views”. Later in Sec. 5.1, we verify that CNC indeed reduces L̂align(fθ; y) substantially. We include further details on both stages below, and summarize CNC in Algorithm 1. Stage 1: ERM training. 
We train an initial model fθ̂ on the training data set {(xi, yi)} n i=1 with ERM and regularization, and save its predictions {ŷi}ni=1 on the training data points. We consider two ways to source predictions: using the ERM model’s outputs, and clustering its last hidden-layer representations. Both approaches aim to accomplish the same goal of exploiting the ERM model’s learned spurious correlations; further details are in Appendix E.2. Stage 2: Contrastive learning (CL). Next, we train a robust model with supervised contrastive learning using the ERM predictions. While CNC is inspired by recent CL works (Chen et al., 2020; Khosla et al., 2020), we introduce new “contrastive batch” sampling and optimization objectives. Contrastive batch sampling. As described in Sec. 2, contrastive learning requires sampling anchors, positives, and negatives with the general form {x}, {x+}, {x−}. Here, we wish to sample points such that by maximizing the similarity between anchors and positives (and keeping anchors and negatives apart), the Stage 2 model “ignores” spurious similarities while learning class-consistent dependencies. With prediction set {ŷi}ni=1, for each batch we randomly sample an anchor xi ∈ X (with label yi and ERM prediction ŷi), M positives with the same class as yi but a different ERM model prediction than ŷi, and N negatives with different classes as yi but the same ERM model prediction as ŷi. For more signal per batch, we double pairwise comparisons by switching anchor and positive roles. Optimization objective and updating procedure. While our core objective is to learn aligned representations via contrastive learning, we also wish to train the full model to classify datapoints correctly. As we have the training class labels, we jointly update both the model’s encoder layers fenc with a standard contrastive loss, and the full model fθ with a cross-entropy loss: L̂(fθ;x, y) = λL̂supcon(fenc;x, y) + (1− λ)L̂cross(fθ;x, y). (7) In the above, L̂supcon(fenc;x, y) is the supervised contrastive loss of x along with its positive and negative samples, similar to Eq. (3) (see Eq. (16) in Sec. C.2 for the full equation); L̂cross(fθ;x, y) is averaged cross-entropy loss over x, the M positives, and the N negatives; λ ∈ [0, 1] is a balancing hyperparameter. As a remark, the loss objective (7) uses a single anchor in each batch in our setting. To calculate the loss, we first forward propagate one batch ( xi, {x+m}Mm=1, {x−q }Nq=1 ) through fenc and normalize them to obtain representation vectors ( zi, {z+m}Mm=1, {z−q }Nq=1 ) . To learn closely aligned zi and z+ for all {z+m}Mm=1, we update fenc with the L̂ sup out (x; fenc) loss. Finally, we also pass the unnormalized outputs of the encoder fenc to the classifier layers fcls, and compute a batch-wise cross-entropy loss L̂cross(fθ) using each batch sample’s class labels and fθ’s outputs. Due to space constraints, we include further implementation details and sampling considerations in Appendix C. 5 EXPERIMENTAL RESULTS We conduct experiments to answer the following questions: (1) Does CNC improve worst-group performance over prior state-of-the-art methods on data sets with spurious correlations? (2) Does CNC actually encourage learning hidden layer representations with greater alignment and class-labelonly dependence? How is this impacted by the strength of a spurious correlation in the data? (3) Does CNC require perfectly predicting the spurious attribute to work well in practice? 
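Before turning to results, the sketch below spells out one batch of the Stage 2 joint objective in Eq. (7) in PyTorch-style Python. It is a minimal illustration under our own naming, not the released implementation: `cnc_batch_loss`, the hyperparameter values, and the single-anchor, one-sided form (the two-sided version adds the swapped term of Eq. (16)) are our simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def cnc_batch_loss(f_enc, f_cls, x_anchor, x_pos, x_neg, y_anchor, y_pos, y_neg,
                   lam=0.75, tau=0.1):
    """One-sided version of the Eq. (7) objective for a single contrastive batch:
    lam * supervised contrastive loss on normalized encoder outputs
    + (1 - lam) * cross-entropy over the anchor, positives, and negatives.
    (lam and tau are placeholder values, not the paper's tuned settings.)"""
    # Unnormalized encoder outputs for the whole batch (anchor first).
    h = f_enc(torch.cat([x_anchor, x_pos, x_neg], dim=0))
    M, N = x_pos.shape[0], x_neg.shape[0]

    # Supervised contrastive term (Eq. 3) on normalized representations.
    z = F.normalize(h, dim=1)
    z_a, z_p, z_n = z[:1], z[1:1 + M], z[1 + M:]
    pos_sims = torch.exp(z_a @ z_p.t() / tau)   # (1, M)
    neg_sims = torch.exp(z_a @ z_n.t() / tau)   # (1, N)
    denom = pos_sims.sum() + neg_sims.sum()
    l_supcon = -(torch.log(pos_sims / denom)).mean()

    # Cross-entropy term on the unnormalized outputs, through the classifier head.
    logits = f_cls(h)
    labels = torch.cat([y_anchor, y_pos, y_neg], dim=0)
    l_cross = F.cross_entropy(logits, labels)

    return lam * l_supcon + (1 - lam) * l_cross
```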
Our results for each question follows in the next three subsections (5.1, 5.2, and 5.3). Due to space constraints, we defer ablations on CNC’s design choices, including the representation-learning objective and sampling procedure, to Appendix A. Additional comparison to alignment methods proposed for domain adaptation but adjusted for our setting are in Appendix A.2. Below, we briefly describe the benchmark data sets used in this section. We run CMNIST∗ with pcorr = 0.995. Further details on data sets, models, and experimental hyperparameters are deferred to Appendix E. Waterbirds (Sagawa et al., 2019): We classify Y = {waterbird, landbird}, where 95% of images have the same bird type and background A = {water background, land background}. CelebA (Liu et al., 2015): We classify celebrities’ hair colorY = {blond, not blond}withA = {male, female}. Only 6% of blond celebrities in the data set are male. CivilComments-WILDS (Borkan et al., 2019; Koh et al., 2021): We classify Y = {toxic, not toxic} comments. A denotes whether the comment mentions one of eight demographic identities. 5.1 CNC IMPROVES WORST-GROUP PERFORMANCE To study (1), we evaluate CNC on image classification and NLP data sets with spurious correlations. As baselines, we compare against standard ERM and an oracle GDRO approach that assumes access to the group labels. We also compare against recent methods that tackle spurious correlations without requiring group labels: CVaR DRO (Levy et al., 2020), GEORGE (Sohoni et al., 2020), Learning from Failure (LfF) (Nam et al., 2020), Predictive Group Invariance (PGI) (Ahmed et al., 2021), Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021), Contrastive Input Morphing (CIM) (Taghanaki et al., 2021), and Just Train Twice (JTT) (Liu et al., 2021). We also compare against a CNC version without the Stage 1 ERM model, instead only sampling positives and negatives based on class (denoting this SupCon*). Results are reported in Table 1. CNC achieves highest worst-group accuracy among all methods without training group labels on the CMNIST∗ Waterbirds and CelebA data sets, while also obtaining near-SoTA worst-group accuracy on CivilComments. While LfF, GEORGE, PGI, EIIL, and JTT similarly use a trained ERM model to estimate groups, CNC uniquely uses ERM predictions to encourage the robust model to learn desirable representations via contrastive learning. We reason that with this approach, by sampling positives and negatives from the ERM predictions, CNC more directly encourages the robust model to ignore learnable spurious correlations compared to previous invariant learning, input transformation, or upweighting approaches. We include additional evidence of this via GradCAM visualizations in Appendix G. 5.2 CNC LEARNS REPRESENTATIONS LESS RELIANT ON SPURIOUS FEATURES To shed light on CNC’s worst-group accuracy gains, we investigate if models trained with CNC actually learn representations with higher alignment. Compared to ERM and JTT (the next-best performing method that does not require subgroup labels), CNC learns representations with significantly higher alignment (lower alignment loss) and lower mutual information with spurious attributes (while having comparable mutual information with class labels) (Fig. 5 and Fig. 7). We find that CNC representations exhibit the lowest alignment loss consistently for these data sets; this also corresponds to CNC models achieving the highest worst-group accuracy. 
Furthermore, while all methods result in representations that exhibit high mutual information with the class label (Fig. 5b), only CNC results in representations that drastically reduce mutual information with spurious attributes (Fig. 5c). In Fig. 6, we also illustrate this result on the Waterbirds data set via UMAP visualizations of the learned representations. Notably, all training methods result in representations separable by class label. Yet ERM models exhibit strong separability by spurious attributes, and JTT models interestingly also still depict some learned dependency on the spurious attribute. However, CNC uniquely learns representations that strongly depict class-label-only dependence. In addition, to study how this relation between representation metrics and worst-group accuracy scales with the strength of the spurious correlation, we compute representation metrics with CNC, ERM, and JTT models trained on increasingly spurious (↑ pcorr) CMNIST∗ data sets in Fig. 7. We observe that with high spurious correlations, ERM fails to classify digits in the minority classes, while CNC and JTT comparably maintain high worst-group accuracy. CNC also performs better in more spurious settings (pcorr > 0.95). These improvements over ERM are reflected by drops in alignment loss (averaged over classes); CNC consistently achieves lowest such loss. Fig. 7c shows that CNC’s learned representations maintain a more favorable balance of mutual information between the class label and spurious attribute than JTT. While JTT models exhibit slightly higher estimated I(Y ;Z) than CNC models, CNC models exhibit much lower dependence on the spurious attribute. 5.3 UNDERSTANDING CNC’S SENSITIVITY TO STAGE 1 PREDICTIONS Finally, we study how sensitive CNC is to how closely the Stage 1 ERM model actually predicts the spurious attribute. As JTT also relies on an initial ERM model’s predictions, we compare CNC to JTT in this regard. We find that CNC is more robust to noisy ERM predictions than JTT, and that CNC does not require perfectly inferred groups to perform well. We first conduct an ablation on CNC and JTT’s worst-group and average performance in Fig. 7d with the following synthetic experiment. On CMNIST∗, we start with the true spurious attribute labels as the Stage 1 “predictions". We then gradually degrade their quality as follows: for each point, with 6 RELATED WORK We build on prior work in group robustness and contrastive learning. Further discussion is in App. D. Robustness to group shift. A variety of approaches aim to improve performance on minority data groups. If group labels are known, many works minimize a rebalanced error similar in motivation to correcting class imbalance (He & Garcia, 2009; Cui et al., 2019) or importance weighting (Shimodaira, 2000; Byrd & Lipton, 2019). More recently, Sagawa et al. (2019) minimize worst-group loss during training. Goel et al. (2020) achieve further lift by synthetically generating additional minority group points. Cao et al. (2019) regularize updates on minority groups to improve their generalization. Another line of work aims to improve group robustness without assuming group labels for the training data. The most similar methods to CNC first train an initial ERM model with class labels as a way to infer groups, and then use these groups to train a second model with better worst-group performance. GEORGE (Sohoni et al., 2020) clusters ERM representations, and runs GDRO with these clusters as inferred groups. 
EIIL (Creager et al., 2021) and PGI (Ahmed et al., 2021) infer groups that maximally violate an invariance objective for the ERM model. With these groups EIIL uses either GDRO or Invariant Risk Minimization (Arjovsky et al., 2019) to train a second robust model, while PGI minimizes the KL divergence of the softmaxed logits for samples in the same class but different groups. LfF (Nam et al., 2020) use a generalized cross-entropy loss to encourage misclassifying minority groups, concurrently training a second model with these datapoints upweighted. JTT (Liu et al., 2021) trains via ERM for a few epochs, before training a second ERM model with incorrect datapoints upsampled. For image data sets, CIM (Taghanaki et al., 2021) trains a transformation network to remove potentially spurious attributes from input features. Contrastive learning (CL). CL works by predicting whether two inputs are “similar” or “dissimilar” (Le-Khac et al., 2020). This involves specifying batches of anchor and positive datapoints similar to each other (as different “views” of the same source or input), and negatives depicting dissimilar points. An encoder is trained to simultaneously maximize the similarity between the feature representations of anchors and positives, and minimize similarity between anchor and negative representations. In unsupervised CL, “negatives” are often sampled uniformly (Bachman et al., 2019), while “positives” are different views of the same object, e.g. via data augmentation (Chen et al., 2020). In supervised CL, negatives are different-class points and positives are same-class points (Khosla et al., 2020). In CNC, we instead treat same-class points with different ERM predictions as positives, and differentclass points with the same ERM prediction as negatives. This naturally provides “hard negative mining,” a challenge for standard CL (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). 7 CONCLUSION We present CNC, a two-stage CL approach to learn representations robust to spurious correlations. We theoretically analyze the connection between alignment and worst-group vs. average-group losses, and show that CNC achieves SOTA or near-SOTA worst-group accuracy across several benchmarks. ETHICS STATEMENT We hope that our work is another step towards the important goal of making machine learning models more fair and robust. However, while our work successfully improves worst-group accuracy, this is not necessarily an end-all be-all metric - other fairness-based metrics may be more suitable in certain settings. Also, misuse of metrics could lead to potential harm. To avoid these pitfalls, it is important for practitioners to understand the limitations and tradeoffs of different metrics, including when applying methods such as ours. REPRODUCIBILITY STATEMENT We have submitted our code as part of the supplementary materials. The datasets we use are publicly available (with the exception of CMNIST∗ which is a modification of the standard MNIST dataset (LeCun et al., 2010); our code to generate this modified dataset is also included). In addition to the details provided in Section 5, further implementation, dataset, and experimental details can be found in Appendix E. For the theory, we include complete proofs of all claims in Appendix B. A ADDITIONAL BENCHMARK COMPARISONS AND ABLATIONS In this section, we include further experiments comparing CNC against additional related methods. We also include additional ablations to study the importance of CNC’s presented design choices. 
A.1 COMPARISON TO MINIMIZING THE ALIGNMENT LOSS DIRECTLY In Sec. 5.1 and Sec. 5.2, we empirically showed that CNC’s contrastive loss and hard positive and negative sampling lead to improved worst-group accuracy and greater representation alignment. We now study how CNC performs if instead of the contrastive loss, we train the Stage 2 model to minimize Lalign directly. With this objective, we aim to minimize the Euclidean distance between samples in different inferred groups but the same class. We keep all other components of CNC consistent, and apply Lalign to the anchor and positive samples in each contrastive batch. We report results on CMNIST∗, Waterbirds, and CelebA in Table A.1. We find that CNC with the default contrastive loss outperforms CNC with the alignment loss. We reason that an advantage of the contrastive loss (and specifically the “hard” positive and negative samples), is that it encourages aligning samples with the same class label but different spurious features, and pushes apart hard negative samples with different class labels but similar spurious features. This provides additional signal for improving separation between the different classes, so the robust model only learns to rely on ground-truth-specific features for discriminating between datapoints. On the other hand, the Lalignment objective does not incorporate these hard negatives. A.2 COMPARISON TO REPRESENTATION ALIGNMENT METHODS FOR DOMAIN GENERALIZATION AND ADAPTATION While our main results in Table 1 compare against methods designed to tackle the spurious correlations setting presented in Section 5.1, we now study how CNC fares against existing representation alignment methods proposed in the domain generalization (DG) and unsupervised domain adaptation (UDA) literature. At a high level, a popular idea in DG and UDA is to learn similar representations for datapoints with the same class but sampled from different domains, e.g. via adversarial training to prevent another model from classifying representations’ source domains correctly (Ganin et al., 2016), or minimizing representation differences via metrics such as maximum mean discrepancy (MMD) (Li et al., 2018). While DG and UDA carry distinct problem settings and assumptions from our spurious correlations setting (c.f. Appendix D.4), we aim to understand if existing representation alignment methods can train models robust to spurious correlations, and compare their performance with CNC. We first explain our protocol for evaluating these methods, and then discuss results. We carry out our evaluation with domain-adversarial neural networks (DANN) Ganin et al. (2016), a seminal UDA method that aims to learn aligned representations across two domains. To do so, DANN jointly trains a model to classify samples from a “source” domain while preventing a separate “domain classifier” module from correctly classifying the domain for datapoints sampled from both domains. For fair comparison, we use the same ResNet-50 backbone as in CNC, and make several adjustments to the typical DANN and UDA procedure: 1. While UDA assumes that the data is organized into “source” and “target” domains, we do not have domain labels. We thus infer domains using the predictions of an initial ERM model as in CNC. 2. The notion of a domain may also be ambiguous with respect to the groups defined in Section 2. For example, domains may be defined by spurious attributes (e.g., for the Waterbirds dataset, we may consider the “water background” domain and the “land background” domain). 
Domains may alternatively be defined by whether samples carry dominant spurious correlations or not (e.g., the “majority group” domain and the “minority group” domain). We train and evaluate separate DANN models for both interpretations. We infer the former by the predicted class of the initial ERM model. We infer the latter by whether the initial ERM model is correct or not. 3. Finally, UDA aims to train with a class-labeled “source” domain and an unlabeled “target” domain such that a model performs well on unseen samples from the specified “target” domain (Ganin et al., 2016). However, our benchmarks have class labels for all training points, and do not have a notion of “source” and “target” domains (we aim to obtain high worst-group accuracy, which could fall under any domain). We thus assume access to labels for all domains.
During training, the goal for our DANN models is to correctly classify samples from both domains, while learning representations such that a jointly trained domain classifier module cannot determine the samples’ domains from their representations alone. At test time, we evaluate the DANN model on the entire test set for each benchmark, and report the worst-group and average accuracies. In Table A.2, we report the worst-group and average accuracies of DANN on the Waterbirds and CelebA datasets across three seeds, along with the CNC results. Our results suggest that the domain alignment in DANN is not sufficient to improve worst-group accuracy. We hypothesize this is because adversarial training with the domain classifier aligns representations without regard to the different classes within each domain. Due to the propensity of samples exhibiting spurious correlations, DANN models may thus still learn to rely on these correlations.
A.3 IMPORTANCE OF ERM-GUIDED CONTRASTIVE SAMPLING
In this section, we conduct additional ablations on the sampling procedure in CNC. Although CNC relies on an initial trained ERM model’s predictions, can we still improve worst-group accuracy without this step and with supervised contrastive learning alone, i.e., by sampling positives uniformly at random from all datapoints with the same label as the anchor? In Table 1, we showed that this approach (denoted SupCon∗) led to a drop in worst-group accuracy. Taking this question further, recall that we use the Stage 1 ERM model’s predictions to sample “hard” negatives with different ground-truth classes but the same ERM predictions as their anchors, so that, to reduce the contrastive loss and learn dissimilar representations for anchors and negatives, the Stage 2 contrastive model must learn to ignore the spurious features that the initial ERM model depends on. How does CNC’s performance fare with alternative negative sampling procedures? Keeping the anchor and positive sampling consistent, we perform additional ablations where we either sample negatives only by having different classes from their anchors, or sample negatives only by having the same ERM model prediction as their anchors. We report these results in Table A.3 below. We find that the default CNC sampling procedure obtains the highest worst-group accuracy and the highest or near-highest average accuracy compared to alternative strategies across the CMNIST∗, Waterbirds, and CelebA datasets. The results suggest that inferring the spurious attributes (e.g., via an initial ERM model) is important for CNC, and that CNC benefits from using these predictions for sampling both negatives and positives; a minimal sketch of the negative pools compared here is shown below.
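To make the compared sampling rules concrete, the sketch below (our own illustration; the function name, arguments, and NumPy usage are not from the paper) spells out the candidate negative pools for a single anchor under the default rule and the two ablated variants. Positives are handled analogously: the default rule requires the same class as the anchor but a different ERM prediction, while SupCon∗ drops the prediction condition.

```python
import numpy as np

def negative_pool(y, y_hat, anchor, variant="default"):
    """Candidate negative indices for one anchor under the ablations of Table A.3.

    y      : ground-truth class labels (array of ints)
    y_hat  : Stage 1 ERM model predictions on the training set
    anchor : index of the anchor point
    """
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    diff_class = y != y[anchor]          # different ground-truth class than the anchor
    same_pred = y_hat == y_hat[anchor]   # same ERM prediction as the anchor

    if variant == "default":              # CNC: different class AND same ERM prediction
        mask = diff_class & same_pred
    elif variant == "different_class":    # ablation: drop the prediction constraint
        mask = diff_class.copy()
    elif variant == "same_prediction":    # ablation: drop the class constraint
        mask = same_pred.copy()
    else:
        raise ValueError(f"unknown variant: {variant}")

    mask[anchor] = False                  # never use the anchor as its own negative
    return np.where(mask)[0]
```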
We reason this is because without this sampling, we can actually encourage the Stage 2 model to rely on spurious correlations. For example, if we just ensure that the anchor and negative samples have different classes, then the contrastive model may just rely on the different spurious features of the anchors and negatives to learn dissimilar representations. However, by ensuring that the anchors and negatives have similar spurious features (via the same trained ERM model prediction), the contrastive model is forced to rely on non-spurious features to learn dissimilar representations for the samples. The same logic applies for learning similar representations for anchor and positive samples. We suspect that choosing negatives from all samples with the same ERM prediction as their anchors performs better than the other ablations as it alone does not encourage learning spurious correlations: the model is asked to “pull apart” samples with the same spurious features, and so must ignore spurious similarities to recognize something different between anchors and negatives. However, this ablation does not ensure that anchor-negative pairs consist of different classes (which our full method does), so the model gets less signal to separate samples by class.

Table A.3: Negative sampling ablations. Worst-group (WG) and average (Avg) accuracy (%); standard deviation in parentheses.

                                    CMNIST∗                  Waterbirds               CelebA
Method                          WG          Avg          WG          Avg          WG          Avg
Negatives by different class    66.4 (5.1)  86.0 (1.6)   82.2 (0.8)  88.9 (0.3)   79.2 (0.3)  88.0 (0.1)
Negatives by same prediction    70.0 (5.1)  87.1 (1.1)   85.7 (1.3)  90.3 (0.2)   81.1 (1.4)  88.5 (0.3)
SupCon∗                          0.0 (0.0)  22.4 (1.2)   71.0 (1.9)  85.9 (0.8)   62.2 (1.1)  90.0 (0.1)
CNC (default)                   77.4 (3.0)  90.9 (0.6)   89.7 (0.2)  90.8 (0.1)   88.8 (0.9)  89.9 (0.5)

A.4 ADDITIONAL DESIGN CHOICE ABLATIONS
We first summarize CNC’s design choices and differences from standard supervised contrastive learning in Appendix A.4.1. We then empirically validate each component in Appendix A.4.2.
A.4.1 SUMMARY OF CNC DESIGN CHOICES AND PROPERTIES
No projection network. As we wish to learn data representations that maximize the alignment between anchor and positive datapoints, we do not compute the contrastive loss with the outputs of an additional nonlinear projection network. This is inspired by the logic justifying a projection head in prior contrastive learning, e.g. SimCLR (Chen et al., 2020), where the head is included because the contrastive loss trains representations to be “invariant to data transformation” and may encourage removing information “such as the color or orientation of objects”. In our case, we view inferred datapoints with the same class but different spurious attributes as “transformations” of each other, and we hypothesize that removing these differences can help us improve worst-group performance.
Two-sided contrastive sampling. To incorporate additional comparisons between datapoints that only differ in spurious attribute during training, we employ “two-sided” contrastive batch sampling. This lets us equally incorporate instances where the second contrastive model in CNC treats datapoints that the initial ERM model got incorrect and correct as anchors.
Additional intrinsic hard positive/negative mining. Because the new model corrects for potentially learned spurious correlations by only comparing and contrasting datapoints that differ in class label or spurious attribute, but not both (as dictated by the initial ERM model’s outputs), the contrastive batches naturally carry “hard” positives and negatives.
Thus, our approach provides a natural form of hard negative mining (in addition to the intrinsic hard positive / negative mining at the gradient level with InfoNCE-style contrastive losses (Chen et al., 2020; Khosla et al., 2020)) while avoiding class collisions, two nontrivial challenges in standard self-supervised contrastive learning (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). Joint training of encoder and classifier layers. CNC can train any standard classification model architecture; for any given neural network we just apply different optimization objectives to the encoder and classifier layers. We train both the encoder and classifier layers with a cross-entropy loss, and jointly train the encoder layer with a supervised contrastive loss. For the encoder layers, we balance the two objectives with a hyperparameter λ (c.f. Eq. 7). A.4.2 EMPIRICAL VALIDATION OF CNC COMPONENTS To validate the additional algorithmic components of CNC, we report how CNC performs on the Waterbirds dataset when modifying the individual design components. We use the same hyperpa- rameters as in the main results, and report accuracies as the average over three training runs for the following ablations. Table A.4 summarizes that across these design ablations, default CNC as presented consistently outperforms these alternative implementations. No projection head. We incorporate a nonlinear projection head as is typical in prior contrastive learning works (Chen et al., 2020), that maps the encoder output to lower-dimensional representations (from 2048 to 128 in our case). We then update the encoder layers and the projection head jointly by computing the contrastive loss on the projection head’s output, still passing the encoder layer’s direct outputs to the classifier to compute the cross-entropy loss. We note that using the projection head decreases worst-group accuracy substantially. We reason that as previously discussed, while using the projection head in prior work can allow the model to retain more information in its actual hidden layers (Chen et al., 2020), in our case to remove dependencies on spurious attributes we actually want to encourage learning invariant representations when we model the differences between anchor and positive datapoints as due to spurious attributes. Two-sided contrastive batches. Instead of “two-sided” contrasting where we allow both sampled anchors and positives to take on the anchor role, for each batch we only compute contrastive updates by comparing original positives and negatives with the original anchor. When keeping everything else the same, we find that just doing these one-sided comparisons also leads to a drop in performance for worst-group accuracy. This suggests that the increased number of comparisons and training setup where we swap the roles of anchors and positives of the two-sided batches introduces greater contrastive learning signal. Additional intrinsic hard positive/negative mining. We discuss this ablation in Section A.3. Joint training of encoder and classifier layers. Instead of training the full model jointly, we first only train the encoder layers with the contrastive loss in CNC, before freezing these layers and finetuning the classifier layers with the cross-entropy loss. With this implementation, we also obtain noticeable drop in performance. 
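To make the joint update concrete, here is a minimal single-anchor sketch of the two-objective step described above (cf. Eq. (7) in the main text). It is our own simplification rather than the released implementation: the function and argument names, the default λ and τ values, and the optimizer handling are illustrative, and it shows only the one-sided loss (the two-sided variant of Appendix C.2 doubles the comparisons by swapping anchor and positive roles).

```python
import torch
import torch.nn.functional as F

def joint_training_step(encoder, classifier, optimizer, batch, lam=0.75, tau=0.1):
    """One Stage 2 update: supervised contrastive loss on the encoder,
    cross-entropy on the full model, balanced by lam as in Eq. (7)."""
    x_anchor, x_pos, x_neg, y_all = batch   # anchor (batch of 1), M positives, N negatives, labels

    # Encoder outputs; normalized copies are used only for the contrastive term.
    feats = encoder(torch.cat([x_anchor, x_pos, x_neg]))
    z = F.normalize(feats, dim=1)
    z_a, z_p, z_n = z[:1], z[1:1 + len(x_pos)], z[1 + len(x_pos):]

    # Supervised contrastive loss: anchor vs. positives and negatives.
    sim_pos = torch.exp(z_a @ z_p.T / tau)            # shape (1, M)
    sim_neg = torch.exp(z_a @ z_n.T / tau)            # shape (1, N)
    supcon = -torch.log(sim_pos / (sim_pos.sum() + sim_neg.sum())).mean()

    # Cross-entropy on the unnormalized encoder outputs, for all samples in the batch.
    logits = classifier(feats)
    ce = F.cross_entropy(logits, y_all)

    loss = lam * supcon + (1 - lam) * ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that the cross-entropy gradient flows through both the classifier and the encoder, matching the joint training described above, while the contrastive term only involves the encoder representations.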
While we leave further analysis of the joint cross-entropy and contrastive optimization for future work, one conjecture is that the cross-entropy loss may aid in learning separable representations while also training the full model to keep the average error small. From our theory, the contrastive loss can help bound the gap between worst-group and average error, so we try to minimize average error in the same parameter update. This also follows prior work, where updating the entire model and fine-tuning all model parameters instead of freezing the encoder layers leads to higher accuracy (Chen et al., 2020). However, we found that with an initial encoder-only training stage, if we did not freeze the trained layers, fine-tuning on a dataset with spurious correlations would “revert” the contrastive training, resulting in a large gap between worst-group and average error similar to ERM.
We also ablate the balancing hyperparameter λ of CNC on CMNIST∗. In Table A.5 we find that CNC consistently achieves high worst-group accuracy across a wide range of λ ∈ [0.4, 0.9]. For reference, the next best methods GEORGE and JTT obtain 76.4% and 74.5% worst-group accuracy.

B OMITTED PROOFS FROM SECTION 3.2

In this section, we prove that within any class, the gap between the worst-group error and the average error can be upper bounded by the alignment loss times the Lipschitz constant, plus another concentration error term.

Proof of Theorem 3.1. Consider two arbitrary groups, denoted by g1 = (y, a1) and g2 = (y, a2), whose class labels are both y ∈ Y and whose spurious attributes are a1 ∈ A and a2 ∈ A with a1 ≠ a2. Let G1 and G2 be the subsets of training data that belong to groups g1 and g2, respectively. We note that both G1 and G2 are non-empty, since we have assumed (in Section 2) that there is at least one sample from each group in the training data set. Let \(n_{g_1} = |G_1|\) and \(n_{g_2} = |G_2|\) be the sizes of these two groups. Recall that fenc denotes the mapping of the encoder layers of the full neural network model fθ. Since the classification layer fcls is linear, we use W to denote its weight matrix. Our definition of the cross-group alignment loss in equation (5), denoted \(\hat{L}_{\mathrm{align}}(f_\theta; y)\), implies that for g1 and g2,

\[
\frac{1}{n_{g_1}}\frac{1}{n_{g_2}} \sum_{(x,y,a_1)\in G_1} \sum_{(x',y,a_2)\in G_2} \big\| f_{\mathrm{enc}}(x) - f_{\mathrm{enc}}(x') \big\|_2 \;\le\; \hat{L}_{\mathrm{align}}(f_\theta; y). \tag{8}
\]

Next, let \(\mathbb{E}_{(x,y,a_1)\sim P_{g_1}}[\ell(W f_{\mathrm{enc}}(x), y)]\) be the average loss conditioned on a data point being sampled from group g1 (and similarly for group g2). Let ∆(g1, g2) be the difference between these population average losses:

\[
\Delta(g_1, g_2) = \Big| \mathbb{E}_{(x,y,a_1)\sim P_{g_1}}\big[\ell(W f_{\mathrm{enc}}(x), y)\big] - \mathbb{E}_{(x,y,a_2)\sim P_{g_2}}\big[\ell(W f_{\mathrm{enc}}(x), y)\big] \Big|.
\]

Recall that Gy ⊆ G is the set of groups that have class label y. Since the loss ℓ(·) is bounded above by some fixed constant C2 by assumption, and is at least zero, by Hoeffding's inequality the following holds with probability at least 1 − δ, simultaneously for all |Gy| groups g ∈ Gy:

\[
\Big| \mathbb{E}_{(x,y,a)\sim P_g}\big[\ell(W f_{\mathrm{enc}}(x), y)\big] - \frac{1}{n_g} \sum_{(x,y,a)\in G_g} \ell(W f_{\mathrm{enc}}(x), y) \Big| \;\le\; C_2 \sqrt{\frac{2 \log(|G_y|/\delta)}{n_g}}, \tag{9}
\]

where \(G_g\) denotes the training samples in group g. Thus, with probability at least 1 − δ, the following holds for any g1 and g2 in class y (but having different spurious attributes):

\[
\Delta(g_1, g_2) \;\le\; \Big| \frac{1}{n_{g_1}} \sum_{(x,y,a_1)\in G_1} \ell(W f_{\mathrm{enc}}(x), y) - \frac{1}{n_{g_2}} \sum_{(x',y,a_2)\in G_2} \ell(W f_{\mathrm{enc}}(x'), y) \Big| + C_2 \left( \sqrt{\frac{2\log(|G_y|/\delta)}{n_{g_1}}} + \sqrt{\frac{2\log(|G_y|/\delta)}{n_{g_2}}} \right). \tag{10}
\]

Next, we focus on the right-hand side of equation (10).
First, the leading term on the right-hand side of equation (10) can be rewritten as

\[
\Big| \frac{1}{n_{g_1} n_{g_2}} \sum_{(x,y,a_1)\in G_1} \sum_{(x',y,a_2)\in G_2} \ell(W f_{\mathrm{enc}}(x), y) - \frac{1}{n_{g_1} n_{g_2}} \sum_{(x,y,a_1)\in G_1} \sum_{(x',y,a_2)\in G_2} \ell(W f_{\mathrm{enc}}(x'), y) \Big|.
\]

Since we have also assumed that the loss function ℓ(x, y) is C1-Lipschitz in x (footnote 2), the above is at most

\[
\frac{1}{n_{g_1} n_{g_2}} \sum_{(x,y,a_1)\in G_1} \sum_{(x',y,a_2)\in G_2} \big| \ell(W f_{\mathrm{enc}}(x), y) - \ell(W f_{\mathrm{enc}}(x'), y) \big|
\;\le\; \frac{1}{n_{g_1} n_{g_2}} \sum_{(x,y,a_1)\in G_1} \sum_{(x',y,a_2)\in G_2} C_1 \cdot \big\| W f_{\mathrm{enc}}(x) - W f_{\mathrm{enc}}(x') \big\|_2 \quad \text{(since } y \text{ is the same for } x, x')
\]
\[
\le\; \frac{B}{n_{g_1} n_{g_2}} \sum_{(x,y,a_1)\in G_1} \sum_{(x',y,a_2)\in G_2} C_1 \cdot \big\| f_{\mathrm{enc}}(x) - f_{\mathrm{enc}}(x') \big\|_2 \quad \text{(because } \|W\|_2 \le B \text{ as assumed)}
\;\le\; B \cdot C_1 \cdot \hat{L}_{\mathrm{align}}(f_\theta; y) \quad \text{(by equation (8))}.
\]

Footnote 2: In other words, we assume that |ℓ(z, y) − ℓ(z′, y)| ≤ C1 · ‖z − z′‖2 for any z, z′ and y.

Thus, we have shown that for any g1 and g2 within class y,

\[
\Delta(g_1, g_2) \;\le\; B \cdot C_1 \cdot \hat{L}_{\mathrm{align}}(f_\theta; y) + C_2 \left( \sqrt{\frac{2\log(|G_y|/\delta)}{n_{g_1}}} + \sqrt{\frac{2\log(|G_y|/\delta)}{n_{g_2}}} \right) \;\le\; B \cdot C_1 \cdot \hat{L}_{\mathrm{align}}(f_\theta; y) + \max_{g\in G_y} C_2 \cdot \sqrt{\frac{8\log(|G_y|/\delta)}{n_g}}. \tag{11}
\]

Finally, we use the above result to bound the gap between the worst-group loss and the average loss. For every group g ∈ G, let \(p_g\) denote the prior probability of observing a sample from P in this group, and let \(q_y = \sum_{g'\in G_y} p_{g'}\). Let h(g) be shorthand for

\[
h(g) = \mathbb{E}_{(x,y,a)\sim P_g}\big[\ell(W f_{\mathrm{enc}}(x), y)\big].
\]

The average loss among the groups with class label y is \(L_{\mathrm{avg}}(f_\theta; y) = \sum_{g\in G_y} \frac{p_g}{q_y} h(g)\). The worst-group loss among the groups with class label y is \(L_{\mathrm{wg}}(f_\theta; y) = \max_{g\in G_y} h(g)\). Let g⋆ be a group that incurs the highest loss among groups in Gy. Then \(L_{\mathrm{wg}}(f_\theta; y) - L_{\mathrm{avg}}(f_\theta; y)\) is equal to

\[
h(g^\star) - \sum_{g\in G_y} \frac{p_g}{q_y} h(g) \;=\; \sum_{g\in G_y} \frac{p_g}{q_y}\big(h(g^\star) - h(g)\big) \tag{12}
\]
\[
\le\; \sum_{g\in G_y} \frac{p_g}{q_y} \Delta(g^\star, g) \tag{13}
\]
\[
\le\; B \cdot C_1 \cdot \hat{L}_{\mathrm{align}}(f_\theta; y) + \max_{g\in G_y} C_2 \cdot \sqrt{\frac{8\log(|G_y|/\delta)}{n_g}}. \tag{14}
\]

The last step uses equation (11) on ∆(g⋆, g) and the fact that \(q_y = \sum_{g'\in G_y} p_{g'}\). Thus, the gap between the worst-group loss and the average loss among the groups with the same class label is bounded as claimed. The proof is now complete.

The astute reader will note that Theorem 3.1 focuses on comparing groups within the same class y, for any y ∈ Y. A natural follow-up question is what happens when comparing across groups with different labels. Let \(L_{\mathrm{wg}}(f_\theta) = \max_{y\in\mathcal{Y}} L_{\mathrm{wg}}(f_\theta; y)\) be the worst-group loss across all labels. Recall that Lavg(fθ) is the average loss over the entire population. We generalize Theorem 3.1 to this setting in the following result.

Corollary B.1 (Extension of Theorem 3.1 to compare across different classes). In the setting of Theorem 3.1, let \(q_y = \sum_{g\in G_y} p_g\) be the prior probability of observing a sample drawn from P with label y, for any y ∈ Y. Then with probability at least 1 − δ, the following holds:

\[
L_{\mathrm{wg}}(f_\theta) \;\le\; \Big( \min_{y\in\mathcal{Y}} q_y \Big)^{-1} L_{\mathrm{avg}}(f_\theta) + B \cdot C_1 \cdot \max_{y\in\mathcal{Y}} \hat{L}_{\mathrm{align}}(f_\theta; y) + \max_{g\in G} C_2 \cdot \sqrt{\frac{8\log(|G|/\delta)}{n_g}}. \tag{15}
\]

Proof. We generalize the argument in the previous result to compare across different labels. The worst-group loss across different labels satisfies

\[
\max_{y\in\mathcal{Y}} \max_{g\in G_y} h(g) \;\le\; \max_{y\in\mathcal{Y}} \left( \sum_{g\in G_y} \frac{p_g}{q_y} h(g) + B \cdot C_1 \cdot \hat{L}_{\mathrm{align}}(f_\theta; y) + \max_{g\in G_y} C_2 \sqrt{\frac{8\log(|G_y|/\delta)}{n_g}} \right) \quad \text{(by equation (14))}
\]
\[
\le\; \frac{1}{\min_{y\in\mathcal{Y}} q_y} \sum_{g\in G} p_g h(g) + B \cdot C_1 \cdot \max_{y\in\mathcal{Y}} \hat{L}_{\mathrm{align}}(f_\theta; y) + \max_{g\in G} C_2 \sqrt{\frac{8\log(|G|/\delta)}{n_g}}.
\]

Since \(\sum_{g\in G} p_g h(g) = L_{\mathrm{avg}}(f_\theta)\), we conclude that

\[
L_{\mathrm{wg}}(f_\theta) \;\le\; \Big( \min_{y\in\mathcal{Y}} q_y \Big)^{-1} L_{\mathrm{avg}}(f_\theta) + B \cdot C_1 \max_{y\in\mathcal{Y}} \hat{L}_{\mathrm{align}}(f_\theta; y) + \max_{g\in G} C_2 \sqrt{\frac{8\log(|G|/\delta)}{n_g}}.
\]

The proof is now complete.

An example showing that Corollary B.1 is tight. We describe a simple example in which the factor \((\min_{y\in\mathcal{Y}} q_y)^{-1}\) in equation (15) is tight (asymptotically). Suppose there are k perfectly balanced classes, so that \(q_y = 1/k\) for every y ∈ Y.
There is one data point from each class, with loss equal to 0 for all of them except one, which has loss 1. The worst-group loss is therefore 1, whereas the average loss is 1/k, so there is a factor of k between the worst-group loss and the average loss. For equation (15), the factor \((\min_{y\in\mathcal{Y}} q_y)^{-1} = k\), since \(q_y = 1/k\) for every y ∈ Y in this example. Thus, this factor matches the (multiplicative) factor between the worst-group loss and the average loss in this example.

C CONTRASTIVE ALGORITHM DESIGN DETAILS

In this section, we provide further details on the training setup and contrastive batch sampling, pseudocode, and additional properties related to CNC’s implementation.

C.1 TRAINING SETUP

In Fig. 8, we illustrate the two training stages of Correct-N-Contrast described in Sec. 4. In Stage 1, we first train an ERM model with a cross-entropy loss. For consistency with Stage 2, we depict the output as a composition of the encoder and linear classifier layers. Then in Stage 2, we train a new model with the same architecture using contrastive batches sampled with the Stage 1 ERM model and a supervised contrastive loss (3) (which we compute after the depicted representations are first normalized) to update the encoder layers. Note that unlike prior work in contrastive learning (Chen et al., 2020; Khosla et al., 2020), as we have the class labels of the anchors, positives, and negatives, we also continue forward-passing the unnormalized representations (encoder layer outputs) and compute a cross-entropy loss to update the classifier layers while jointly training the encoder.

We also note that unlike prior work, we wish to learn invariances between anchors and positives that maximally reduce the presence of features not needed for classification. We thus do not pass the representations through an additional projection network (Chen et al., 2020). Instead, we use Eq. 3 to compute the supervised contrastive loss directly on the encoder outputs z = fenc(x). In Appendix A.4.2, we studied ablations with both design choices.

C.2 TWO-SIDED CONTRASTIVE BATCH IMPLEMENTATION

We provide more details on our default contrastive batch sampling approach described in Sec. 4. To recall, for additional contrastive signal per batch, we can double the pairwise comparisons in a training batch by switching the anchor and positive roles. This is similar to the NT-Xent loss in prior contrastive learning work (Chen et al., 2020). We switch the roles of the anchor and the first positive sampled in a contrastive batch, and sample additional positives and negatives using the same guidelines but adjusting for the “new” anchor. We denote this as “two-sided” sampling, in contrast with the “one-sided” comparisons we get with just the original anchor, positives, and negatives.

Implementing this sampling procedure in practice is simple. First, recall our initial setup with the trained ERM model fθ̂, its predictions {ŷ_i}_{i=1}^n on the training data {(x_i, y_i)}_{i=1}^n (where ŷ_i = fθ̂(x_i)), and the numbers of positives and negatives to sample, M and N. We then sample batches with Algorithm 2; a Python-style sketch of this construction is given below. Because the initial anchors are datapoints that the ERM model gets correct, under our heuristic we infer {x_i}_{i=1}^M as samples from the majority group. Similarly, the M positives {x⁺_m}_{m=1}^M and N negatives {x⁻_n}_{n=1}^N that it gets incorrect are inferred to belong to minority groups.
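For concreteness, the following is a minimal index-level rendering of the two-sided batch construction that Algorithm 2 describes. It is our own sketch, not the released implementation: the function name and arguments are hypothetical, each candidate pool is assumed to be large enough to sample from, and we additionally restrict the extra anchors to the original anchor's class so that they remain valid positives when the roles are swapped (Algorithm 2 itself only states that these points are ERM-correct).

```python
import numpy as np

def sample_two_sided_batch(y, y_hat, M, N, rng=None):
    """Sample one two-sided contrastive batch from Stage 1 ERM predictions.

    y     : ground-truth labels, shape (n,)
    y_hat : Stage 1 ERM predictions, shape (n,)
    M, N  : number of positives and negatives per side
    Returns index arrays (anchors, positives, negatives, negatives_swap).
    """
    rng = rng or np.random.default_rng()
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    correct = (y_hat == y)

    # Original anchor: an ERM-correct point; extra anchors share its class.
    i = rng.choice(np.where(correct)[0])
    anchor_pool = np.where(correct & (y == y[i]))[0]
    anchors = np.concatenate(([i], rng.choice(anchor_pool[anchor_pool != i], M - 1, replace=False)))

    # Positives: same class as the anchor, different ERM prediction (ERM-incorrect).
    positives = rng.choice(np.where((y == y[i]) & (y_hat != y_hat[i]))[0], M, replace=False)
    # Negatives: different class, same ERM prediction as the anchor.
    negatives = rng.choice(np.where((y != y[i]) & (y_hat == y_hat[i]))[0], N, replace=False)

    # Second side: the first positive becomes the anchor; its negatives share
    # its ERM prediction but have a different class.
    j = positives[0]
    negatives_swap = rng.choice(np.where((y != y[j]) & (y_hat == y_hat[j]))[0], N, replace=False)
    return anchors, positives, negatives, negatives_swap
```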
For one such batch, we then compute the full contrastive loss as

\[
\hat{L}_{\mathrm{supcon}}(f_{\mathrm{enc}}) = \hat{L}_{\mathrm{supcon}}\big( x_1, \{x^+_m\}_{m=1}^M, \{x^-_n\}_{n=1}^N; f_{\mathrm{enc}} \big) + \hat{L}_{\mathrm{supcon}}\big( x^+_1, \{x_i\}_{i=1}^M, \{x'^-_n\}_{n=1}^N; f_{\mathrm{enc}} \big), \tag{16}
\]

where \(\hat{L}_{\mathrm{supcon}}\big( x_1, \{x^+_m\}_{m=1}^M, \{x^-_n\}_{n=1}^N; f_{\mathrm{enc}} \big)\) is given by

\[
- \frac{1}{M} \sum_{m=1}^{M} \log \frac{\exp(z_1^\top z^+_m / \tau)}{\sum_{m'=1}^{M} \exp(z_1^\top z^+_{m'} / \tau) + \sum_{n=1}^{N} \exp(z_1^\top z^-_n / \tau)}, \tag{17}
\]

and again z denotes the normalized output fenc(x) for the corresponding x. We compute the cross-entropy component of the full loss for each x in the two-sided batch with its corresponding label y.

Algorithm 2 Sampling two-sided contrastive batches
Require: Number of positives M and number of negatives N to sample for each batch.
1: Initialize the set of contrastive batches B = {}
2: for each ERM-correct point x_i ∈ {x ∈ X : ŷ = y} do
3:   Sample M − 1 additional “anchors” to obtain {x_i}_{i=1}^M from {x ∈ X : ŷ = y}
4:   Sample M positives {x⁺_m}_{m=1}^M from {x⁺ ∈ X : ŷ⁺ ≠ ŷ_i, y⁺ = y_i}
5:   Sample N negatives {x⁻_n}_{n=1}^N from {x⁻ ∈ X : ŷ⁻ = ŷ_i, y⁻ ≠ y_i}
6:   Sample N negatives {x′⁻_n}_{n=1}^N from {x′⁻ ∈ X : ŷ′⁻ = ŷ⁺_1, y′⁻ ≠ y⁺_1}
7:   Update the contrastive batch set: B ← B ∪ ({x_i}_{i=1}^M, {x⁺_m}_{m=1}^M, {x⁻_n}_{n=1}^N, {x′⁻_n}_{n=1}^N)

D FURTHER RELATED WORK DISCUSSION

We provide additional discussion of related work and connections to our work below.

D.1 IMPROVING ROBUSTNESS TO SPURIOUS CORRELATIONS

Our core objective is to improve model robustness to group or subpopulation distribution shifts that arise from the presence of spurious correlations, specifically for classification tasks. Because these learnable correlations hold for some but not all samples in a dataset, standard training with ERM may result in highly variable performance: a model that classifies datapoints based on spurious correlations does well for some subsets or “groups” of the data but not others. To improve model robustness and avoid learning spurious correlations, prior work introduces the goal to maximize worst-group accuracy (Sagawa et al., 2019). Related works broadly fall under two categories:

Improving robustness with group information. If information such as spurious attribute labels is provided, one can divide the data into explicit groups as defined in Sec. 2, and then train to directly minimize the worst group-level error among these groups. This is done in group DRO (GDRO) (Sagawa et al., 2019), where the authors propose an online training algorithm that focuses training updates over datapoints from higher-loss groups. Goel et al. (2020) also adopt this approach with their method CycleGAN Augmented Model Patching (CAMEL). However, similar to our motivation, they argue that a stronger modeling goal should be placed on preventing a model from learning group-specific features. Their approach involves first training a CycleGAN (Zhu et al., 2017) to learn the data transformations from datapoints in one group to another that share the same class label. They then apply these transformations as data augmentations to different samples, intuitively generating new versions of the original samples that take on group-specific features. Finally, they train a new model with a consistency regularization objective to learn invariant features between transformed samples and their sources. Unlike their consistency loss, we accomplish a similar objective to learn group-invariant features with contrastive learning. Our first training stage is also less expensive.
Instead of training a CycleGAN and then using it to augment datapoints, we train a relatively simple standard ERM classification model, sometimes with only a few number of epochs, and use its predictions to identify pairs of datapoints to serve a similar purpose. Finally, unlike both CAMEL and GDRO, we do not require spurious attribute or group labels for each training datapoints. We can then apply CNC in less restrictive settings where such information is not known. Related to GDRO are methods that aim to optimize a "Pareto-fair" objective, more general than simply the worst-case group performance. Notable examples are the works of Balashankar et al. (2019) and Martinez et al. (2020). However, these approaches similarly do not directly optimize for good representation alignment (unlike our work). Improving robustness without training group information. More similar to our approach are methods that do not assume group information at training time, and only require validation set spurious attribute labels for fine-tuning. As validation sets are typically much smaller in size than training sets, an advantage of CNC and comparable methods is that we can improve the accessibility of robust training methods to a wider set of problems. One popular line of work is distributionally robust optimization (DRO), which trains models to minimize the worst loss within a ball centered around the observed distribution (Ben-Tal et al., 2013; Wiesemann et al., 2014; Duchi & Namkoong, 2019; Levy et al., 2020; Curi et al., 2020; Oren et al., 2019). This includes the CVaR DRO (Levy et al., 2020) method we evaluate against. However, prior work has shown that these approaches may be too pessimistic, optimizing not just for worst-group accuracy but worst possible accuracy within the distribution balls (Sagawa et al., 2019), or too undirected, optimizing for too many subpopulations, e.g. by first upweighting minority points but then upweighting majority points in later stages of training (Liu et al., 2021). Pezeshki et al. (2020) instead suggest that gradient starvation (GS), where neural networks only learn to capture statistically dominant features in the data (Combes et al., 2018), is the main culprit behind learning spurious correlations, and introduce a “spectral decoupling” regularizer to alleviate GS. However this does not directly prevent models from learning dependencies on spurious attributes. Similar to CAMEL, Taghanaki et al. (2021) propose Contrastive Input Morphing (CIM), an image dataset-specific method that aims to learn input feature transformations that remove the effects of spurious or task-irrelevant attributes. They do so without group labels, training a transformation network with a triplet loss to transform input images such that a given transformed image’s structural similarity metric (based on luminance, contrast, and structure (Wang et al., 2003)) is more similar to a “positive” image from the same class than a “negative” image from a different class. They then train a classifier on top of these representations. Instead of pixel-level similarity metrics, CNC enforces similarity in a neural network’s hidden-layer representations, allowing CNC to apply to non-image modalities. Additionally, we sample positives and negatives not just based on class label, but also the learned spurious correlations of an ERM model (via its trained predictions). 
We hypothesize that our sampling scheme, which intuitively provides "harder" positive and negative examples, allows CNC to more strongly overcome spurious correlations. Most similar to our approach are methods that first train an initial ERM model with the class labels as a way to identify data points belonging to minority groups, and subsequently train an additional model with greater emphasis on the estimated minority groups. Sohoni et al. (2020) demonstrate that even when only trained on the class labels, neural networks learn feature representations that can be clustered into groups of data exhibiting different spurious attributes. They use the resulting cluster labels as estimated group labels before running GDRO on these estimated groups. Meanwhile, Nam et al. (2020) train a pair of models, where one model minimizes a generalized cross-entropy loss (Zhang & Sabuncu, 2018), such that the datapoints this model classifies incorrectly largely correspond to those in the minority group. They then train the other model on the same data but upweight the minority-group-estimated points. While they interweave training of the biased and robust model, Liu et al. (2021) instead train one model first with a shortened training time (but the standard cross-entropy objective), and show that then upsampling the incorrect data points and training another model with ERM can yield higher worst-group accuracy. Creager et al. (2021) first train an ERM model, and then softly assign the training data into groups under which the initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective. In particular, the IRM objective is maximally satisfied if a model’s optimal classifier is the same across groups (Arjovsky et al., 2019), and EIIL groups are inferred such that the initial ERM model’s representations exhibit maximum variance within each group. Finally, Nagarajan et al. (2020) provides a theoretical understanding of how ERM picks up spurious features under data set imbalance. They consider a setting involve a single spurious feature that is correlated with the class label and analyze the max-margin classifier in the presence of this spurious feature. In our work, we demonstrate that the ERM model’s predictions can be leveraged to not only estimate groups and train a new model with supervised learning but with different weightings. Instead, we can specifically identify pairs of points that a contrastive model can then learn invariant features between. Our core contribution comes from rethinking the objective with a contrastive loss that more directly reduces the model’s ability to learning spurious correlations. D.2 CONTRASTIVE LEARNING Our method also uses contrastive learning, a simple yet powerful framework for both self-supervised (Chen et al., 2020; Oord et al., 2018; Tian et al., 2019; Song & Ermon, 2020; Sermanet et al., 2018; Hassani & Khasahmadi, 2020; Robinson et al., 2021) and supervised (Khosla et al., 2020; Gunel et al., 2021) representation learning. The core idea is to learn data representations that maximize the similarity between a given input “anchor” and distinct different views of the same input (“positives”). Frequently this also involves contrasting positives with “negative” data samples without any assumed relation to the anchor (Bachman et al., 2019). Core components then include some way to source multiple views, e.g. 
with data transformations (Chen et al., 2020), and training objectives similar to noise contrastive estimation (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013). An important component of contrastive learning is the method by which appropriate positives and negatives are gathered. For sampling positives, Chen et al. (2020) show that certain data augmentations (e.g. crops and cutouts) may be more beneficial than others (e.g. Gaussian noise and Sobel filtering) when generating anchors and positives for unsupervised contrastive learning. von Kügelgen et al. (2021) theoretically study how data augmentations help contrastive models learn core content attributes which are invariant to different observed “style changes”. They propose a latent variable model for self-supervised learning. Tian et al. (2020) further study what makes good views for contrastive learning. They propose an “InfoMin principle”, where anchors and positives should share the least information necessary for the contrastive model to do well on the downstream task. For sampling negatives, Robinson et al. (2021) show that contrastive learning also benefits from using “hard” negatives, which (1) are actually a different class from the anchor (which they approximate in the unsupervised setting) and (2) embed closest to the anchor under the encoder’s current data representation. Both of these approaches capture the principle that if positives are always too similar to the anchor and negatives are always too different, then contrastive learning may be inefficient at learning generalizable representations of the underlying classes. In our work, we incorporate this principle by sampling data points with the same class label but different ERM predictions–presumably because of spurious attribute differences–as anchor and positive views, while sampling negatives from data points with different class labels but the same ERM prediction as the anchor. The anchors and positives are different enough that a trained ERM model predicted them differently, while the anchors and negatives are similar enough that the trained ERM model predicted them the same. Contrasting the above then allows us to exploit both “hard” positive and negative criteria for our downstream classification task. In Appendix A.3, we show that removing this ERM-guided sampling (i.e. only sampling positives and negatives based on class information), as well as trying different negative sampling procedures, leads to substantially lower worst-group accuracy with CNC. One limitation of our current theoretical analysis regarding the alignment loss (cf. Section 3.2) is that we require knowing the group labels to compute the RHS of equation (6) (in particular, the alignment loss). An interesting question for future work is to provide a better theoretical understanding of the alignment induced by CNC in the context of spurious correlations. D.3 LEARNING INVARIANT REPRESENTATIONS Our work is also similar in motivation to Invariant Risk Minimization (IRM) (Arjovsky et al., 2019), Predictive Group Invariance (PGI) (Ahmed et al., 2021), and other related works in domain-invariant learning (Krueger et al., 2020; Parascandolo et al., 2020; Ahuja et al., 2020; Creager et al., 2021). These methods aim to train models that learn a single invariant representation that is consistently optimal (e.g. with respect to classifying data) across different domains or environments. 
These environments can be thought of as data groups, and while traditionally methods such as IRM require that environment labels are known, recent approaches such as Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021) and Predictive Group Invariance (PGI) (Ahmed et al., 2021) similarly aim to infer environments with an initial ERM model. In EIIL, they next train a more robust model with an invariant learning objective, similarly selecting models based on the worst-group error on the validation set. However, they train this model using IRM or Group DRO with the inferred environments as group labels
1. What is the main contribution of the paper regarding spurious correlations in model training? 2. What are the strengths and weaknesses of the proposed method compared to existing works? 3. How does the reviewer assess the significance of the results, particularly in terms of standard deviations and comparisons with other methods? 4. What are some minor questions or suggestions regarding notation and explanation in the paper?
Summary Of The Paper Review
Summary Of The Paper
The paper studies training of models that are robust to spurious correlations without any supervision on example groupings that reveal the spurious correlation. The authors observe that representations from an ERM model trained on such data have strong correlation with the spurious attribute. Their algorithm (CNC) aims to suppress spurious-attribute information in the representations and is shown to be effective on standard sub-population shift datasets. Also, CNC is shown to be more robust to noise in spurious attribute recovery than existing methods.
Review
Strength
The paper is mostly well written and easy to follow. Spurious correlation avoidance without assuming any prior knowledge of spurious features is an important problem with wide impact.
Weakness
Standard deviations are important when reporting the worst-group accuracy metric. For example, the smallest group in CelebA (with supervised group labels) has only around 100 examples. I expect to see std. dev. from at least three runs for all the experiments. It looks like all your numbers are from a single run; given the typical std. dev. on these datasets and metric, I cannot judge the significance of your results.
More comparisons are needed. There are some existing contrastive-regularization-based methods [1, 2], perhaps several more. The authors should compare and argue the merits of theirs over others, both intuitively or analytically and empirically. [1] proposed to learn representations that minimize the divergence between predictions on examples of the same label class but different partition (group). In that regard, I find this work very similar to [1] with some differences as below: [1] regularizes the differences in aggregate prediction probabilities while this paper minimizes per-sample differences; [1] looks at prediction probability differences and a KL measure while this work looks at representation differences and Euclidean distance; [1] uses [3] for partitioning the dataset on the spurious attribute while this work only uses the ERM-trained base model. The differences to me look only superficial.
An ablation study on the importance of first-stage ERM training is needed. The authors state that the prediction accuracy of the spurious attribute in the case of CelebA is only around 59%. I wonder what the significance of the ERM base model is at all, and what would happen if we instead simply regularize the distances for any pair of points of the same true class (this would be similar to [2]).
Sensitivity to Stage 1 prediction (Sec 5.3): I do not understand why in Fig. 7 (d) we see the average accuracy also decreasing with p. I expect the average accuracy to remain stable or increase as the worst-group accuracy deteriorates. Could both average and worst-group accuracy decreasing indicate optimization problems? Also, can you intuitively explain why you expect CNC to be more robust than JTT to Stage 1 predictions? Also, in Table 1, why is the Avg. accuracy of CNC so much worse than other methods on the CivilComments dataset?
Minor
Around expression (5), there is inconsistency around the use of the hat: L̂_align and L_align. Again around the same expression, stick to using either L̂_align(f_θ) or L̂_align(f_enc). The theorem looks intuitive but I could not follow the proof due to notation difficulties.
References
Ahmed, F., Bengio, Y., van Seijen, H. and Courville, A., 2020, September. Systematic generalisation with group invariant predictions. In International Conference on Learning Representations. Arpit, D., Xiong, C.
and Socher, R., 2019. Predicting with high correlation features. arXiv preprint arXiv:1910.00164. Creager, E., Jacobsen, J.H. and Zemel, R., 2021, July. Environment inference for invariant learning. In International Conference on Machine Learning (pp. 2189-2200). PMLR.
ICLR
Title Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations Abstract Spurious correlations pose a fundamental challenge for building robust machine learning models. For example, models trained with empirical risk minimization (ERM) may depend on correlations between class labels and spurious features to classify data, even if these relations only hold for certain data groups. This can result in poor performance on other groups that do not exhibit such relations. When group information is available during training, Sagawa et al. (2019) have shown how to improve worst-group performance by optimizing the worst-group loss (GDRO). However, when group information is unavailable, improving worst-group performance is more challenging. For this latter setting, we propose Correct-NContrast (CNC), a contrastive learning method to train models more robust to spurious correlations. Our motivating observation is that worst-group performance is related to a representation alignment loss, which measures the distance in feature space between different groups within each class. We prove that the gap between worst-group and average loss for each class is upper bounded by the alignment loss for that class. Thus, CNC aims to improve representation alignment via contrastive learning. First, CNC uses an ERM model to infer the group information. Second, with a careful sampling scheme, CNC trains a contrastive model to encourage similar representations for groups in the same class. We show that CNC significantly improves worst-group accuracy over existing state-of-the-art methods on popular benchmarks, e.g., achieving 7.7% absolute lift in worst-group accuracy on the CelebA data set, and performs almost as well as GDRO trained with group labels. CNC also learns better-aligned representations between different groups in each class, reducing the alignment loss substantially compared to prior methods. 1 INTRODUCTION For many tasks, deep neural networks are negatively affected by spurious correlations—dependencies between observed features and class labels that only hold for certain groups of the data. For example, consider classifying images of cows or camels, where 90% of cow images depict grassy backgrounds. A model may learn to predict the “cow” class based on the background, and incorrectly classify cow images with non-grass backgrounds as camels (Ribeiro et al., 2016; Beery et al., 2018; Kaufman et al., 2012). This illustrates a widespread issue where neural networks can achieve low test error on certain groups, yet high error on others (Blodgett et al., 2016; Buolamwini & Gebru, 2018; Hashimoto et al., 2018; Sagawa et al., 2019). Prior works have shown that this problem is increasingly aggravated as the correlations between class labels and spurious features become stronger (Sagawa et al., 2020) and easier to learn (Arpit et al., 2017; Hermann & Lampinen, 2020). Since spurious correlations arise in many settings, we wish to design robust methods that perform well on all groups. How can we obtain neural networks robust to spurious correlations? If group-defining information (i.e. spurious attributes) is known, a common solution is to minimize the worst-group loss, e.g., with group DRO (GDRO) (Sagawa et al., 2019). However, such information may be expensive to collect, and we may not know the spurious attributes a priori in a given data set (Oakden-Rayner et al., 2020). When group information is unavailable, prior works typically take a two-stage approach. 
They first train an ERM model, and then use this model to infer groups and train a more robust model. For example, Sohoni et al. (2020) find that ERM models still learn group-specific features when trained to predict class labels. After first training an ERM model, they infer groups by clustering the ERM model’s representations, and train a new model with GDRO using these inferred groups. Creager et al. (2021) identify groups under which an initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective (Arjovsky et al., 2019). With these groups they train a new model with GDRO or IRM. Nam et al. (2020); Liu et al. (2021) observe that ERM models often misclassify data points in minority groups, and thus train another model with re-weighted or upsampled points misclassified by an initial ERM model. While these methods promisingly leverage ERM-learned biases to significantly improve worst-group error without training group labels, there is still a gap between their robust performance and that of methods such as GDRO that use group labels.

[Figure: comparison of ERM and our approach on Waterbirds examples (landbirds/waterbirds with land/water backgrounds), showing which samples each model classifies correctly, GradCAM visualizations, and sampled contrastive batches.]

In this work, we ask how else we can improve model robustness using a trained ERM model, and aim to close this gap by focusing on improving the learned representations of the robust model in the second stage. We support this direction with two key motivations. First, we find that higher worst-group performance consistently correlates with hidden-layer representations exhibiting higher dependence on class labels than spurious attributes. We quantify this correlation using geometric representation alignment (Wang & Isola, 2020), which measures the closeness of samples with the same class but different spurious attributes in the model feature space, and mutual information. This relation consistently holds across various data sets, and explains when prior upweighting methods improve worst-group error over ERM (Fig. 4). Second, we theoretically show that a model’s representation alignment for a given class can be used to upper bound the gap between its worst-group and average loss for that class. Thus, if we can improve representation alignment for a class, we can reduce the gap between worst-group and average loss for that class.

We thus propose Correct-N-Contrast (CNC), a two-stage procedure using contrastive learning to encourage better representation alignment within each class. In the first stage, we train a regularized ERM model similar to prior work (Liu et al., 2021; Creager et al., 2021), under the premise that ERM predictions help infer group information (i.e., spurious attributes). In the second stage, we wish to improve representation alignment by “pulling together” same-class datapoints and “pushing apart” different-class datapoints, regardless of their individual groups or spurious features. To do so via supervised contrastive learning, we use the heuristic that samples with the same ERM predictions exhibit similar spurious features (and vice versa).
With a randomly sampled anchor, we select samples with the same class but different ERM predictions as “positives” we want to pull together, and samples from different classes but the same ERM prediction as hard “negatives” we want to push apart. Training a second model with this sampling scheme and supervised contrastive learning encourages this model to ignore spurious correlations that the initial ERM model learned, and improves representation alignment between same-class data points. Thus, CNC corrects for the ERM model’s mistakes with contrastive learning in the second model. We evaluate CNC on four popular and diverse spurious correlation benchmarks. Among methods that similarly do not assume training group labels, CNC substantially improves worst-group accuracy, obtaining up to 7.7% absolute lift (from 81.1% to 88.8% on CelebA) over the prior state-of-the-art JTT (Liu et al., 2021), and averaging 3.4% lift across the four tasks. We also find that CNC nearly closes the gap in worst-group accuracy with robust training methods that assume training group labels, only falling short of GDRO’s worst-group accuracy by 0.8% absolute. Finally, we validate that CNC indeed reduces the alignment loss compared to prior methods. This corresponds to an up to 71.1% smaller gap between worst-group versus average accuracy for data points in the same class. Contributions. We summarize our contributions as follows: 1. We empirically show that a model’s worst-group performance correlates with the model’s alignment loss between different groups within a class, and analyze this connection theoretically. 2. We propose CNC, a two-stage contrastive approach to improve representation alignment and thereby learn representations robust to spurious correlations. 3. We validate that CNC significantly improves worst-group accuracy over existing methods on various benchmarks, and learns better-aligned representations less reliant on spurious features. 2 PRELIMINARIES Problem setup. We present our setting and the loss objectives following Sagawa et al. (2019). Let X = {x1, . . . , xn} and Y = {y1, . . . , yn} be a training data set of size n. Each data point has an observed feature vector xi ∈ X , label yi ∈ Y , and unobserved spurious attribute ai ∈ A. The set of groups G is defined as the set of all combinations of class label and spurious attribute pairs, i.e. G = Y ×A. Let C = |Y| be the number of classes and K = |G| be the number of groups. Following the classical supervised learning setting, we assume that each example (xi, yi, ai) is drawn from an unknown joint distribution P . We assume that at least one sample from each group is observed in the training data. Let Pg be the distribution conditioning on (y, a) = g, for any g ∈ G. Given a model fθ : X 7→ RC and a convex loss ` : X × Y 7→ R, let the worst-group loss be: Lwg(fθ) := max g∈G E(x,y,a)∼Pg [`(fθ(x), y)]. (1) ERM minimizes the training loss as a surrogate for the expected population loss Lavg: Lavg(fθ) := E(x,y,a)∼P [`(fθ(x), y)] (2) While ERM is the standard way to train neural nets, spurious correlations often cause ERM to obtain high error on minority groups even when average error is low. Group DRO, which minimizes the empirical version of (1), is recognized as a strong baseline for improving worst-group error when the group labels {a1, . . . , an} are available during training (Sagawa et al., 2019). In contrast, we focus on the more challenging setting in which the group labels are not available during training. Contrastive learning. 
We briefly describe contrastive learning (Chen et al., 2020), a central component of our approach. Let fθ be a neural network model with parameters θ. Let the encoder fenc : X 7→ Rd be the feature representation layers of fθ. Let fcls : Rd 7→ RC be the classification layer of fθ, which maps encoder representations to one-hot label vectors. We learn fenc with the supervised contrastive loss Lsupcon proposed in Khosla et al. (2020). For each anchor x, we sample M positives {x+i }Mi=1 and N negatives {x − i }Ni=1. Let y, {y + i }Mi=1, {y − i }Ni=1 be the labels and z, {z+i }Mi=1, {z − i }Ni=1 be the normalized outputs of fenc(x) for the anchor, positives, and negatives respectively. With input x mapped to z, the training objective for the encoder is to minimize: Lsupcon(x; fenc) = E x,{x+i }Mi=1,{x − i }Nj=1 [ − log exp(z >z+i /τ)∑M m=1 exp(z >z+m/τ) + ∑N n=1 exp(z >z−n /τ) ] (3) where τ > 0 is a scalar temperature hyperparameter. Minimizing Eq. 3 leads to z being closer to z+ than z− in feature space. See Sec. 6 for further references related to contrastive learning. 3 MOTIVATIONS FOR REPRESENTATION ALIGNMENT To motivate our method, we present our core observation that a model’s worst-group accuracy correlates with how well its learned representations depends on the class labels, but not the spurious attributes. First, we empirically observe that ERM learns spurious correlations by inspecting their hidden layer representations on several spuriously correlated data sets. We find that ERM’s worstgroup performance is inversely related to a cross-group alignment loss (cf. Eq. (4) below) and mutual information metrics. Second, we theoretically prove that this alignment loss serves as an upper bound on the gap between the average-group loss and the worst-group loss (cf. Theorem 3.1). 3.1 RELATING WORST-GROUP PERFORMANCE TO REPRESENTATION ALIGNMENT We first show that when neural networks are trained with standard ERM on spuriously correlated data, their hidden layer representations exhibit high dependence on the spurious attribute. We quantify this behavior using representation alignment (cf. Eq. (4) below) and mutual information metrics. We observe that these metrics explain trends in ERM’s worst-group accuracy on various spuriously correlated data sets. This relationship is also consistent and applies to upsampling methods (JTT) that mitigate the impact of spurious features (Liu et al., 2021). We model spurious correlations with CMNIST∗, a colored MNIST data set inspired by Arjovsky et al. (2019). There are 5 digit classes and 5 colors. We color a fraction pcorr of the training samples with a color a associated with each class y, and color the test samples uniform-randomly. To analyze learned representations, we train a LeNet-5 CNN (LeCun et al., 1989) with ERM to predict digit classes, and inspect the outputs of the last hidden layer z = fenc(x). As shown in Fig. 2, with low pcorr, models learn representations with high dependence on the actual digit classes. However, with high pcorr we learn z highly dependent on a, despite only training to predict y. Representation metrics. To quantify this behavior, we use two metrics designed to capture how well the learned representations exhibit dependence on the class label vs. the spurious attributes. First, we compute an alignment loss L̂align(fenc; g, g′) between two groups g = (y, a) and g′ = (y, a′) where a 6= a′. This measures how well fenc maps samples with the same class, but different spurious attributes, to nearby vectors via Euclidean distance. 
Letting G and G′ be the subsets of training data in groups g and g′ respectively, and x and x′ be any two samples in G and G′, we define

\[
\hat{L}_{\mathrm{align}}(f_{\mathrm{enc}}; g, g') := \frac{1}{|G|}\frac{1}{|G'|} \sum_{(x,y,a)\in G} \sum_{(x',y,a')\in G'} \big\| f_{\mathrm{enc}}(x) - f_{\mathrm{enc}}(x') \big\|_2 . \tag{4}
\]

Thus, lower L̂align means better alignment. We also quantify representation dependence by estimating the mutual information (MI) of a model’s learned representations with the class label, i.e. Î(Y;Z), and with the spurious attributes, Î(A;Z). We defer computational details to Appendix E.

Results for ERM. In Fig. 3 we show a strong association between worst-group error and both alignment and mutual information metrics. As pcorr increases, ERM models not only drop in worst-group accuracy, but also incur higher alignment loss (Fig. 3a,b). Fig. 3c further illustrates this with mutual information. We plot the estimated mutual information and worst-group accuracy for models at each epoch. A substantial drop in worst-group accuracy occurs with high Î(A;Z) (especially when Î(A;Z) > Î(Y;Z), even with high Î(Y;Z)). Fig. 3d also captures this trend with a trade-off between high Î(Y;Z) and Î(A;Z) as pcorr increases (Fig. 3a).

Results for JTT. In Fig. 4, we also show that this relation holds when training with another recent (upsampling) approach, JTT (Liu et al., 2021). With high pcorr, models now achieve higher worst-group accuracy, and this corresponds to learning representations with high class-label and low spurious-attribute dependence. We note, however, that previous approaches do not explicitly optimize for these representation metrics, suggesting a new direction to improve worst-group performance.

3.2 RELATING ALIGNMENT LOSS TO WORST-GROUP LOSS

The empirical observations in Fig. 3 suggest that lower alignment loss correlates with lower worst-group error. Next, we show that this connection applies much more generally. We show that the maximum of L̂align(fenc; g, g′), over any two groups g, g′ within the same class, can be used to upper bound the gap between the worst-group loss and the average loss for that class.

We set up some notation before stating the result. For any class label y ∈ Y, let Gy be the set of groups with label y in G. Let Lwg(fθ; y) be the worst-group loss among groups in Gy:

\[
L_{\mathrm{wg}}(f_\theta; y) := \max_{g\in G_y} \; \mathbb{E}_{(x,\tilde{y},a)\sim P_g}\big[\ell(f_\theta(x), \tilde{y})\big].
\]

Let Lavg(fθ; y) be the average loss among groups in Gy:

\[
L_{\mathrm{avg}}(f_\theta; y) := \mathbb{E}_{(x,\tilde{y},a)\sim P}\big[\ell(f_\theta(x), \tilde{y}) \,\big|\, \tilde{y} = y\big].
\]

Additionally, we define a class-specific alignment loss L̂align(fθ; y) among groups in Gy. Recall that fθ consists of an encoding function fenc and a linear classification layer fcls. We define L̂align(fθ; y) as the largest cross-group alignment loss among groups in Gy:

\[
\hat{L}_{\mathrm{align}}(f_\theta; y) := \max_{g, g' \in G_y : \, g \ne g'} \hat{L}_{\mathrm{align}}(f_{\mathrm{enc}}; g, g'), \tag{5}
\]

where L̂align(fenc; g, g′) is the alignment loss between g and g′ defined in Eq. (4). Our main result is that L̂align(fθ; y) upper bounds the gap between Lwg(fθ; y) and Lavg(fθ; y) (up to a norm multiplier and a concentration error), for any y ∈ Y.

Theorem 3.1 (Alignment loss upper bounds the gap between worst-group and average-group loss). In the setting described above, let fθ be any neural network whose linear classification layer fcls has weight matrix W with ‖W‖2 ≤ B, for some constant B. Let ng be the size of any group g ∈ G in the training data set. Assume that the loss function ℓ(x, y) is C1-Lipschitz in x and bounded from above by C2, for some positive constants C1, C2.
Theorem 3.1 (Alignment loss upper bounds the gap between worst-group and average-group loss). In the setting described above, let f_θ be any neural network whose linear classification layer f_cls has weight matrix W satisfying ‖W‖₂ ≤ B, for some constant B. Let n_g be the size of any group g ∈ G in the training data set. Assume that the loss function ℓ(x, y) is C₁-Lipschitz in x and bounded from above by C₂, for some positive constants C₁, C₂. Then, with probability at least 1 − δ over the randomness of the training data set samples, for any class y ∈ Y, the following holds:

$$\mathcal{L}_{\mathrm{wg}}(f_\theta; y) \le \mathcal{L}_{\mathrm{avg}}(f_\theta; y) + B\cdot C_1\cdot\hat{\mathcal{L}}_{\mathrm{align}}(f_\theta; y) + \max_{g\in\mathcal{G}_y} C_2\sqrt{\frac{8\log(|\mathcal{G}_y|/\delta)}{n_g}}. \quad (6)$$

The proof of Theorem 3.1 is deferred to Sec. B. Since we also know that L_avg(f_θ; y) ≤ L_wg(f_θ; y), the above result implies that in order to reduce the gap between the worst-group loss and the average loss for class y, it suffices to reduce the alignment loss L̂_align(f_θ; y).

Broader algorithmic implications. We summarize Section 3 with two takeaways: (1) When trained on spuriously correlated data sets, ERM networks learn data representations that are highly dependent on spurious attributes. Clusters of these representations (Sohoni et al., 2020) or the ERM model's outputs (Liu et al., 2021; Nam et al., 2020) can thus serve as (noisy) pseudolabels for spurious attributes. (2) Both representation metrics correlate with worst-group error, so a viable way to improve worst-group performance is to improve representation alignment within each class.

4 CORRECT-N-CONTRAST (CNC)

We now present CNC, a two-stage method to improve worst-group performance and robustness to spurious correlations without requiring training group labels. Similar to prior works (Sohoni et al., 2020; Liu et al., 2021), our first stage trains an ERM model (with proper regularization¹) on the training set, ultimately to infer group labels based on samples' spurious attributes.

¹As we train on the same data set that we infer the groups on, regularization (via high weight decay or early stopping) serves purely to prevent the ERM model from memorizing the class labels. This is standard practice, also discussed in Sohoni et al. (2020); Liu et al. (2021). We show in Sec. 5.3 that we do not require the ERM model to perfectly learn the spurious attributes for CNC to substantially improve robustness in practice.

Algorithm 1 Correct-N-Contrast (CNC)
Input: training data set (X, Y); number of positives M; number of negatives N; learning rate η; number of epochs K.
Stage 1: ERM training
1: Train a regularized ERM model f_θ̂ on (X, Y); save the predictions ŷ_i := f_θ̂(x_i).
Stage 2: Supervised contrastive learning
2: for each epoch 1, . . . , K do
3:   for each anchor (x, y) ∈ (X, Y) do
4:     Let ŷ be the predicted (group) label of x from Stage 1's ERM model.
5:     Get M positives {(x⁺_m, y⁺_m)} where y⁺_m = y but ŷ⁺_m ≠ ŷ, for m = 1, . . . , M.
6:     Get N negatives {(x⁻_q, y⁻_q)} where y⁻_q ≠ y but ŷ⁻_q = ŷ, for q = 1, . . . , N.
7:     Update f_θ by θ ← θ − η · ∇L̂(f_θ; x, y) (cf. Eq. (7)) with the anchor, M positives, and N negatives.
return the final model f_θ from Stage 2, and throw away the ERM model from Stage 1.

The key difference is our second stage: we aim to train a more robust model by learning representations such that samples in the same class but different groups are close to each other. We use contrastive learning: intuitively, by treating samples with the same class but different spurious attributes as distinct "views" of the same class, we train the second-stage model to "pull together" these samples' representations and ignore the differing spurious features. This is also inspired by Wang & Isola (2020) and Robinson et al. (2021), who show that minimizing the contrastive loss improves representation alignment between distinct "views". Later, in Sec. 5.1, we verify that CNC indeed reduces L̂_align(f_θ; y) substantially. We include further details on both stages below and summarize CNC in Algorithm 1; a code-level sketch follows.
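Below is a self-contained, toy-scale sketch of Algorithm 1 in PyTorch on synthetic data. The model sizes, hyperparameters, and the fallback in the sampler are illustrative assumptions; it is meant to show the control flow of the two stages, not to reproduce the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
X, Y = torch.randn(512, 32), torch.randint(0, 2, (512,))   # toy stand-in for (X, Y)

def make_model():
    # f_theta = f_cls o f_enc: small encoder followed by a linear classification layer.
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16)), nn.Linear(16, 2)

# ---- Stage 1: regularized ERM model, kept only for its predictions y_hat ----
enc1, cls1 = make_model()
opt1 = torch.optim.SGD(list(enc1.parameters()) + list(cls1.parameters()),
                       lr=0.1, weight_decay=1e-1)           # high weight decay as regularization
for _ in range(5):                                           # few epochs (early stopping)
    opt1.zero_grad()
    F.cross_entropy(cls1(enc1(X)), Y).backward()
    opt1.step()
with torch.no_grad():
    Y_hat = cls1(enc1(X)).argmax(dim=1)                      # saved Stage 1 predictions

# ---- Stage 2: contrastive model trained with ERM-prediction-guided batches ----
enc2, cls2 = make_model()
opt2 = torch.optim.SGD(list(enc2.parameters()) + list(cls2.parameters()), lr=0.05)
M, N, tau, lam = 4, 4, 0.1, 0.5

def sample(mask, k):
    idx = mask.nonzero(as_tuple=True)[0]
    if len(idx) == 0:                        # fallback if a pool is empty (toy data only)
        idx = torch.arange(len(X))
    return idx[torch.randint(len(idx), (k,))]

for _ in range(3):                                           # epochs
    for i in torch.randperm(len(X))[:64].tolist():           # anchors
        y, yhat = Y[i], Y_hat[i]
        pos = sample((Y == y) & (Y_hat != yhat), M)          # same class, different prediction
        neg = sample((Y != y) & (Y_hat == yhat), N)          # different class, same prediction
        z = F.normalize(enc2(X[i]), dim=0)
        zp, zn = F.normalize(enc2(X[pos]), dim=1), F.normalize(enc2(X[neg]), dim=1)
        logits = torch.cat([zp @ z, zn @ z]) / tau
        con = -(logits[:M] - torch.logsumexp(logits, dim=0)).mean()   # Eq. (3)
        xb = torch.cat([X[i].unsqueeze(0), X[pos], X[neg]])
        yb = torch.cat([Y[i].unsqueeze(0), Y[pos], Y[neg]])
        ce = F.cross_entropy(cls2(enc2(xb)), yb)
        loss = lam * con + (1 - lam) * ce                              # Eq. (7), defined below
        opt2.zero_grad()
        loss.backward()
        opt2.step()
# The Stage 1 model (enc1, cls1) is discarded; (enc2, cls2) is the returned model.
```

Stage 1: ERM training.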
We train an initial model fθ̂ on the training data set {(xi, yi)} n i=1 with ERM and regularization, and save its predictions {ŷi}ni=1 on the training data points. We consider two ways to source predictions: using the ERM model’s outputs, and clustering its last hidden-layer representations. Both approaches aim to accomplish the same goal of exploiting the ERM model’s learned spurious correlations; further details are in Appendix E.2. Stage 2: Contrastive learning (CL). Next, we train a robust model with supervised contrastive learning using the ERM predictions. While CNC is inspired by recent CL works (Chen et al., 2020; Khosla et al., 2020), we introduce new “contrastive batch” sampling and optimization objectives. Contrastive batch sampling. As described in Sec. 2, contrastive learning requires sampling anchors, positives, and negatives with the general form {x}, {x+}, {x−}. Here, we wish to sample points such that by maximizing the similarity between anchors and positives (and keeping anchors and negatives apart), the Stage 2 model “ignores” spurious similarities while learning class-consistent dependencies. With prediction set {ŷi}ni=1, for each batch we randomly sample an anchor xi ∈ X (with label yi and ERM prediction ŷi), M positives with the same class as yi but a different ERM model prediction than ŷi, and N negatives with different classes as yi but the same ERM model prediction as ŷi. For more signal per batch, we double pairwise comparisons by switching anchor and positive roles. Optimization objective and updating procedure. While our core objective is to learn aligned representations via contrastive learning, we also wish to train the full model to classify datapoints correctly. As we have the training class labels, we jointly update both the model’s encoder layers fenc with a standard contrastive loss, and the full model fθ with a cross-entropy loss: L̂(fθ;x, y) = λL̂supcon(fenc;x, y) + (1− λ)L̂cross(fθ;x, y). (7) In the above, L̂supcon(fenc;x, y) is the supervised contrastive loss of x along with its positive and negative samples, similar to Eq. (3) (see Eq. (16) in Sec. C.2 for the full equation); L̂cross(fθ;x, y) is averaged cross-entropy loss over x, the M positives, and the N negatives; λ ∈ [0, 1] is a balancing hyperparameter. As a remark, the loss objective (7) uses a single anchor in each batch in our setting. To calculate the loss, we first forward propagate one batch ( xi, {x+m}Mm=1, {x−q }Nq=1 ) through fenc and normalize them to obtain representation vectors ( zi, {z+m}Mm=1, {z−q }Nq=1 ) . To learn closely aligned zi and z+ for all {z+m}Mm=1, we update fenc with the L̂ sup out (x; fenc) loss. Finally, we also pass the unnormalized outputs of the encoder fenc to the classifier layers fcls, and compute a batch-wise cross-entropy loss L̂cross(fθ) using each batch sample’s class labels and fθ’s outputs. Due to space constraints, we include further implementation details and sampling considerations in Appendix C. 5 EXPERIMENTAL RESULTS We conduct experiments to answer the following questions: (1) Does CNC improve worst-group performance over prior state-of-the-art methods on data sets with spurious correlations? (2) Does CNC actually encourage learning hidden layer representations with greater alignment and class-labelonly dependence? How is this impacted by the strength of a spurious correlation in the data? (3) Does CNC require perfectly predicting the spurious attribute to work well in practice? 
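Throughout Section 5, the primary robustness metric is worst-group accuracy: the minimum accuracy over the (y, a) groups of the test set. A minimal sketch of this metric is below; it is our own helper and assumes group (spurious-attribute) labels are available at evaluation time, as they are for the benchmarks' test splits.

```python
import torch

def worst_group_accuracy(preds, labels, attrs):
    """preds, labels, attrs: (n,) tensors of predicted classes, true classes, spurious attributes."""
    accs = {}
    for y in labels.unique():
        for a in attrs.unique():
            mask = (labels == y) & (attrs == a)
            if mask.any():                                   # skip empty (y, a) combinations
                accs[(int(y), int(a))] = (preds[mask] == labels[mask]).float().mean().item()
    return min(accs.values()), accs
```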
Our results for each question follow in the next three subsections (5.1, 5.2, and 5.3). Due to space constraints, we defer ablations on CNC's design choices, including the representation-learning objective and sampling procedure, to Appendix A. Additional comparisons to alignment methods proposed for domain adaptation, adjusted for our setting, are in Appendix A.2. Below, we briefly describe the benchmark data sets used in this section. We run CMNIST∗ with p_corr = 0.995. Further details on data sets, models, and experimental hyperparameters are deferred to Appendix E.

Waterbirds (Sagawa et al., 2019): We classify Y = {waterbird, landbird}, where for 95% of the images the bird type matches the background type A = {water background, land background}.
CelebA (Liu et al., 2015): We classify celebrities' hair color Y = {blond, not blond} with A = {male, female}. Only 6% of blond celebrities in the data set are male.
CivilComments-WILDS (Borkan et al., 2019; Koh et al., 2021): We classify Y = {toxic, not toxic} comments. A denotes whether the comment mentions one of eight demographic identities.

5.1 CNC IMPROVES WORST-GROUP PERFORMANCE

To study (1), we evaluate CNC on image classification and NLP data sets with spurious correlations. As baselines, we compare against standard ERM and an oracle GDRO approach that assumes access to the group labels. We also compare against recent methods that tackle spurious correlations without requiring group labels: CVaR DRO (Levy et al., 2020), GEORGE (Sohoni et al., 2020), Learning from Failure (LfF) (Nam et al., 2020), Predictive Group Invariance (PGI) (Ahmed et al., 2021), Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021), Contrastive Input Morphing (CIM) (Taghanaki et al., 2021), and Just Train Twice (JTT) (Liu et al., 2021). We also compare against a CNC variant without the Stage 1 ERM model, which instead samples positives and negatives based only on class (denoted SupCon*). Results are reported in Table 1.

CNC achieves the highest worst-group accuracy among all methods that do not use training group labels on the CMNIST∗, Waterbirds, and CelebA data sets, while also obtaining near-SoTA worst-group accuracy on CivilComments. While LfF, GEORGE, PGI, EIIL, and JTT similarly use a trained ERM model to estimate groups, CNC uniquely uses the ERM predictions to encourage the robust model to learn desirable representations via contrastive learning. We reason that by sampling positives and negatives based on the ERM predictions, CNC more directly encourages the robust model to ignore learnable spurious correlations than previous invariant learning, input transformation, or upweighting approaches. We include additional evidence of this via GradCAM visualizations in Appendix G.

5.2 CNC LEARNS REPRESENTATIONS LESS RELIANT ON SPURIOUS FEATURES

To shed light on CNC's worst-group accuracy gains, we investigate whether models trained with CNC actually learn representations with higher alignment. Compared to ERM and JTT (the next-best performing method that does not require subgroup labels), CNC learns representations with significantly higher alignment (lower alignment loss) and lower mutual information with the spurious attributes, while having comparable mutual information with the class labels (Fig. 5 and Fig. 7). We find that CNC representations consistently exhibit the lowest alignment loss on these data sets; this also corresponds to CNC models achieving the highest worst-group accuracy.
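The per-class alignment loss used in these comparisons can be computed with the sketch following Eq. (5). For the mutual information estimates Î(Y; Z) and Î(A; Z), one simple plug-in estimator, shown below, discretizes the representations by clustering and measures MI against the discrete cluster assignments; this is our own illustration and not necessarily the exact procedure deferred to Appendix E.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import mutual_info_score

def mi_estimates(Z, y, a, n_clusters=10, seed=0):
    """Rough estimates of I(Y; Z) and I(A; Z) from encoder outputs.

    Z: (n, d) array of representations; y: (n,) class labels; a: (n,) spurious attributes.
    """
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(Z)
    return mutual_info_score(y, clusters), mutual_info_score(a, clusters)
```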
Furthermore, while all methods result in representations that exhibit high mutual information with the class label (Fig. 5b), only CNC results in representations that drastically reduce mutual information with spurious attributes (Fig. 5c). In Fig. 6, we also illustrate this result on the Waterbirds data set via UMAP visualizations of the learned representations. Notably, all training methods result in representations separable by class label. Yet ERM models exhibit strong separability by spurious attributes, and JTT models interestingly also still depict some learned dependency on the spurious attribute. However, CNC uniquely learns representations that strongly depict class-label-only dependence. In addition, to study how this relation between representation metrics and worst-group accuracy scales with the strength of the spurious correlation, we compute representation metrics with CNC, ERM, and JTT models trained on increasingly spurious (↑ pcorr) CMNIST∗ data sets in Fig. 7. We observe that with high spurious correlations, ERM fails to classify digits in the minority classes, while CNC and JTT comparably maintain high worst-group accuracy. CNC also performs better in more spurious settings (pcorr > 0.95). These improvements over ERM are reflected by drops in alignment loss (averaged over classes); CNC consistently achieves lowest such loss. Fig. 7c shows that CNC’s learned representations maintain a more favorable balance of mutual information between the class label and spurious attribute than JTT. While JTT models exhibit slightly higher estimated I(Y ;Z) than CNC models, CNC models exhibit much lower dependence on the spurious attribute. 5.3 UNDERSTANDING CNC’S SENSITIVITY TO STAGE 1 PREDICTIONS Finally, we study how sensitive CNC is to how closely the Stage 1 ERM model actually predicts the spurious attribute. As JTT also relies on an initial ERM model’s predictions, we compare CNC to JTT in this regard. We find that CNC is more robust to noisy ERM predictions than JTT, and that CNC does not require perfectly inferred groups to perform well. We first conduct an ablation on CNC and JTT’s worst-group and average performance in Fig. 7d with the following synthetic experiment. On CMNIST∗, we start with the true spurious attribute labels as the Stage 1 “predictions". We then gradually degrade their quality as follows: for each point, with 6 RELATED WORK We build on prior work in group robustness and contrastive learning. Further discussion is in App. D. Robustness to group shift. A variety of approaches aim to improve performance on minority data groups. If group labels are known, many works minimize a rebalanced error similar in motivation to correcting class imbalance (He & Garcia, 2009; Cui et al., 2019) or importance weighting (Shimodaira, 2000; Byrd & Lipton, 2019). More recently, Sagawa et al. (2019) minimize worst-group loss during training. Goel et al. (2020) achieve further lift by synthetically generating additional minority group points. Cao et al. (2019) regularize updates on minority groups to improve their generalization. Another line of work aims to improve group robustness without assuming group labels for the training data. The most similar methods to CNC first train an initial ERM model with class labels as a way to infer groups, and then use these groups to train a second model with better worst-group performance. GEORGE (Sohoni et al., 2020) clusters ERM representations, and runs GDRO with these clusters as inferred groups. 
EIIL (Creager et al., 2021) and PGI (Ahmed et al., 2021) infer groups that maximally violate an invariance objective for the ERM model. With these groups EIIL uses either GDRO or Invariant Risk Minimization (Arjovsky et al., 2019) to train a second robust model, while PGI minimizes the KL divergence of the softmaxed logits for samples in the same class but different groups. LfF (Nam et al., 2020) use a generalized cross-entropy loss to encourage misclassifying minority groups, concurrently training a second model with these datapoints upweighted. JTT (Liu et al., 2021) trains via ERM for a few epochs, before training a second ERM model with incorrect datapoints upsampled. For image data sets, CIM (Taghanaki et al., 2021) trains a transformation network to remove potentially spurious attributes from input features. Contrastive learning (CL). CL works by predicting whether two inputs are “similar” or “dissimilar” (Le-Khac et al., 2020). This involves specifying batches of anchor and positive datapoints similar to each other (as different “views” of the same source or input), and negatives depicting dissimilar points. An encoder is trained to simultaneously maximize the similarity between the feature representations of anchors and positives, and minimize similarity between anchor and negative representations. In unsupervised CL, “negatives” are often sampled uniformly (Bachman et al., 2019), while “positives” are different views of the same object, e.g. via data augmentation (Chen et al., 2020). In supervised CL, negatives are different-class points and positives are same-class points (Khosla et al., 2020). In CNC, we instead treat same-class points with different ERM predictions as positives, and differentclass points with the same ERM prediction as negatives. This naturally provides “hard negative mining,” a challenge for standard CL (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). 7 CONCLUSION We present CNC, a two-stage CL approach to learn representations robust to spurious correlations. We theoretically analyze the connection between alignment and worst-group vs. average-group losses, and show that CNC achieves SOTA or near-SOTA worst-group accuracy across several benchmarks. ETHICS STATEMENT We hope that our work is another step towards the important goal of making machine learning models more fair and robust. However, while our work successfully improves worst-group accuracy, this is not necessarily an end-all be-all metric - other fairness-based metrics may be more suitable in certain settings. Also, misuse of metrics could lead to potential harm. To avoid these pitfalls, it is important for practitioners to understand the limitations and tradeoffs of different metrics, including when applying methods such as ours. REPRODUCIBILITY STATEMENT We have submitted our code as part of the supplementary materials. The datasets we use are publicly available (with the exception of CMNIST∗ which is a modification of the standard MNIST dataset (LeCun et al., 2010); our code to generate this modified dataset is also included). In addition to the details provided in Section 5, further implementation, dataset, and experimental details can be found in Appendix E. For the theory, we include complete proofs of all claims in Appendix B. A ADDITIONAL BENCHMARK COMPARISONS AND ABLATIONS In this section, we include further experiments comparing CNC against additional related methods. We also include additional ablations to study the importance of CNC’s presented design choices. 
A.1 COMPARISON TO MINIMIZING THE ALIGNMENT LOSS DIRECTLY In Sec. 5.1 and Sec. 5.2, we empirically showed that CNC’s contrastive loss and hard positive and negative sampling lead to improved worst-group accuracy and greater representation alignment. We now study how CNC performs if instead of the contrastive loss, we train the Stage 2 model to minimize Lalign directly. With this objective, we aim to minimize the Euclidean distance between samples in different inferred groups but the same class. We keep all other components of CNC consistent, and apply Lalign to the anchor and positive samples in each contrastive batch. We report results on CMNIST∗, Waterbirds, and CelebA in Table A.1. We find that CNC with the default contrastive loss outperforms CNC with the alignment loss. We reason that an advantage of the contrastive loss (and specifically the “hard” positive and negative samples), is that it encourages aligning samples with the same class label but different spurious features, and pushes apart hard negative samples with different class labels but similar spurious features. This provides additional signal for improving separation between the different classes, so the robust model only learns to rely on ground-truth-specific features for discriminating between datapoints. On the other hand, the Lalignment objective does not incorporate these hard negatives. A.2 COMPARISON TO REPRESENTATION ALIGNMENT METHODS FOR DOMAIN GENERALIZATION AND ADAPTATION While our main results in Table 1 compare against methods designed to tackle the spurious correlations setting presented in Section 5.1, we now study how CNC fares against existing representation alignment methods proposed in the domain generalization (DG) and unsupervised domain adaptation (UDA) literature. At a high level, a popular idea in DG and UDA is to learn similar representations for datapoints with the same class but sampled from different domains, e.g. via adversarial training to prevent another model from classifying representations’ source domains correctly (Ganin et al., 2016), or minimizing representation differences via metrics such as maximum mean discrepancy (MMD) (Li et al., 2018). While DG and UDA carry distinct problem settings and assumptions from our spurious correlations setting (c.f. Appendix D.4), we aim to understand if existing representation alignment methods can train models robust to spurious correlations, and compare their performance with CNC. We first explain our protocol for evaluating these methods, and then discuss results. We carry out our evaluation with domain-adversarial neural networks (DANN) Ganin et al. (2016), a seminal UDA method that aims to learn aligned representations across two domains. To do so, DANN jointly trains a model to classify samples from a “source” domain while preventing a separate “domain classifier” module from correctly classifying the domain for datapoints sampled from both domains. For fair comparison, we use the same ResNet-50 backbone as in CNC, and make several adjustments to the typical DANN and UDA procedure: 1. While UDA assumes that the data is organized into “source” and “target” domains, we do not have domain labels. We thus infer domains using the predictions of an initial ERM model as in CNC. 2. The notion of a domain may also be ambiguous with respect to the groups defined in Section 2. For example, domains may be defined by spurious attributes (e.g., for the Waterbirds dataset, we may consider the “water background” domain and the “land background” domain). 
Domains may alternatively be defined by whether samples carry dominant spurious correlations or not (e.g., the “majority group” domain and the “minority group” domain). We train and evaluate separate DANN models for both interpretations. We infer the former by the predicted class of the initial ERM model. We infer the latter by whether the initial ERM model is correct or not. 3. Finally, UDA aims to train with a class-labeled “source” domain and an unlabeled “target” domain such that a model performs well on unseen samples from the specified “target” domain (Ganin et al., 2016). However, our benchmarks have class labels for all training points, and do not have a notion of “source” and “target” domains (we aim to obtain high worst-group accuracy, which could fall under any domain). We thus assume access to labels for all domains. During training, the goal for our DANN models is to correctly classify samples from both domains, while learning representations such that a jointly trained domain classifier module cannot determine the samples’ domains from their representations alone. At test-time, we evaluate the DANN model on the entire test set for each benchmark, and report the worst-group and average accuracies. In Table A.2, we report the worst-group and average accuracies of DANN on the Waterbirds and CelebA datasets across three seeds along with the CNC results. Our results suggest that the domain alignment in DANN is not sufficient to improve worst-group accuracy. We hypothesize this is due to adversarial training with the domain classifier aligning representations without regard to different classes within each domain. Due to the propensity of samples exhibiting spurious correlations, DANN models may thus still learn to rely on these correlations. A.3 IMPORTANCE OF ERM-GUIDED CONTRASTIVE SAMPLING In this section we conduct additional ablations on the sampling procedure in CNC. Although CNC relies on an initial trained ERM model’s predictions, can we still improve worst-group accuracy without this step and with supervised contrastive learning alone, i.e. by sampling positives uniform randomly from all datapoints with the same label as the anchor? In Table 1, we showed that this approach (denoted SupCon∗) led to a drop in worst-group accuracy. Taking this question further, while we use the Stage 1 ERM model’s predictions to sample “hard” negatives with different groundtruth classes and the same ERM predictions as their anchors—such that to reduce the contrastive loss and learn dissimilar representations for anchors and negatives, the Stage 2 contrastive model must thus learn to ignore spurious features that the initial ERM model learns to depend on—how does CNC’s performance fare with alternative negative sampling procedures? Keeping the anchor and positive sampling consistent, we perform additional ablations where we either sample negatives only by having different classes as their anchors, or sample negatives only be having the same ERM model prediction as their anchors. We report these results in Table A.3 below. We find that the default CNC sampling procedure obtains highest worst-group accuracy and highest or near-highest average accuracy compared to alternative strategies across the CMNIST∗, Waterbirds, and CelebA datasets. The results suggests that inferring the spurious attributes (e.g. via an initial ERM model) is important for CNC, and that CNC benefits from using these predictions for sampling both negatives and positives. 
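The three negative-sampling criteria compared here differ only in the pool from which negatives are drawn for a given anchor. A minimal sketch of the pools as boolean masks over the training set (our own variable names; Y holds class labels and Y_hat the Stage 1 ERM predictions):

```python
import torch

def negative_pools(Y, Y_hat, y_anchor, yhat_anchor):
    """Candidate-negative masks for an anchor with class y_anchor and ERM prediction yhat_anchor."""
    diff_class = Y != y_anchor            # ablation: negatives by different class only
    same_pred = Y_hat == yhat_anchor      # ablation: negatives by same ERM prediction only
    default = diff_class & same_pred      # CNC default: different class AND same ERM prediction
    return diff_class, same_pred, default
```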
Table A.3: Negative-sampling ablations. Each cell reports accuracy (%) with standard deviation in parentheses; for each data set we list worst-group (WG) and average (Avg) accuracy.

Strategy                       | CMNIST∗ WG | CMNIST∗ Avg | Waterbirds WG | Waterbirds Avg | CelebA WG  | CelebA Avg
Negatives by different class   | 66.4 (5.1) | 86.0 (1.6)  | 82.2 (0.8)    | 88.9 (0.3)     | 79.2 (0.3) | 88.0 (0.1)
Negatives by same prediction   | 70.0 (5.1) | 87.1 (1.1)  | 85.7 (1.3)    | 90.3 (0.2)     | 81.1 (1.4) | 88.5 (0.3)
SupCon∗                        | 0.0 (0.0)  | 22.4 (1.2)  | 71.0 (1.9)    | 85.9 (0.8)     | 62.2 (1.1) | 90.0 (0.1)
CNC (default)                  | 77.4 (3.0) | 90.9 (0.6)  | 89.7 (0.2)    | 90.8 (0.1)     | 88.8 (0.9) | 89.9 (0.5)

We reason that the ERM-guided sampling matters because, without it, we can actually encourage the Stage 2 model to rely on spurious correlations. For example, if we only ensure that the anchor and negative samples have different classes, then the contrastive model may simply rely on the different spurious features of the anchors and negatives to learn dissimilar representations. However, by ensuring that the anchors and negatives have similar spurious features (via the same trained ERM model prediction), the contrastive model is forced to rely on non-spurious features to learn dissimilar representations for the samples. The same logic applies to learning similar representations for anchor and positive samples. We suspect that choosing negatives from all samples with the same ERM prediction as their anchors performs better than the other ablations because it alone does not encourage learning spurious correlations: the model is asked to "pull apart" samples with the same spurious features, and so must ignore spurious similarities to recognize something different between anchors and negatives. However, this ablation does not ensure that anchor-negative pairs consist of different classes (which our full method does), so the model gets less signal to separate samples by class.

A.4 ADDITIONAL DESIGN CHOICE ABLATIONS

We first summarize CNC's design choices and differences from standard supervised contrastive learning in Appendix A.4.1. We then empirically validate each component in Appendix A.4.2.

A.4.1 SUMMARY OF CNC DESIGN CHOICES AND PROPERTIES

No projection network. As we wish to learn data representations that maximize the alignment between anchor and positive datapoints, we do not compute the contrastive loss on the outputs of an additional nonlinear projection network. This is inspired by the logic justifying a projection head in prior contrastive learning, e.g. SimCLR (Chen et al., 2020), where the head is included because the contrastive loss trains representations to be "invariant to data transformation" and may encourage removing information "such as the color or orientation of objects". In our case, we view inferred datapoints with the same class but different spurious attributes as "transformations" of each other, and we hypothesize that removing these differences can help improve worst-group performance.

Two-sided contrastive sampling. To incorporate additional comparisons between datapoints that differ only in the spurious attribute during training, we employ "two-sided" contrastive batch sampling. This lets the second, contrastive model equally treat datapoints that the initial ERM model got correct and datapoints that it got incorrect as anchors.

Additional intrinsic hard positive/negative mining. Because the new model corrects for potentially learned spurious correlations by only comparing and contrasting datapoints that differ in class label or spurious attribute, but not both (as dictated by the initial ERM model's outputs), the contrastive batches naturally carry "hard" positives and negatives.
Thus, our approach provides a natural form of hard negative mining (in addition to the intrinsic hard positive / negative mining at the gradient level with InfoNCE-style contrastive losses (Chen et al., 2020; Khosla et al., 2020)) while avoiding class collisions, two nontrivial challenges in standard self-supervised contrastive learning (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). Joint training of encoder and classifier layers. CNC can train any standard classification model architecture; for any given neural network we just apply different optimization objectives to the encoder and classifier layers. We train both the encoder and classifier layers with a cross-entropy loss, and jointly train the encoder layer with a supervised contrastive loss. For the encoder layers, we balance the two objectives with a hyperparameter λ (c.f. Eq. 7). A.4.2 EMPIRICAL VALIDATION OF CNC COMPONENTS To validate the additional algorithmic components of CNC, we report how CNC performs on the Waterbirds dataset when modifying the individual design components. We use the same hyperpa- rameters as in the main results, and report accuracies as the average over three training runs for the following ablations. Table A.4 summarizes that across these design ablations, default CNC as presented consistently outperforms these alternative implementations. No projection head. We incorporate a nonlinear projection head as is typical in prior contrastive learning works (Chen et al., 2020), that maps the encoder output to lower-dimensional representations (from 2048 to 128 in our case). We then update the encoder layers and the projection head jointly by computing the contrastive loss on the projection head’s output, still passing the encoder layer’s direct outputs to the classifier to compute the cross-entropy loss. We note that using the projection head decreases worst-group accuracy substantially. We reason that as previously discussed, while using the projection head in prior work can allow the model to retain more information in its actual hidden layers (Chen et al., 2020), in our case to remove dependencies on spurious attributes we actually want to encourage learning invariant representations when we model the differences between anchor and positive datapoints as due to spurious attributes. Two-sided contrastive batches. Instead of “two-sided” contrasting where we allow both sampled anchors and positives to take on the anchor role, for each batch we only compute contrastive updates by comparing original positives and negatives with the original anchor. When keeping everything else the same, we find that just doing these one-sided comparisons also leads to a drop in performance for worst-group accuracy. This suggests that the increased number of comparisons and training setup where we swap the roles of anchors and positives of the two-sided batches introduces greater contrastive learning signal. Additional intrinsic hard positive/negative mining. We discuss this ablation in Section A.3. Joint training of encoder and classifier layers. Instead of training the full model jointly, we first only train the encoder layers with the contrastive loss in CNC, before freezing these layers and finetuning the classifier layers with the cross-entropy loss. With this implementation, we also obtain noticeable drop in performance. 
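To make the last two design choices concrete, the sketch below contrasts CNC's default joint update (Eq. (7)) with the freeze-then-finetune ablation described above. The helper names, and the assumption that the per-batch contrastive and cross-entropy losses have already been computed (e.g., as in the earlier sketches), are ours.

```python
import torch

def joint_update(contrastive_loss, ce_loss, lam, optimizer):
    """CNC default: one step on lam * L_supcon + (1 - lam) * L_cross for the whole model."""
    loss = lam * contrastive_loss + (1 - lam) * ce_loss   # Eq. (7)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

def freeze_then_finetune(enc, cls, lr=1e-3):
    """Ablation: after contrastive-only training of the encoder, freeze it and
    fine-tune only the classifier layers with cross-entropy (reported to perform worse)."""
    for p in enc.parameters():
        p.requires_grad_(False)
    return torch.optim.SGD(cls.parameters(), lr=lr)       # optimizer over the classifier only
```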
While we leave further analysis for the joint cross-entropy and contrastive optimization for future work, one conjecture is that the cross-entropy loss may aid in learning separable representations while also training the full model to keep the average error small. From our theory, the contrastive loss can help bound the gap between worst-group and average error. Thus we try to minimize average error in the same parameter update. This also follows prior work, where updating the entire model and finetuning all model parameters instead of freezing the encoder layers leads to higher accuracy (Chen et al., 2020). However, we found that with an initial encoder-only training stage, if we did not freeze the trained layers the fine-tuning on a dataset with spurious correlations would “revert” the contrastive training, resulting in a large gap between worst-group and average error similar to ERM. We also ablate the balancing hyperparameter λ of CNC on CMNIST∗. In Table A.5 we find that CNC consistently achieves high worst-group accuracy across a wide range of λ ∈ [0.4, 0.9]. For reference, the next best methods GEORGE and JTT obtain 76.4% and 74.5% worst-group accuracy. B OMITTED PROOFS FROM SECTION 3.2 In this section, we prove that within any class, the gap between the worst-group error and the average error can be upper bounded by the alignment loss times the Lipschitz constant, plus another concentration error term. Proof of Theorem 3.1. Consider two arbitrary groups, denoted by g1 = (y, a1) and g2 = (y, a2), whose class labels are both y ∈ Y , whose spurious attributes are a1 ∈ A and a2 ∈ A such that a1 6= a2. Let G1 and G2 be the subset of training data that belong to groups g1 and g2, respectively. We note that both G1 and G2 are non-empty since we have assumed that (in Section 2) there is at least one sample from each group in the training data set. Let ng1 = |G1| and ng2 = |G2| be the size of these two groups, respectively. Recall that fenc denotes the mapping of the encoder layers of the full neural network model fθ. Since the classification layer fcls is a linear layer, we have used W to denote the weight matrix of this layer. Our definition of the cross-group alignment loss in equation (5), denoted as L̂align(fθ; y), implies that for g1 and g2, 1 ng1 1 ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 ‖fenc(x)− fenc(x′)‖2 ≤ L̂align(fθ; y). (8) Next, let E(x,y,a1)∼Pg1 [Lavg(Wfenc(x), y)] be the average loss conditioning on a data point being sampled from group g1 (and similarly for group g2). Let ∆(g1, g2) be the difference between the population average losses: ∆(g1, g2) = ∣∣∣∣∣ E(x,y,a1)∼Pg1 [Lavg(Wfenc(x), y]− E(x,y,a2)∼Pg2 [Lavg(Wfenc(x), y)] ∣∣∣∣∣. Recall that Gy ⊆ G is the set of groups that have class label y. Since the loss `(·) is bounded above by some fixed constant C2 according to our assumption, and is at least zero, by the Hoeffding’s inequality, the following result holds with probability at least 1− δ, for all |Gy| groups g ∈ Gy ,∣∣∣∣∣∣ E(x,y,a)∼Pg [Lavg(Wfenc(x), y)]− 1ng ∑ (x,y)∈(X,Y ) `(Wfenc(x), y) ∣∣∣∣∣∣ ≤ C2 √ 2 log (|Gy| /δ) ng . (9) Thus, with probability at least 1 − δ, the following holds for any g1 and g2 in class y (but having different spurious attributes) ∆(g1, g2) ≤ ∣∣∣∣∣∣ 1ng1 ∑ (x,y,a1)∈G1 Lavg(Wfenc(x), y)− 1 ng2 ∑ (x′,y,a2)∈G2 Lavg(Wfenc(x′), y) ∣∣∣∣∣∣ (10) + C2 (√ 2 log(|Gy| /δ) ng1 + √ 2 log(|Gy| /δ) ng2 ) . Next, we focus on the RHS of equation (10). 
First, equation (10) is also equal to the following:∣∣∣∣∣∣ 1ng1 1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 `(Wfenc(x), y))− 1 ng1 1 ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 `(Wfenc(x ′), y)) ∣∣∣∣∣∣ . Since we have also assumed that the loss function `(x, y) is C1-Lipschitz in x2, the above is at most:∣∣∣∣∣∣ 1ng1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 |`(Wfenc(x), y)− `(Wfenc(x′), y)| ∣∣∣∣∣∣ ≤ 1 ng1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 C1 · ‖Wfenc(x)−Wfenc(x′)‖2 (since y is the same for x, x′) ≤ B ng1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 C1 · ‖fenc(x)− fenc(x′)‖2 (because ‖W‖2 ≤ B as assumed) ≤B · C1 · L̂align(fθ; y). (because of equation (8)) 2In other words, we assume that |`(z, y)− `(z′, y)| ≤ C1 · ‖z − z′‖2, for any z, z′ and y. Thus, we have shown that for any g1 and g2 within class y, ∆(g1, g2) ≤ B · L̂align(fθ; y) + (√ 2 log(|Gy| /δ) ng1 + √ 2 log(|Gy| /δ) ng2 ) ≤ B · C1 · L̂align(fθ; y) + max g∈Gy C2 · √ 8 log(|Gy| /δ) ng . (11) Finally, we use the above result to bound the gap between the worst-group loss and the average loss. For every group g ∈ G, let pg denote the prior probability of observing a sample from P in this group. Let qy = ∑ g′∈Gy pg′ . Let h(g) be a short hand notation for h(g) = E (x,y,a)∼Pg [Lavg(Wfenc(x), y)] . The average loss among the groups with class label y is Lavg(fθ; y) = ∑ g∈Gy pg qy h(g). The worstgroup loss among the groups with class label y is Lwg(fθ; y) = maxg∈Gy h(g). Let g? be a group that incurs the highest loss among groups in Gy . We have Lwg(fθ; y)− Lavg(fθ; y) is equal to h(g?)− ∑ g∈Gy pg qy h(g) = ∑ g∈Gy pg qy (h(g?)− h(g)) (12) ≤ ∑ g∈Gy pg qy ∆(g?, g) (13) ≤B · C1 · L̂align(fθ; y) + max g∈Gy C2 · √ 8 log(|G| /δ) ng . (14) The last step uses equation (11) on ∆(g?, g) and the fact that qy = ∑ g′∈Gy pg′ . Thus, we have shown that the gap between the worst-group loss and the average loss among the groups with the same class label is bounded by the above equation. The proof is now complete. The astute reader will note that Theorem 3.1 focuses on comparing groups within the same class y, for any y ∈ Y . A natural follow-up question is what happens when comparing across groups with different labels. Let Lwg(fθ) = maxy∈Y Lwg(fθ; y) be the worst-group loss across all the labels. Recall that Lavg(fθ) is the average loss for the entire population of data. We generalize Theorem 3.1 to this setting in the following result. Corollary B.1 (Extension of Theorem 3.1 to compare across different classes). In the setting of Theorem 3.1, let qy = ∑ g∈Gy pg be the prior probability of observing a sample drawn from P with label y, for any y ∈ Y . We have that with probability at least 1− δ, the following holds: Lwg(fθ) ≤ ( min y∈Y qy )−1 Lavg(fθ) +B · C1 ·max y∈Y L̂align(fθ; y) + max g∈G C2 · √ 8 log(|G| /δ) ng . (15) Proof. We generalize the argument in the previous result to compare across different labels. The worst-group loss across different labels is max y∈Y max g∈Gy h(g) ≤max y∈Y ∑ g∈Gy pg qy h(g) +B · C1L̂align(fθ; y) + max g∈Gy C2 √ 8 log(|Gy| /δ) ng (because of equation (14)) ≤ 1 miny∈Y qy ∑ g∈Gy pgh(g) +B · C1 max y∈Y L̂align(fθ; y) + max g∈G C2 √ 8 log(|G| /δ) ng . Since ∑ g∈G pgh(g) = Lavg(fθ), we thus conclude that Lwg(fθ) ≤ ( min y∈Y qy )−1 Lavg(fθ) +B · C1 max y∈Y L̂align(fθ; y) + max g∈G C2 √ 8 log(|G| /δ) ng . The proof is now complete. An example showing that Corollary B.1 is tight. We describe a simple example in which the factor( miny∈Y qy )−1 in equation (15) is tight (asymptotically). Suppose there are k perfectly balanced classes so that qy = 1/k, for every y ∈ Y . 
There is one data point from each class, with loss equal to 0 for all except one of them. The worst-group loss is 1 whereas the average loss is 1/k. Thus, there is a factor of k between the worst-group loss and the average loss. For equation (15), the factor( min y∈Y qy )−1 = k, since qy = 1/k for every y ∈ Y in this example. Thus, this factor matches the (multiplicative) factor between the worst-group loss and the average loss in this example. C CONTRASTIVE ALGORITHM DESIGN DETAILS In this section, we provide further details on the training setup and contrastive batch sampling, pseudocode, and additional properties related to CNC’s implementation. C.1 TRAINING SETUP In Fig. 8, we illustrate the two training stages of Correct-N-Contrast described in Sec. 4. In Stage 1, we first train an ERM model with a cross-entropy loss. For consistency with Stage 2, we depict the output as a composition of the encoder and linear classifier layers. Then in Stage 2, we train a new model with the same architecture using contrastive batches sampled with the Stage 1 ERM model and a supervised contrastive loss (3) (which we compute after the depicted representations are first normalized) to update the encoder layers. Note that unlike prior work in contrastive learning (Chen et al., 2020; Khosla et al., 2020), as we have the class labels of the anchors, positives, and negatives, we also continue forward-passing the unnormalized representations (encoder layer outputs) and compute a cross-entropy loss to update the classifier layers while jointly training the encoder. 2048-D 2-D We also note that unlike prior work, we wish to learn invariances between anchors and positives that maximally reduce the presence of features not needed for classification. We thus do not pass the representations through an additional projection network (Chen et al., 2020). Instead, we use Eq. 3 to compute the supervised contrastive loss directly on the encoder outputs z = fenc(x). In Appendix A.4.2, we studied ablations with both design choices. C.2 TWO-SIDED CONTRASTIVE BATCH IMPLEMENTATION We provide more details on our default contrastive batch sampling approach described in Sec. 4. To recall, for additional contrastive signal per batch, we can double the pairwise comparisons in a training batch by switching the anchor and positive roles. This is similar to the NT-Xent loss in prior contrastive learning work (Chen et al., 2020). We switch the role of the anchor and first positive sampled in a contrastive batch, and sample additional positives and negatives using the same guidelines but adjusting for the “new” anchor. We denote this as “two-sided” sampling in contrast with the “one-sided” comparisons we get with just the original anchor, positives, and negatives. Implementing this sampling procedure in practice is simple. First, recall our initial setup with trained ERM model fθ̂, its predictions {ŷi} n i=1 on training data {(xi, yi)}ni=1 (where ŷi = fθ̂(xi)), and number of positives and negatives to sample M and N . We then sample batches with Algorithm 2. Because the initial anchors are then datapoints that the ERM model gets correct, under our heuristic we infer {xi}Mi=1 as samples from the majority group. Similarly the M positives {x+m}Mm=1 and N negatives {x−n }Nn=1 that it gets incorrect are inferred to belong to minority groups. 
For one batch, we then compute the full contrastive loss with L̂supcon(fenc) = L̂supcon ( x1, {x+m}Mm=1, {x−n }Nn=1; fenc ) + L̂supcon ( x+1 , {xi}Mi=1, {x′−n }Nn=1; fenc ) (16) where L̂supcon ( x1, {x+m}Mm=1, {x−n }Nn=1; fenc ) is given by: − 1 M M∑ m=1 log exp(z>1 z + m/τ)∑M m=1 exp(z > 1 z + m/τ) + ∑N n=1 exp(z > 1 z + n /τ) (17) Algorithm 2 Sampling two-sided contrastive batches Require: Number of positives M and number of negatives N to sample for each batch. 1: Initialize set of contrastive batches B = {} 2: for each xi ∈ {xi ∈ X : ŷi = yi} do 3: Sample M − 1 additional “anchors” to obtain {xi}Mi=1 from {xi ∈ X : ŷi = yi} 4: Sample M positives {x+m}Mm=1 from {x−m ∈ X : ŷ−m = ŷi, y−m 6= yi} 5: Sample N negatives {x−n }Nn=1 from {x−n ∈ X : ŷ−n = ŷi, y−n 6= yi} 6: Sample N negatives {x′−n }Nn=1 from {x′−n ∈ X : ŷ′−n = ŷ+1 , y′−n 6= y + 1 } 7: Update contrastive batch set: B ← B ∪ ( {xi}Mi=1, {x+m}Mm=1, {x−n }Nn=1, {x′−n }Nn=1 ) and again let z be the normalized output fenc(x) for corresponding x. We compute the cross-entropy component of the full loss for each x in the two-sided batch with its corresponding label y. D FURTHER RELATED WORK DISCUSSION We provide additional discussion of related work and connections to our work below. D.1 IMPROVING ROBUSTNESS TO SPURIOUS CORRELATIONS Our core objective is to improve model robustness to group or subpopulation distribution shifts that arise from the presence of spurious correlations, specifically for classification tasks. Because these learnable correlations hold for some but not all samples in a dataset, standard training with ERM may result in highly variable performance: a model that classifies datapoints based on spurious correlations does well for some subsets or “groups” of the data but not others. To improve model robustness and avoid learning spurious correlations, prior work introduces the goal to maximize worst-group accuracy (Sagawa et al., 2019). Related works broadly fall under two categories: Improving robustness with group information. If information such as spurious attribute labels is provided, one can divide the data into explicit groups as defined in Sec. 2, and then train to directly minimize the worst group-level error among these groups. This is done in group DRO (GDRO) (Sagawa et al., 2019), where the authors propose an online training algorithm that focuses training updates over datapoints from higher-loss groups. Goel et al. (2020) also adopt this approach with their method CycleGAN Augmented Model Patching (CAMEL). However, similar to our motivation, they argue that a stronger modeling goal should be placed on preventing a model from learning group-specific features. Their approach involves first training a CycleCAN (Zhu et al., 2017) to learn the data transformations from datapoints in one group to another that share the same class label. They then apply these transformations as data augmentations to different samples, intuitively generating new versions of the original samples that take on group-specific features. Finally they train a new model with a consistency regularization objective to learn invariant features between transformed samples and their sources. Unlike their consistency loss, we accomplish a similar objective to learn group-invariant features with contrastive learning. Our first training stage is also less expensive. 
Instead of training a CycleGAN and then using it to augment datapoints, we train a relatively simple standard ERM classification model, sometimes with only a few number of epochs, and use its predictions to identify pairs of datapoints to serve a similar purpose. Finally, unlike both CAMEL and GDRO, we do not require spurious attribute or group labels for each training datapoints. We can then apply CNC in less restrictive settings where such information is not known. Related to GDRO are methods that aim to optimize a "Pareto-fair" objective, more general than simply the worst-case group performance. Notable examples are the works of Balashankar et al. (2019) and Martinez et al. (2020). However, these approaches similarly do not directly optimize for good representation alignment (unlike our work). Improving robustness without training group information. More similar to our approach are methods that do not assume group information at training time, and only require validation set spurious attribute labels for fine-tuning. As validation sets are typically much smaller in size than training sets, an advantage of CNC and comparable methods is that we can improve the accessibility of robust training methods to a wider set of problems. One popular line of work is distributionally robust optimization (DRO), which trains models to minimize the worst loss within a ball centered around the observed distribution (Ben-Tal et al., 2013; Wiesemann et al., 2014; Duchi & Namkoong, 2019; Levy et al., 2020; Curi et al., 2020; Oren et al., 2019). This includes the CVaR DRO (Levy et al., 2020) method we evaluate against. However, prior work has shown that these approaches may be too pessimistic, optimizing not just for worst-group accuracy but worst possible accuracy within the distribution balls (Sagawa et al., 2019), or too undirected, optimizing for too many subpopulations, e.g. by first upweighting minority points but then upweighting majority points in later stages of training (Liu et al., 2021). Pezeshki et al. (2020) instead suggest that gradient starvation (GS), where neural networks only learn to capture statistically dominant features in the data (Combes et al., 2018), is the main culprit behind learning spurious correlations, and introduce a “spectral decoupling” regularizer to alleviate GS. However this does not directly prevent models from learning dependencies on spurious attributes. Similar to CAMEL, Taghanaki et al. (2021) propose Contrastive Input Morphing (CIM), an image dataset-specific method that aims to learn input feature transformations that remove the effects of spurious or task-irrelevant attributes. They do so without group labels, training a transformation network with a triplet loss to transform input images such that a given transformed image’s structural similarity metric (based on luminance, contrast, and structure (Wang et al., 2003)) is more similar to a “positive” image from the same class than a “negative” image from a different class. They then train a classifier on top of these representations. Instead of pixel-level similarity metrics, CNC enforces similarity in a neural network’s hidden-layer representations, allowing CNC to apply to non-image modalities. Additionally, we sample positives and negatives not just based on class label, but also the learned spurious correlations of an ERM model (via its trained predictions). 
We hypothesize that our sampling scheme, which intuitively provides "harder" positive and negative examples, allows CNC to more strongly overcome spurious correlations. Most similar to our approach are methods that first train an initial ERM model with the class labels as a way to identify data points belonging to minority groups, and subsequently train an additional model with greater emphasis on the estimated minority groups. Sohoni et al. (2020) demonstrate that even when only trained on the class labels, neural networks learn feature representations that can be clustered into groups of data exhibiting different spurious attributes. They use the resulting cluster labels as estimated group labels before running GDRO on these estimated groups. Meanwhile, Nam et al. (2020) train a pair of models, where one model minimizes a generalized cross-entropy loss (Zhang & Sabuncu, 2018), such that the datapoints this model classifies incorrectly largely correspond to those in the minority group. They then train the other model on the same data but upweight the minority-group-estimated points. While they interweave training of the biased and robust model, Liu et al. (2021) instead train one model first with a shortened training time (but the standard cross-entropy objective), and show that then upsampling the incorrect data points and training another model with ERM can yield higher worst-group accuracy. Creager et al. (2021) first train an ERM model, and then softly assign the training data into groups under which the initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective. In particular, the IRM objective is maximally satisfied if a model’s optimal classifier is the same across groups (Arjovsky et al., 2019), and EIIL groups are inferred such that the initial ERM model’s representations exhibit maximum variance within each group. Finally, Nagarajan et al. (2020) provides a theoretical understanding of how ERM picks up spurious features under data set imbalance. They consider a setting involve a single spurious feature that is correlated with the class label and analyze the max-margin classifier in the presence of this spurious feature. In our work, we demonstrate that the ERM model’s predictions can be leveraged to not only estimate groups and train a new model with supervised learning but with different weightings. Instead, we can specifically identify pairs of points that a contrastive model can then learn invariant features between. Our core contribution comes from rethinking the objective with a contrastive loss that more directly reduces the model’s ability to learning spurious correlations. D.2 CONTRASTIVE LEARNING Our method also uses contrastive learning, a simple yet powerful framework for both self-supervised (Chen et al., 2020; Oord et al., 2018; Tian et al., 2019; Song & Ermon, 2020; Sermanet et al., 2018; Hassani & Khasahmadi, 2020; Robinson et al., 2021) and supervised (Khosla et al., 2020; Gunel et al., 2021) representation learning. The core idea is to learn data representations that maximize the similarity between a given input “anchor” and distinct different views of the same input (“positives”). Frequently this also involves contrasting positives with “negative” data samples without any assumed relation to the anchor (Bachman et al., 2019). Core components then include some way to source multiple views, e.g. 
with data transformations (Chen et al., 2020), and training objectives similar to noise contrastive estimation (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013). An important component of contrastive learning is the method by which appropriate positives and negatives are gathered. For sampling positives, Chen et al. (2020) show that certain data augmentations (e.g. crops and cutouts) may be more beneficial than others (e.g. Gaussian noise and Sobel filtering) when generating anchors and positives for unsupervised contrastive learning. von Kügelgen et al. (2021) theoretically study how data augmentations help contrastive models learn core content attributes which are invariant to different observed “style changes”. They propose a latent variable model for self-supervised learning. Tian et al. (2020) further study what makes good views for contrastive learning. They propose an “InfoMin principle”, where anchors and positives should share the least information necessary for the contrastive model to do well on the downstream task. For sampling negatives, Robinson et al. (2021) show that contrastive learning also benefits from using “hard” negatives, which (1) are actually a different class from the anchor (which they approximate in the unsupervised setting) and (2) embed closest to the anchor under the encoder’s current data representation. Both of these approaches capture the principle that if positives are always too similar to the anchor and negatives are always too different, then contrastive learning may be inefficient at learning generalizable representations of the underlying classes. In our work, we incorporate this principle by sampling data points with the same class label but different ERM predictions–presumably because of spurious attribute differences–as anchor and positive views, while sampling negatives from data points with different class labels but the same ERM prediction as the anchor. The anchors and positives are different enough that a trained ERM model predicted them differently, while the anchors and negatives are similar enough that the trained ERM model predicted them the same. Contrasting the above then allows us to exploit both “hard” positive and negative criteria for our downstream classification task. In Appendix A.3, we show that removing this ERM-guided sampling (i.e. only sampling positives and negatives based on class information), as well as trying different negative sampling procedures, leads to substantially lower worst-group accuracy with CNC. One limitation of our current theoretical analysis regarding the alignment loss (cf. Section 3.2) is that we require knowing the group labels to compute the RHS of equation (6) (in particular, the alignment loss). An interesting question for future work is to provide a better theoretical understanding of the alignment induced by CNC in the context of spurious correlations. D.3 LEARNING INVARIANT REPRESENTATIONS Our work is also similar in motivation to Invariant Risk Minimization (IRM) (Arjovsky et al., 2019), Predictive Group Invariance (PGI) (Ahmed et al., 2021), and other related works in domain-invariant learning (Krueger et al., 2020; Parascandolo et al., 2020; Ahuja et al., 2020; Creager et al., 2021). These methods aim to train models that learn a single invariant representation that is consistently optimal (e.g. with respect to classifying data) across different domains or environments. 
These environments can be thought of as data groups, and while traditionally methods such as IRM require that environment labels are known, recent approaches such as Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021) and Predictive Group Invariance (PGI) (Ahmed et al., 2021) similarly aim to infer environments with an initial ERM model. In EIIL, they next train a more robust model with an invariant learning objective, similarly selecting models based on the worst-group error on the validation set. However, they train this model using IRM or Group DRO with the inferred environments as group labels
1. What is the focus of the paper regarding model robustness and group shifts? 2. What are the strengths of the proposed approach, particularly in its empirical performance? 3. What are the weaknesses of the paper, especially regarding the assumption in Theorem 3.1 and the lack of discussion on related work? 4. Do you have any concerns about the two-stage method and its components, including the use of ERM prediction and contrastive part? 5. Are there any additional questions or comments you have regarding the paper's content, such as the discrepancy in the gap calculation between CNC and GDRO?
Summary Of The Paper Review
Summary Of The Paper This paper focuses on improving the model robustness to group shifts without prior group information and bridges the gap to those methods with access to group labels (i.e., GDRO). It identifies the relation between the worst-group performance and representation alignment both empirical and theoretically, which motivates a contrastive approach for improving representation alignment and robustness. Empirically, the proposed method demonstrates improved worst-group performance over existing baselines. Review Strengths This paper focuses on improving model robustness to group shifts in a practical setting where group information is not available. The proposed method achieves SOTA worst-group performance to be close to GDRO which uses the true group labels. Some interesting empirical analysis are presented for relating the representation alignment with worst-group performance. Weakness In general, the observation is not surprising, and the idea of aligning representation for improving model robustness is not novel. There are a lot of work with similar ideas in domain generalization/adaptation literature, e.g., [1], [2]. There’s also a recent work [3] that applies contrastive learning for doing so. A more comprehensive discussion for these related work needs to be included. The assumption of Theorem 3.1 is not well explained and motivated. In particular, the assumption that “the loss function l(x; y) is 1-Lipschitz in x and bounded from above by one.” seems to be necessary and simplify the proof a lot, but does not hold for typical losses like cross-entropy for classification and MSE for regression. Though the proposed contrastive method leads to improved worst-group performance, it seems to decrease the average-case performance compared to baselines. More crucially, neither part of the two-stage method is justified with sufficient motivation and empirical evidence, as detailed below: Using ERM prediction as the group label is not convincing enough, and it is not clear how it would affect the contrastive part. It could be interesting to more extensively analyze how the label prediction affects the improvement given by the contrastive method, probably using a scientific setup where the label prediction is controlled. For the contrastive part, the current empirical comparison obfuscates the advantage on its own. To decouple it from the effect of wrong group prediction, it is important to compare in the setting where group labels are available, i.e., GDRO vs GDRO + contrastive. Also, there could be a lot of choices of negative selections but only one is used without sufficient explanations, it would be great to include more explanation or compare with some other possible choices as an ablation study. Additional questions & comments In the last paragraph of introduction, it is claimed that “...only falling short of GDRO’s worst-group accuracy by 0.3% absolute…”. However, in Table 1, the gaps between CNC and GDRO are 1.1%, 1.4%, 0.1%, 0.7%, respectively on each dataset, how is the 0.3% gap calculated? [1] Domain-Adversarial Training of Neural Networks [2] Domain Generalization with Adversarial Feature Learning [3] Cross-domain Contrastive Learning for Unsupervised Domain Adaptation
ICLR
Title Correct-N-Contrast: a Contrastive Approach for Improving Robustness to Spurious Correlations Abstract Spurious correlations pose a fundamental challenge for building robust machine learning models. For example, models trained with empirical risk minimization (ERM) may depend on correlations between class labels and spurious features to classify data, even if these relations only hold for certain data groups. This can result in poor performance on other groups that do not exhibit such relations. When group information is available during training, Sagawa et al. (2019) have shown how to improve worst-group performance by optimizing the worst-group loss (GDRO). However, when group information is unavailable, improving worst-group performance is more challenging. For this latter setting, we propose Correct-NContrast (CNC), a contrastive learning method to train models more robust to spurious correlations. Our motivating observation is that worst-group performance is related to a representation alignment loss, which measures the distance in feature space between different groups within each class. We prove that the gap between worst-group and average loss for each class is upper bounded by the alignment loss for that class. Thus, CNC aims to improve representation alignment via contrastive learning. First, CNC uses an ERM model to infer the group information. Second, with a careful sampling scheme, CNC trains a contrastive model to encourage similar representations for groups in the same class. We show that CNC significantly improves worst-group accuracy over existing state-of-the-art methods on popular benchmarks, e.g., achieving 7.7% absolute lift in worst-group accuracy on the CelebA data set, and performs almost as well as GDRO trained with group labels. CNC also learns better-aligned representations between different groups in each class, reducing the alignment loss substantially compared to prior methods. 1 INTRODUCTION For many tasks, deep neural networks are negatively affected by spurious correlations—dependencies between observed features and class labels that only hold for certain groups of the data. For example, consider classifying images of cows or camels, where 90% of cow images depict grassy backgrounds. A model may learn to predict the “cow” class based on the background, and incorrectly classify cow images with non-grass backgrounds as camels (Ribeiro et al., 2016; Beery et al., 2018; Kaufman et al., 2012). This illustrates a widespread issue where neural networks can achieve low test error on certain groups, yet high error on others (Blodgett et al., 2016; Buolamwini & Gebru, 2018; Hashimoto et al., 2018; Sagawa et al., 2019). Prior works have shown that this problem is increasingly aggravated as the correlations between class labels and spurious features become stronger (Sagawa et al., 2020) and easier to learn (Arpit et al., 2017; Hermann & Lampinen, 2020). Since spurious correlations arise in many settings, we wish to design robust methods that perform well on all groups. How can we obtain neural networks robust to spurious correlations? If group-defining information (i.e. spurious attributes) is known, a common solution is to minimize the worst-group loss, e.g., with group DRO (GDRO) (Sagawa et al., 2019). However, such information may be expensive to collect, and we may not know the spurious attributes a priori in a given data set (Oakden-Rayner et al., 2020). When group information is unavailable, prior works typically take a two-stage approach. 
They first train an ERM model, and then use this model to infer groups and train a more robust model. For example, Sohoni et al. (2020) find that ERM models still learn group-specific features when trained to predict class labels. After first training an ERM model, they infer groups by clustering the ERM model's representations, and train a new model with GDRO using these inferred groups. Creager et al. (2021) identify groups under which an initially trained ERM model would maximally violate the invariant risk minimization (IRM) objective (Arjovsky et al., 2019). With these groups they train a new model with GDRO or IRM. Nam et al. (2020) and Liu et al. (2021) observe that ERM models often misclassify data points in minority groups, and thus train another model with re-weighted or upsampled points misclassified by an initial ERM model. While these methods promisingly leverage ERM-learned biases to significantly improve worst-group error without training group labels, there is still a gap between their robust performance and that of methods such as GDRO that use group labels.

[Figure: GradCAM visualizations and sampled contrastive batches for ERM vs. our method across the Waterbirds groups (landbird/waterbird, land/water background).]

In this work, we ask how else we can improve model robustness using a trained ERM model, and aim to close this gap by focusing on improving the learned representations of the robust model in the second stage. We support this direction with two key motivations. First, we find that higher worst-group performance consistently correlates with hidden-layer representations exhibiting higher dependence on class labels than spurious attributes. We quantify this correlation using geometric representation alignment (Wang & Isola, 2020), which measures the closeness of samples with the same class but different spurious attributes in the model feature space, and mutual information. This relation consistently holds across various data sets, and explains when prior upweighting methods improve worst-group error over ERM (Fig. 4). Second, we theoretically show that a model's representation alignment for a given class can be used to upper bound the gap between its worst-group and average loss for that class. Thus, if we can improve representation alignment for a class, we can reduce the gap between worst-group and average loss for that class.

We thus propose Correct-N-Contrast (CNC), a two-stage procedure using contrastive learning to encourage better representation alignment within each class. In the first stage, we train a regularized ERM model similar to prior work (Liu et al., 2021; Creager et al., 2021), under the premise that ERM predictions help infer group information (i.e., spurious attributes). In the second stage, we wish to improve representation alignment by "pulling together" same-class datapoints and "pushing apart" different-class datapoints, regardless of their individual groups or spurious features. To do so via supervised contrastive learning, we use the heuristic that samples with the same ERM predictions exhibit similar spurious features (and vice versa).
With a randomly sampled anchor, we select samples with the same class but different ERM predictions as “positives” we want to pull together, and samples from different classes but the same ERM prediction as hard “negatives” we want to push apart. Training a second model with this sampling scheme and supervised contrastive learning encourages this model to ignore spurious correlations that the initial ERM model learned, and improves representation alignment between same-class data points. Thus, CNC corrects for the ERM model’s mistakes with contrastive learning in the second model. We evaluate CNC on four popular and diverse spurious correlation benchmarks. Among methods that similarly do not assume training group labels, CNC substantially improves worst-group accuracy, obtaining up to 7.7% absolute lift (from 81.1% to 88.8% on CelebA) over the prior state-of-the-art JTT (Liu et al., 2021), and averaging 3.4% lift across the four tasks. We also find that CNC nearly closes the gap in worst-group accuracy with robust training methods that assume training group labels, only falling short of GDRO’s worst-group accuracy by 0.8% absolute. Finally, we validate that CNC indeed reduces the alignment loss compared to prior methods. This corresponds to an up to 71.1% smaller gap between worst-group versus average accuracy for data points in the same class. Contributions. We summarize our contributions as follows: 1. We empirically show that a model’s worst-group performance correlates with the model’s alignment loss between different groups within a class, and analyze this connection theoretically. 2. We propose CNC, a two-stage contrastive approach to improve representation alignment and thereby learn representations robust to spurious correlations. 3. We validate that CNC significantly improves worst-group accuracy over existing methods on various benchmarks, and learns better-aligned representations less reliant on spurious features. 2 PRELIMINARIES Problem setup. We present our setting and the loss objectives following Sagawa et al. (2019). Let X = {x1, . . . , xn} and Y = {y1, . . . , yn} be a training data set of size n. Each data point has an observed feature vector xi ∈ X , label yi ∈ Y , and unobserved spurious attribute ai ∈ A. The set of groups G is defined as the set of all combinations of class label and spurious attribute pairs, i.e. G = Y ×A. Let C = |Y| be the number of classes and K = |G| be the number of groups. Following the classical supervised learning setting, we assume that each example (xi, yi, ai) is drawn from an unknown joint distribution P . We assume that at least one sample from each group is observed in the training data. Let Pg be the distribution conditioning on (y, a) = g, for any g ∈ G. Given a model fθ : X 7→ RC and a convex loss ` : X × Y 7→ R, let the worst-group loss be: Lwg(fθ) := max g∈G E(x,y,a)∼Pg [`(fθ(x), y)]. (1) ERM minimizes the training loss as a surrogate for the expected population loss Lavg: Lavg(fθ) := E(x,y,a)∼P [`(fθ(x), y)] (2) While ERM is the standard way to train neural nets, spurious correlations often cause ERM to obtain high error on minority groups even when average error is low. Group DRO, which minimizes the empirical version of (1), is recognized as a strong baseline for improving worst-group error when the group labels {a1, . . . , an} are available during training (Sagawa et al., 2019). In contrast, we focus on the more challenging setting in which the group labels are not available during training. Contrastive learning. 
We briefly describe contrastive learning (Chen et al., 2020), a central component of our approach. Let fθ be a neural network model with parameters θ. Let the encoder fenc : X → R^d be the feature representation layers of fθ. Let fcls : R^d → R^C be the classification layer of fθ, which maps encoder representations to one-hot label vectors. We learn fenc with the supervised contrastive loss Lsupcon proposed in Khosla et al. (2020). For each anchor x, we sample M positives {x_i^+}_{i=1}^M and N negatives {x_i^-}_{i=1}^N. Let y, {y_i^+}_{i=1}^M, {y_i^-}_{i=1}^N be the labels and z, {z_i^+}_{i=1}^M, {z_i^-}_{i=1}^N be the normalized outputs of fenc for the anchor, positives, and negatives respectively. With input x mapped to z, the training objective for the encoder is to minimize:

$$\mathcal{L}_{\mathrm{supcon}}(x; f_{\mathrm{enc}}) = \mathbb{E}_{x,\,\{x_i^+\}_{i=1}^M,\,\{x_j^-\}_{j=1}^N}\left[-\log \frac{\exp(z^\top z_i^+/\tau)}{\sum_{m=1}^{M}\exp(z^\top z_m^+/\tau) + \sum_{n=1}^{N}\exp(z^\top z_n^-/\tau)}\right] \qquad (3)$$

where τ > 0 is a scalar temperature hyperparameter. Minimizing Eq. 3 leads to z being closer to z^+ than z^- in feature space. See Sec. 6 for further references related to contrastive learning.

3 MOTIVATIONS FOR REPRESENTATION ALIGNMENT

To motivate our method, we present our core observation that a model's worst-group accuracy correlates with how well its learned representations depend on the class labels, but not the spurious attributes. First, we empirically observe that ERM learns spurious correlations by inspecting its hidden-layer representations on several spuriously correlated data sets. We find that ERM's worst-group performance is inversely related to a cross-group alignment loss (cf. Eq. (4) below) and mutual information metrics. Second, we theoretically prove that this alignment loss serves as an upper bound on the gap between the average-group loss and the worst-group loss (cf. Theorem 3.1).

3.1 RELATING WORST-GROUP PERFORMANCE TO REPRESENTATION ALIGNMENT

We first show that when neural networks are trained with standard ERM on spuriously correlated data, their hidden-layer representations exhibit high dependence on the spurious attribute. We quantify this behavior using representation alignment (cf. Eq. (4) below) and mutual information metrics. We observe that these metrics explain trends in ERM's worst-group accuracy on various spuriously correlated data sets. This relationship is also consistent and applies to upsampling methods (JTT) that mitigate the impact of spurious features (Liu et al., 2021). We model spurious correlations with CMNIST∗, a colored MNIST data set inspired by Arjovsky et al. (2019). There are 5 digit classes and 5 colors. We color a fraction pcorr of the training samples with a color a associated with each class y, and color the test samples uniform-randomly. To analyze learned representations, we train a LeNet-5 CNN (LeCun et al., 1989) with ERM to predict digit classes, and inspect the outputs of the last hidden layer z = fenc(x). As shown in Fig. 2, with low pcorr, models learn representations with high dependence on the actual digit classes. However, with high pcorr we learn z highly dependent on a, despite only training to predict y.

Representation metrics. To quantify this behavior, we use two metrics designed to capture how well the learned representations exhibit dependence on the class label vs. the spurious attributes. First, we compute an alignment loss L̂align(fenc; g, g′) between two groups g = (y, a) and g′ = (y, a′) where a ≠ a′. This measures how well fenc maps samples with the same class, but different spurious attributes, to nearby vectors via Euclidean distance.
Letting G and G′ be the subsets of training data in groups g and g′ respectively, and x and x′ be any two samples in G and G′, we define:

$$\hat{\mathcal{L}}_{\mathrm{align}}(f_{\mathrm{enc}}; g, g') := \frac{1}{|G|}\frac{1}{|G'|}\sum_{(x,y,a)\in G}\;\sum_{(x',y,a')\in G'}\left\|f_{\mathrm{enc}}(x) - f_{\mathrm{enc}}(x')\right\|_2. \qquad (4)$$

Thus, lower L̂align means better alignment. We also quantify representation dependence by estimating the mutual information (MI) of a model's learned representations with the class label, i.e. Î(Y;Z), and the spurious attributes, Î(A;Z). We defer computational details to Appendix E.

Results for ERM. In Fig. 3 we show a strong association between worst-group error and both alignment and mutual information metrics. As pcorr increases, ERM models not only drop in worst-group accuracy, but also incur higher alignment loss (Fig. 3ab). Fig. 3c further illustrates this with mutual information. We plot the estimated mutual information and worst-group accuracy for models at each epoch. A substantial drop in worst-group accuracy occurs with high Î(A;Z) (especially when Î(A;Z) > Î(Y;Z), even with high Î(Y;Z)). Fig. 3d also captures this trend, with a trade-off between high Î(Y;Z) and Î(A;Z) as pcorr increases (Fig. 3a).

Results for JTT. In Fig. 4, we also show that this relation holds when training with another recent (upsampling) approach, JTT (Liu et al., 2021). With high pcorr, models now achieve higher worst-group accuracy, and this corresponds to learning representations with high class label and low spurious attribute dependence. We note however that previous approaches do not explicitly optimize for these representation metrics, suggesting a new direction to improve worst-group performance.

3.2 RELATING ALIGNMENT LOSS TO WORST-GROUP LOSS

The empirical observations in Fig. 3 suggest that lower alignment loss correlates with lower worst-group error. Next, we show that this connection applies much more generally. We show that the maximum of L̂align(fenc; g, g′), over any two groups g, g′ within the same class, can be used to upper bound the gap between the worst-group loss and average loss for that class. We set up several notations before stating the result. For any class label y ∈ Y, let Gy be the set of groups with label y in G. Let Lwg(fθ; y) be the worst-group loss among groups in Gy:

$$\mathcal{L}_{\mathrm{wg}}(f_\theta; y) := \max_{g \in \mathcal{G}_y}\; \mathbb{E}_{(x,\tilde{y},a)\sim P_g}\left[\ell(f_\theta(x), \tilde{y})\right].$$

Let Lavg(fθ; y) be the average loss among groups in Gy (the expectation over P restricted to label y, with a ranging over all of A):

$$\mathcal{L}_{\mathrm{avg}}(f_\theta; y) := \mathbb{E}_{(x,\tilde{y},a)\sim P\,:\,\tilde{y}=y}\left[\ell(f_\theta(x), \tilde{y})\right].$$

Additionally, we define a class-specific alignment loss L̂align(fenc; y) among groups in Gy. Recall that fθ involves an encoding function fenc and a linear classification layer fcls. We define L̂align(fenc; y) as the largest cross-group alignment loss among groups in Gy:

$$\hat{\mathcal{L}}_{\mathrm{align}}(f_\theta; y) := \max_{g,\, g' \in \mathcal{G}_y\,:\, g \neq g'} \hat{\mathcal{L}}_{\mathrm{align}}(f_{\mathrm{enc}}; g, g'), \qquad (5)$$

where L̂align(fenc; g, g′) is the alignment loss between g and g′ defined in Eq. (4). Our main result is that L̂align(fθ; y) is an upper bound on the gap between Lwg(fθ; y) and Lavg(fθ; y) (up to a norm multiplier and a concentration error), for any y ∈ Y.

Theorem 3.1 (Alignment loss upper bounds the gap between worst-group and average-group loss). In the setting described above, let fθ be any neural network such that the weight matrix W of the linear classification layer fcls satisfies ‖W‖2 ≤ B, for some constant B. Let ng be the size of any group g ∈ G in the training data set. Assume that the loss function ℓ(x, y) is C1-Lipschitz in x and bounded from above by C2, for some positive constants C1, C2.
Then, with probability at least 1− δ over the randomness of the training data set samples, for any class y ∈ Y , the following holds: Lwg(fθ; y) ≤ Lavg(fθ; y) +B · C1 · L̂align(fθ; y) + max g∈Gy C2 √ 8 log(|Gy|/δ) ng . (6) The proof of Theorem 3.1 is deferred to Sec. B. Since we also know that Lavg(fθ; y) ≤ Lwg(fθ; y), the above result implies that in order to reduce the gap between the worst-group loss and the average loss for class y, it suffices to reduce the alignment loss L̂align(fθ; y). Broader algorithmic implications. We summarize Section 3 with two takeaways: (1) When trained on spuriously correlated data sets, ERM networks learn data representations highly dependent on spurious attributes. Clusters of these representations (Sohoni et al., 2020) or the ERM model’s outputs (Liu et al., 2021; Nam et al., 2020) can thus serve as (noisy) pseudolabels for spurious attributes. (2) Both representation metrics correlate with worst-group error, such that a viable way to improve worst-group performance is to improve representation alignment within each class. 4 CORRECT-N-CONTRAST (CNC) We now present CNC, a two-stage method to improve worst-group performance and robustness to spurious correlations, without requiring training group labels. Similar to prior works (Sohoni et al., 2020; Liu et al., 2021), our first stage trains an ERM model (with proper regularization1) on the training set, ultimately to infer group labels based on samples’ spurious attributes. 1As we train on the same data set we infer the groups on, regularization (via high weight decay or early stopping) is purely to prevent the ERM model from memorizing the class labels. This is standard practice also discussed in Sohoni et al. (2020); Liu et al. (2021). We show in Sec. 5.3 that we do not require the ERM model to perfectly learn the spurious attributes for CNC to substantially improve robustness in practice. Algorithm 1 Correct-N-Contrast (CNC) Input: Training data set (X,Y ); # positives M ; # negatives N ; learning rate η, # epochs K. Stage 1: ERM Training 1: Train a regularized ERM model fθ̂ on (X,Y ); save the predictions ŷi := fθ̂(xi). Stage 2: Supervised contrastive learning 2: for each epoch 1, . . . ,K do 3: for each anchor (x, y) ∈ (X,Y ) do 4: Let ŷ be the predicted (group) label of x from Stage 1’s ERM model. 5: Get M positives {(x+m, y+m)} where y+m = y but ŷ+m 6= ŷ, for m = 1, . . . ,M . 6: Get N negatives {(x−q , y−q )} where y−q 6= y but ŷ−q = ŷ, for q = 1, . . . , N . 7: Update fθ by θ ← θ − η · ∇L̂(fθ;x, y) (cf. Eq. (7)) with anchor, M positives, and N negatives. return final model fθ from Stage 2, and throw away the ERM model from Stage 1. The key difference is our second stage: we aim to train a more robust model by learning representations such that samples in the same class but different groups are close to each other. We use contrastive learning, as intuitively by treating samples with the same class but different spurious attributes as distinct “views” of the same class, we train the second stage model to “pull together” these samples’ representations and ignore the different spurious features. This is also inspired by Wang & Isola (2020); Robinson et al. (2021), who show that minimizing the contrastive loss improves representation alignment between distinct “views”. Later in Sec. 5.1, we verify that CNC indeed reduces L̂align(fθ; y) substantially. We include further details on both stages below, and summarize CNC in Algorithm 1. Stage 1: ERM training. 
We train an initial model fθ̂ on the training data set {(xi, yi)} n i=1 with ERM and regularization, and save its predictions {ŷi}ni=1 on the training data points. We consider two ways to source predictions: using the ERM model’s outputs, and clustering its last hidden-layer representations. Both approaches aim to accomplish the same goal of exploiting the ERM model’s learned spurious correlations; further details are in Appendix E.2. Stage 2: Contrastive learning (CL). Next, we train a robust model with supervised contrastive learning using the ERM predictions. While CNC is inspired by recent CL works (Chen et al., 2020; Khosla et al., 2020), we introduce new “contrastive batch” sampling and optimization objectives. Contrastive batch sampling. As described in Sec. 2, contrastive learning requires sampling anchors, positives, and negatives with the general form {x}, {x+}, {x−}. Here, we wish to sample points such that by maximizing the similarity between anchors and positives (and keeping anchors and negatives apart), the Stage 2 model “ignores” spurious similarities while learning class-consistent dependencies. With prediction set {ŷi}ni=1, for each batch we randomly sample an anchor xi ∈ X (with label yi and ERM prediction ŷi), M positives with the same class as yi but a different ERM model prediction than ŷi, and N negatives with different classes as yi but the same ERM model prediction as ŷi. For more signal per batch, we double pairwise comparisons by switching anchor and positive roles. Optimization objective and updating procedure. While our core objective is to learn aligned representations via contrastive learning, we also wish to train the full model to classify datapoints correctly. As we have the training class labels, we jointly update both the model’s encoder layers fenc with a standard contrastive loss, and the full model fθ with a cross-entropy loss: L̂(fθ;x, y) = λL̂supcon(fenc;x, y) + (1− λ)L̂cross(fθ;x, y). (7) In the above, L̂supcon(fenc;x, y) is the supervised contrastive loss of x along with its positive and negative samples, similar to Eq. (3) (see Eq. (16) in Sec. C.2 for the full equation); L̂cross(fθ;x, y) is averaged cross-entropy loss over x, the M positives, and the N negatives; λ ∈ [0, 1] is a balancing hyperparameter. As a remark, the loss objective (7) uses a single anchor in each batch in our setting. To calculate the loss, we first forward propagate one batch ( xi, {x+m}Mm=1, {x−q }Nq=1 ) through fenc and normalize them to obtain representation vectors ( zi, {z+m}Mm=1, {z−q }Nq=1 ) . To learn closely aligned zi and z+ for all {z+m}Mm=1, we update fenc with the L̂ sup out (x; fenc) loss. Finally, we also pass the unnormalized outputs of the encoder fenc to the classifier layers fcls, and compute a batch-wise cross-entropy loss L̂cross(fθ) using each batch sample’s class labels and fθ’s outputs. Due to space constraints, we include further implementation details and sampling considerations in Appendix C. 5 EXPERIMENTAL RESULTS We conduct experiments to answer the following questions: (1) Does CNC improve worst-group performance over prior state-of-the-art methods on data sets with spurious correlations? (2) Does CNC actually encourage learning hidden layer representations with greater alignment and class-labelonly dependence? How is this impacted by the strength of a spurious correlation in the data? (3) Does CNC require perfectly predicting the spurious attribute to work well in practice? 
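To make Stage 2 concrete, below is a minimal sketch of one training step: sample a contrastive batch using the saved Stage 1 predictions, then combine the supervised contrastive term with the cross-entropy term as in Eq. (7). All names, tensor shapes, and the values of M, N, λ, and τ are our own placeholders (and edge cases such as empty candidate pools are ignored); this is an illustration under those assumptions, not the released implementation.

```python
import numpy as np
import torch
import torch.nn.functional as F

def sample_cnc_batch(y, y_hat, rng, M=4, N=4):
    """Pick an anchor, M positives (same class, different ERM prediction),
    and N negatives (different class, same ERM prediction).

    y:     (n,) ground-truth class labels
    y_hat: (n,) class predictions saved from the Stage 1 ERM model
    """
    i = rng.integers(len(y))  # anchor index
    pos = rng.choice(np.where((y == y[i]) & (y_hat != y_hat[i]))[0], M, replace=False)
    neg = rng.choice(np.where((y != y[i]) & (y_hat == y_hat[i]))[0], N, replace=False)
    return i, pos, neg

def cnc_loss(encoder, classifier, x_anchor, x_pos, x_neg, y_anchor, y_pos, y_neg,
             lam=0.75, tau=0.1):
    """Joint Stage 2 objective (cf. Eq. 7) for a single anchor."""
    # Unnormalized encoder outputs feed the classifier for the cross-entropy term.
    h = torch.cat([encoder(x_anchor), encoder(x_pos), encoder(x_neg)])  # (1+M+N, d)
    ce = F.cross_entropy(classifier(h), torch.cat([y_anchor, y_pos, y_neg]))

    # Normalized encoder outputs are used for the supervised contrastive term (Eq. 3).
    z = F.normalize(h, dim=1)
    z_a, z_p, z_n = z[0], z[1:1 + len(y_pos)], z[1 + len(y_pos):]
    sim_p, sim_n = z_p @ z_a / tau, z_n @ z_a / tau
    supcon = -(sim_p - torch.logsumexp(torch.cat([sim_p, sim_n]), dim=0)).mean()

    return lam * supcon + (1 - lam) * ce
```

A single optimizer step on this loss then corresponds to the update θ ← θ − η∇L̂(fθ; x, y) in Algorithm 1.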
Our results for each question follows in the next three subsections (5.1, 5.2, and 5.3). Due to space constraints, we defer ablations on CNC’s design choices, including the representation-learning objective and sampling procedure, to Appendix A. Additional comparison to alignment methods proposed for domain adaptation but adjusted for our setting are in Appendix A.2. Below, we briefly describe the benchmark data sets used in this section. We run CMNIST∗ with pcorr = 0.995. Further details on data sets, models, and experimental hyperparameters are deferred to Appendix E. Waterbirds (Sagawa et al., 2019): We classify Y = {waterbird, landbird}, where 95% of images have the same bird type and background A = {water background, land background}. CelebA (Liu et al., 2015): We classify celebrities’ hair colorY = {blond, not blond}withA = {male, female}. Only 6% of blond celebrities in the data set are male. CivilComments-WILDS (Borkan et al., 2019; Koh et al., 2021): We classify Y = {toxic, not toxic} comments. A denotes whether the comment mentions one of eight demographic identities. 5.1 CNC IMPROVES WORST-GROUP PERFORMANCE To study (1), we evaluate CNC on image classification and NLP data sets with spurious correlations. As baselines, we compare against standard ERM and an oracle GDRO approach that assumes access to the group labels. We also compare against recent methods that tackle spurious correlations without requiring group labels: CVaR DRO (Levy et al., 2020), GEORGE (Sohoni et al., 2020), Learning from Failure (LfF) (Nam et al., 2020), Predictive Group Invariance (PGI) (Ahmed et al., 2021), Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021), Contrastive Input Morphing (CIM) (Taghanaki et al., 2021), and Just Train Twice (JTT) (Liu et al., 2021). We also compare against a CNC version without the Stage 1 ERM model, instead only sampling positives and negatives based on class (denoting this SupCon*). Results are reported in Table 1. CNC achieves highest worst-group accuracy among all methods without training group labels on the CMNIST∗ Waterbirds and CelebA data sets, while also obtaining near-SoTA worst-group accuracy on CivilComments. While LfF, GEORGE, PGI, EIIL, and JTT similarly use a trained ERM model to estimate groups, CNC uniquely uses ERM predictions to encourage the robust model to learn desirable representations via contrastive learning. We reason that with this approach, by sampling positives and negatives from the ERM predictions, CNC more directly encourages the robust model to ignore learnable spurious correlations compared to previous invariant learning, input transformation, or upweighting approaches. We include additional evidence of this via GradCAM visualizations in Appendix G. 5.2 CNC LEARNS REPRESENTATIONS LESS RELIANT ON SPURIOUS FEATURES To shed light on CNC’s worst-group accuracy gains, we investigate if models trained with CNC actually learn representations with higher alignment. Compared to ERM and JTT (the next-best performing method that does not require subgroup labels), CNC learns representations with significantly higher alignment (lower alignment loss) and lower mutual information with spurious attributes (while having comparable mutual information with class labels) (Fig. 5 and Fig. 7). We find that CNC representations exhibit the lowest alignment loss consistently for these data sets; this also corresponds to CNC models achieving the highest worst-group accuracy. 
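The alignment comparisons in this subsection can be reproduced conceptually in a few lines: given encoder features split by group, the empirical alignment loss of Eq. (4) (and the per-class maximum of Eq. (5)) is an average pairwise Euclidean distance. The sketch below is our own illustration with assumed tensor shapes and function names, not the evaluation code used for the figures.

```python
import torch

def alignment_loss(feats_g, feats_g_prime):
    """Empirical cross-group alignment loss (cf. Eq. 4).

    feats_g:       (|G|, d)  encoder outputs f_enc(x) for group g = (y, a)
    feats_g_prime: (|G'|, d) encoder outputs for group g' = (y, a'), with a != a'
    """
    # Average over all pairwise Euclidean distances between the two groups.
    return torch.cdist(feats_g, feats_g_prime, p=2).mean()

def class_alignment_loss(group_feats):
    """Largest cross-group alignment loss among groups sharing a class (cf. Eq. 5).

    group_feats: list of (n_g, d) tensors, one per group with the same class label.
    """
    losses = [alignment_loss(a, b)
              for i, a in enumerate(group_feats)
              for b in group_feats[i + 1:]]
    return max(losses)
```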
Furthermore, while all methods result in representations that exhibit high mutual information with the class label (Fig. 5b), only CNC results in representations that drastically reduce mutual information with spurious attributes (Fig. 5c). In Fig. 6, we also illustrate this result on the Waterbirds data set via UMAP visualizations of the learned representations. Notably, all training methods result in representations separable by class label. Yet ERM models exhibit strong separability by spurious attributes, and JTT models interestingly also still depict some learned dependency on the spurious attribute. However, CNC uniquely learns representations that strongly depict class-label-only dependence. In addition, to study how this relation between representation metrics and worst-group accuracy scales with the strength of the spurious correlation, we compute representation metrics with CNC, ERM, and JTT models trained on increasingly spurious (↑ pcorr) CMNIST∗ data sets in Fig. 7. We observe that with high spurious correlations, ERM fails to classify digits in the minority classes, while CNC and JTT comparably maintain high worst-group accuracy. CNC also performs better in more spurious settings (pcorr > 0.95). These improvements over ERM are reflected by drops in alignment loss (averaged over classes); CNC consistently achieves lowest such loss. Fig. 7c shows that CNC’s learned representations maintain a more favorable balance of mutual information between the class label and spurious attribute than JTT. While JTT models exhibit slightly higher estimated I(Y ;Z) than CNC models, CNC models exhibit much lower dependence on the spurious attribute. 5.3 UNDERSTANDING CNC’S SENSITIVITY TO STAGE 1 PREDICTIONS Finally, we study how sensitive CNC is to how closely the Stage 1 ERM model actually predicts the spurious attribute. As JTT also relies on an initial ERM model’s predictions, we compare CNC to JTT in this regard. We find that CNC is more robust to noisy ERM predictions than JTT, and that CNC does not require perfectly inferred groups to perform well. We first conduct an ablation on CNC and JTT’s worst-group and average performance in Fig. 7d with the following synthetic experiment. On CMNIST∗, we start with the true spurious attribute labels as the Stage 1 “predictions". We then gradually degrade their quality as follows: for each point, with 6 RELATED WORK We build on prior work in group robustness and contrastive learning. Further discussion is in App. D. Robustness to group shift. A variety of approaches aim to improve performance on minority data groups. If group labels are known, many works minimize a rebalanced error similar in motivation to correcting class imbalance (He & Garcia, 2009; Cui et al., 2019) or importance weighting (Shimodaira, 2000; Byrd & Lipton, 2019). More recently, Sagawa et al. (2019) minimize worst-group loss during training. Goel et al. (2020) achieve further lift by synthetically generating additional minority group points. Cao et al. (2019) regularize updates on minority groups to improve their generalization. Another line of work aims to improve group robustness without assuming group labels for the training data. The most similar methods to CNC first train an initial ERM model with class labels as a way to infer groups, and then use these groups to train a second model with better worst-group performance. GEORGE (Sohoni et al., 2020) clusters ERM representations, and runs GDRO with these clusters as inferred groups. 
EIIL (Creager et al., 2021) and PGI (Ahmed et al., 2021) infer groups that maximally violate an invariance objective for the ERM model. With these groups EIIL uses either GDRO or Invariant Risk Minimization (Arjovsky et al., 2019) to train a second robust model, while PGI minimizes the KL divergence of the softmaxed logits for samples in the same class but different groups. LfF (Nam et al., 2020) use a generalized cross-entropy loss to encourage misclassifying minority groups, concurrently training a second model with these datapoints upweighted. JTT (Liu et al., 2021) trains via ERM for a few epochs, before training a second ERM model with incorrect datapoints upsampled. For image data sets, CIM (Taghanaki et al., 2021) trains a transformation network to remove potentially spurious attributes from input features. Contrastive learning (CL). CL works by predicting whether two inputs are “similar” or “dissimilar” (Le-Khac et al., 2020). This involves specifying batches of anchor and positive datapoints similar to each other (as different “views” of the same source or input), and negatives depicting dissimilar points. An encoder is trained to simultaneously maximize the similarity between the feature representations of anchors and positives, and minimize similarity between anchor and negative representations. In unsupervised CL, “negatives” are often sampled uniformly (Bachman et al., 2019), while “positives” are different views of the same object, e.g. via data augmentation (Chen et al., 2020). In supervised CL, negatives are different-class points and positives are same-class points (Khosla et al., 2020). In CNC, we instead treat same-class points with different ERM predictions as positives, and differentclass points with the same ERM prediction as negatives. This naturally provides “hard negative mining,” a challenge for standard CL (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). 7 CONCLUSION We present CNC, a two-stage CL approach to learn representations robust to spurious correlations. We theoretically analyze the connection between alignment and worst-group vs. average-group losses, and show that CNC achieves SOTA or near-SOTA worst-group accuracy across several benchmarks. ETHICS STATEMENT We hope that our work is another step towards the important goal of making machine learning models more fair and robust. However, while our work successfully improves worst-group accuracy, this is not necessarily an end-all be-all metric - other fairness-based metrics may be more suitable in certain settings. Also, misuse of metrics could lead to potential harm. To avoid these pitfalls, it is important for practitioners to understand the limitations and tradeoffs of different metrics, including when applying methods such as ours. REPRODUCIBILITY STATEMENT We have submitted our code as part of the supplementary materials. The datasets we use are publicly available (with the exception of CMNIST∗ which is a modification of the standard MNIST dataset (LeCun et al., 2010); our code to generate this modified dataset is also included). In addition to the details provided in Section 5, further implementation, dataset, and experimental details can be found in Appendix E. For the theory, we include complete proofs of all claims in Appendix B. A ADDITIONAL BENCHMARK COMPARISONS AND ABLATIONS In this section, we include further experiments comparing CNC against additional related methods. We also include additional ablations to study the importance of CNC’s presented design choices. 
A.1 COMPARISON TO MINIMIZING THE ALIGNMENT LOSS DIRECTLY In Sec. 5.1 and Sec. 5.2, we empirically showed that CNC’s contrastive loss and hard positive and negative sampling lead to improved worst-group accuracy and greater representation alignment. We now study how CNC performs if instead of the contrastive loss, we train the Stage 2 model to minimize Lalign directly. With this objective, we aim to minimize the Euclidean distance between samples in different inferred groups but the same class. We keep all other components of CNC consistent, and apply Lalign to the anchor and positive samples in each contrastive batch. We report results on CMNIST∗, Waterbirds, and CelebA in Table A.1. We find that CNC with the default contrastive loss outperforms CNC with the alignment loss. We reason that an advantage of the contrastive loss (and specifically the “hard” positive and negative samples), is that it encourages aligning samples with the same class label but different spurious features, and pushes apart hard negative samples with different class labels but similar spurious features. This provides additional signal for improving separation between the different classes, so the robust model only learns to rely on ground-truth-specific features for discriminating between datapoints. On the other hand, the Lalignment objective does not incorporate these hard negatives. A.2 COMPARISON TO REPRESENTATION ALIGNMENT METHODS FOR DOMAIN GENERALIZATION AND ADAPTATION While our main results in Table 1 compare against methods designed to tackle the spurious correlations setting presented in Section 5.1, we now study how CNC fares against existing representation alignment methods proposed in the domain generalization (DG) and unsupervised domain adaptation (UDA) literature. At a high level, a popular idea in DG and UDA is to learn similar representations for datapoints with the same class but sampled from different domains, e.g. via adversarial training to prevent another model from classifying representations’ source domains correctly (Ganin et al., 2016), or minimizing representation differences via metrics such as maximum mean discrepancy (MMD) (Li et al., 2018). While DG and UDA carry distinct problem settings and assumptions from our spurious correlations setting (c.f. Appendix D.4), we aim to understand if existing representation alignment methods can train models robust to spurious correlations, and compare their performance with CNC. We first explain our protocol for evaluating these methods, and then discuss results. We carry out our evaluation with domain-adversarial neural networks (DANN) Ganin et al. (2016), a seminal UDA method that aims to learn aligned representations across two domains. To do so, DANN jointly trains a model to classify samples from a “source” domain while preventing a separate “domain classifier” module from correctly classifying the domain for datapoints sampled from both domains. For fair comparison, we use the same ResNet-50 backbone as in CNC, and make several adjustments to the typical DANN and UDA procedure: 1. While UDA assumes that the data is organized into “source” and “target” domains, we do not have domain labels. We thus infer domains using the predictions of an initial ERM model as in CNC. 2. The notion of a domain may also be ambiguous with respect to the groups defined in Section 2. For example, domains may be defined by spurious attributes (e.g., for the Waterbirds dataset, we may consider the “water background” domain and the “land background” domain). 
Domains may alternatively be defined by whether samples carry dominant spurious correlations or not (e.g., the “majority group” domain and the “minority group” domain). We train and evaluate separate DANN models for both interpretations. We infer the former by the predicted class of the initial ERM model. We infer the latter by whether the initial ERM model is correct or not. 3. Finally, UDA aims to train with a class-labeled “source” domain and an unlabeled “target” domain such that a model performs well on unseen samples from the specified “target” domain (Ganin et al., 2016). However, our benchmarks have class labels for all training points, and do not have a notion of “source” and “target” domains (we aim to obtain high worst-group accuracy, which could fall under any domain). We thus assume access to labels for all domains. During training, the goal for our DANN models is to correctly classify samples from both domains, while learning representations such that a jointly trained domain classifier module cannot determine the samples’ domains from their representations alone. At test-time, we evaluate the DANN model on the entire test set for each benchmark, and report the worst-group and average accuracies. In Table A.2, we report the worst-group and average accuracies of DANN on the Waterbirds and CelebA datasets across three seeds along with the CNC results. Our results suggest that the domain alignment in DANN is not sufficient to improve worst-group accuracy. We hypothesize this is due to adversarial training with the domain classifier aligning representations without regard to different classes within each domain. Due to the propensity of samples exhibiting spurious correlations, DANN models may thus still learn to rely on these correlations. A.3 IMPORTANCE OF ERM-GUIDED CONTRASTIVE SAMPLING In this section we conduct additional ablations on the sampling procedure in CNC. Although CNC relies on an initial trained ERM model’s predictions, can we still improve worst-group accuracy without this step and with supervised contrastive learning alone, i.e. by sampling positives uniform randomly from all datapoints with the same label as the anchor? In Table 1, we showed that this approach (denoted SupCon∗) led to a drop in worst-group accuracy. Taking this question further, while we use the Stage 1 ERM model’s predictions to sample “hard” negatives with different groundtruth classes and the same ERM predictions as their anchors—such that to reduce the contrastive loss and learn dissimilar representations for anchors and negatives, the Stage 2 contrastive model must thus learn to ignore spurious features that the initial ERM model learns to depend on—how does CNC’s performance fare with alternative negative sampling procedures? Keeping the anchor and positive sampling consistent, we perform additional ablations where we either sample negatives only by having different classes as their anchors, or sample negatives only be having the same ERM model prediction as their anchors. We report these results in Table A.3 below. We find that the default CNC sampling procedure obtains highest worst-group accuracy and highest or near-highest average accuracy compared to alternative strategies across the CMNIST∗, Waterbirds, and CelebA datasets. The results suggests that inferring the spurious attributes (e.g. via an initial ERM model) is important for CNC, and that CNC benefits from using these predictions for sampling both negatives and positives. 
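For clarity, the sketch below spells out the candidate negative pools compared in this ablation (reported in Table A.3 after the next paragraph). The helper name and array inputs are our own assumptions, with y the class labels and y_hat the saved Stage 1 ERM predictions.

```python
import numpy as np

def negative_pool(y, y_hat, i, strategy="cnc_default"):
    """Candidate negative indices for anchor i under the compared sampling strategies."""
    if strategy == "different_class":   # ablation: only require a different class
        return np.where(y != y[i])[0]
    if strategy == "same_prediction":   # ablation: only require the same ERM prediction
        return np.where(y_hat == y_hat[i])[0]
    # CNC default: different class AND the same ERM prediction as the anchor.
    return np.where((y != y[i]) & (y_hat == y_hat[i]))[0]
```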
We reason this is because without this sampling, we can actually encourage the Stage 2 model to rely on spurious correlations. For example, if we just ensure that the anchor and negative samples have different classes, then the contrastive model may just rely on the different spurious features of the anchors and negatives to learn dissimilar representations. However, by ensuring that the anchors and negatives have similar spurious features (via the same trained ERM model prediction), the contrastive model is forced to rely on non-spurious features to learn dissimilar representations for the samples. The same logic applies for learning similar representations for anchor and positive samples. We suspect that choosing negatives from all samples with the same ERM prediction as their anchors performs better than the other ablations as it alone does not encourage learning spurious correlations: the model is asked to "pull apart" samples with the same spurious features, and so must ignore spurious similarities to recognize something different between anchors and negatives. However, this ablation does not ensure that anchor-negative pairs consist of different classes (which our full method does), so the model gets less signal to separate samples by class.

Table A.3: Negative sampling ablations. Worst-group (WG) and average (Avg) accuracy (%), standard deviations in parentheses, on CMNIST∗, Waterbirds, and CelebA.

                                 CMNIST∗                 Waterbirds              CelebA
                                 WG          Avg         WG          Avg         WG          Avg
Negatives by different class     66.4 (5.1)  86.0 (1.6)  82.2 (0.8)  88.9 (0.3)  79.2 (0.3)  88.0 (0.1)
Negatives by same prediction     70.0 (5.1)  87.1 (1.1)  85.7 (1.3)  90.3 (0.2)  81.1 (1.4)  88.5 (0.3)
SupCon∗                           0.0 (0.0)  22.4 (1.2)  71.0 (1.9)  85.9 (0.8)  62.2 (1.1)  90.0 (0.1)
CNC (default)                    77.4 (3.0)  90.9 (0.6)  89.7 (0.2)  90.8 (0.1)  88.8 (0.9)  89.9 (0.5)

A.4 ADDITIONAL DESIGN CHOICE ABLATIONS

We first summarize CNC's design choices and differences from standard supervised contrastive learning in Appendix A.4.1. We then empirically validate each component in Appendix A.4.2.

A.4.1 SUMMARY OF CNC DESIGN CHOICES AND PROPERTIES

No projection network. As we wish to learn data representations that maximize the alignment between anchor and positive datapoints, we do not compute the contrastive loss with the outputs of an additional nonlinear projection network. This is inspired by the logic justifying a projection head in prior contrastive learning, e.g. SimCLR (Chen et al., 2020), where the head is included because the contrastive loss trains representations to be "invariant to data transformation" and may encourage removing information "such as the color or orientation of objects". In our case, we view inferred datapoints with the same class but different spurious attributes as "transformations" of each other, and we hypothesize that removing these differences can help us improve worst-group performance.

Two-sided contrastive sampling. To incorporate additional comparisons between datapoints that only differ in spurious attribute during training, we employ "two-sided" contrastive batch sampling. This lets us equally incorporate instances where the second contrastive model in CNC treats datapoints that the initial ERM model got incorrect and correct as anchors.

Additional intrinsic hard positive/negative mining. Because the new model corrects for potentially learned spurious correlations by only comparing and contrasting datapoints that differ in class label or spurious attribute, but not both (as dictated by the initial ERM model's outputs), the contrastive batches naturally carry "hard" positives and negatives.
Thus, our approach provides a natural form of hard negative mining (in addition to the intrinsic hard positive / negative mining at the gradient level with InfoNCE-style contrastive losses (Chen et al., 2020; Khosla et al., 2020)) while avoiding class collisions, two nontrivial challenges in standard self-supervised contrastive learning (Robinson et al., 2021; Wu et al., 2021; Chuang et al., 2020). Joint training of encoder and classifier layers. CNC can train any standard classification model architecture; for any given neural network we just apply different optimization objectives to the encoder and classifier layers. We train both the encoder and classifier layers with a cross-entropy loss, and jointly train the encoder layer with a supervised contrastive loss. For the encoder layers, we balance the two objectives with a hyperparameter λ (c.f. Eq. 7). A.4.2 EMPIRICAL VALIDATION OF CNC COMPONENTS To validate the additional algorithmic components of CNC, we report how CNC performs on the Waterbirds dataset when modifying the individual design components. We use the same hyperpa- rameters as in the main results, and report accuracies as the average over three training runs for the following ablations. Table A.4 summarizes that across these design ablations, default CNC as presented consistently outperforms these alternative implementations. No projection head. We incorporate a nonlinear projection head as is typical in prior contrastive learning works (Chen et al., 2020), that maps the encoder output to lower-dimensional representations (from 2048 to 128 in our case). We then update the encoder layers and the projection head jointly by computing the contrastive loss on the projection head’s output, still passing the encoder layer’s direct outputs to the classifier to compute the cross-entropy loss. We note that using the projection head decreases worst-group accuracy substantially. We reason that as previously discussed, while using the projection head in prior work can allow the model to retain more information in its actual hidden layers (Chen et al., 2020), in our case to remove dependencies on spurious attributes we actually want to encourage learning invariant representations when we model the differences between anchor and positive datapoints as due to spurious attributes. Two-sided contrastive batches. Instead of “two-sided” contrasting where we allow both sampled anchors and positives to take on the anchor role, for each batch we only compute contrastive updates by comparing original positives and negatives with the original anchor. When keeping everything else the same, we find that just doing these one-sided comparisons also leads to a drop in performance for worst-group accuracy. This suggests that the increased number of comparisons and training setup where we swap the roles of anchors and positives of the two-sided batches introduces greater contrastive learning signal. Additional intrinsic hard positive/negative mining. We discuss this ablation in Section A.3. Joint training of encoder and classifier layers. Instead of training the full model jointly, we first only train the encoder layers with the contrastive loss in CNC, before freezing these layers and finetuning the classifier layers with the cross-entropy loss. With this implementation, we also obtain noticeable drop in performance. 
While we leave further analysis for the joint cross-entropy and contrastive optimization for future work, one conjecture is that the cross-entropy loss may aid in learning separable representations while also training the full model to keep the average error small. From our theory, the contrastive loss can help bound the gap between worst-group and average error. Thus we try to minimize average error in the same parameter update. This also follows prior work, where updating the entire model and finetuning all model parameters instead of freezing the encoder layers leads to higher accuracy (Chen et al., 2020). However, we found that with an initial encoder-only training stage, if we did not freeze the trained layers the fine-tuning on a dataset with spurious correlations would “revert” the contrastive training, resulting in a large gap between worst-group and average error similar to ERM. We also ablate the balancing hyperparameter λ of CNC on CMNIST∗. In Table A.5 we find that CNC consistently achieves high worst-group accuracy across a wide range of λ ∈ [0.4, 0.9]. For reference, the next best methods GEORGE and JTT obtain 76.4% and 74.5% worst-group accuracy. B OMITTED PROOFS FROM SECTION 3.2 In this section, we prove that within any class, the gap between the worst-group error and the average error can be upper bounded by the alignment loss times the Lipschitz constant, plus another concentration error term. Proof of Theorem 3.1. Consider two arbitrary groups, denoted by g1 = (y, a1) and g2 = (y, a2), whose class labels are both y ∈ Y , whose spurious attributes are a1 ∈ A and a2 ∈ A such that a1 6= a2. Let G1 and G2 be the subset of training data that belong to groups g1 and g2, respectively. We note that both G1 and G2 are non-empty since we have assumed that (in Section 2) there is at least one sample from each group in the training data set. Let ng1 = |G1| and ng2 = |G2| be the size of these two groups, respectively. Recall that fenc denotes the mapping of the encoder layers of the full neural network model fθ. Since the classification layer fcls is a linear layer, we have used W to denote the weight matrix of this layer. Our definition of the cross-group alignment loss in equation (5), denoted as L̂align(fθ; y), implies that for g1 and g2, 1 ng1 1 ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 ‖fenc(x)− fenc(x′)‖2 ≤ L̂align(fθ; y). (8) Next, let E(x,y,a1)∼Pg1 [Lavg(Wfenc(x), y)] be the average loss conditioning on a data point being sampled from group g1 (and similarly for group g2). Let ∆(g1, g2) be the difference between the population average losses: ∆(g1, g2) = ∣∣∣∣∣ E(x,y,a1)∼Pg1 [Lavg(Wfenc(x), y]− E(x,y,a2)∼Pg2 [Lavg(Wfenc(x), y)] ∣∣∣∣∣. Recall that Gy ⊆ G is the set of groups that have class label y. Since the loss `(·) is bounded above by some fixed constant C2 according to our assumption, and is at least zero, by the Hoeffding’s inequality, the following result holds with probability at least 1− δ, for all |Gy| groups g ∈ Gy ,∣∣∣∣∣∣ E(x,y,a)∼Pg [Lavg(Wfenc(x), y)]− 1ng ∑ (x,y)∈(X,Y ) `(Wfenc(x), y) ∣∣∣∣∣∣ ≤ C2 √ 2 log (|Gy| /δ) ng . (9) Thus, with probability at least 1 − δ, the following holds for any g1 and g2 in class y (but having different spurious attributes) ∆(g1, g2) ≤ ∣∣∣∣∣∣ 1ng1 ∑ (x,y,a1)∈G1 Lavg(Wfenc(x), y)− 1 ng2 ∑ (x′,y,a2)∈G2 Lavg(Wfenc(x′), y) ∣∣∣∣∣∣ (10) + C2 (√ 2 log(|Gy| /δ) ng1 + √ 2 log(|Gy| /δ) ng2 ) . Next, we focus on the RHS of equation (10). 
First, equation (10) is also equal to the following:∣∣∣∣∣∣ 1ng1 1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 `(Wfenc(x), y))− 1 ng1 1 ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 `(Wfenc(x ′), y)) ∣∣∣∣∣∣ . Since we have also assumed that the loss function `(x, y) is C1-Lipschitz in x2, the above is at most:∣∣∣∣∣∣ 1ng1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 |`(Wfenc(x), y)− `(Wfenc(x′), y)| ∣∣∣∣∣∣ ≤ 1 ng1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 C1 · ‖Wfenc(x)−Wfenc(x′)‖2 (since y is the same for x, x′) ≤ B ng1ng2 ∑ (x,y,a1)∈G1 ∑ (x′,y,a2)∈G2 C1 · ‖fenc(x)− fenc(x′)‖2 (because ‖W‖2 ≤ B as assumed) ≤B · C1 · L̂align(fθ; y). (because of equation (8)) 2In other words, we assume that |`(z, y)− `(z′, y)| ≤ C1 · ‖z − z′‖2, for any z, z′ and y. Thus, we have shown that for any g1 and g2 within class y, ∆(g1, g2) ≤ B · L̂align(fθ; y) + (√ 2 log(|Gy| /δ) ng1 + √ 2 log(|Gy| /δ) ng2 ) ≤ B · C1 · L̂align(fθ; y) + max g∈Gy C2 · √ 8 log(|Gy| /δ) ng . (11) Finally, we use the above result to bound the gap between the worst-group loss and the average loss. For every group g ∈ G, let pg denote the prior probability of observing a sample from P in this group. Let qy = ∑ g′∈Gy pg′ . Let h(g) be a short hand notation for h(g) = E (x,y,a)∼Pg [Lavg(Wfenc(x), y)] . The average loss among the groups with class label y is Lavg(fθ; y) = ∑ g∈Gy pg qy h(g). The worstgroup loss among the groups with class label y is Lwg(fθ; y) = maxg∈Gy h(g). Let g? be a group that incurs the highest loss among groups in Gy . We have Lwg(fθ; y)− Lavg(fθ; y) is equal to h(g?)− ∑ g∈Gy pg qy h(g) = ∑ g∈Gy pg qy (h(g?)− h(g)) (12) ≤ ∑ g∈Gy pg qy ∆(g?, g) (13) ≤B · C1 · L̂align(fθ; y) + max g∈Gy C2 · √ 8 log(|G| /δ) ng . (14) The last step uses equation (11) on ∆(g?, g) and the fact that qy = ∑ g′∈Gy pg′ . Thus, we have shown that the gap between the worst-group loss and the average loss among the groups with the same class label is bounded by the above equation. The proof is now complete. The astute reader will note that Theorem 3.1 focuses on comparing groups within the same class y, for any y ∈ Y . A natural follow-up question is what happens when comparing across groups with different labels. Let Lwg(fθ) = maxy∈Y Lwg(fθ; y) be the worst-group loss across all the labels. Recall that Lavg(fθ) is the average loss for the entire population of data. We generalize Theorem 3.1 to this setting in the following result. Corollary B.1 (Extension of Theorem 3.1 to compare across different classes). In the setting of Theorem 3.1, let qy = ∑ g∈Gy pg be the prior probability of observing a sample drawn from P with label y, for any y ∈ Y . We have that with probability at least 1− δ, the following holds: Lwg(fθ) ≤ ( min y∈Y qy )−1 Lavg(fθ) +B · C1 ·max y∈Y L̂align(fθ; y) + max g∈G C2 · √ 8 log(|G| /δ) ng . (15) Proof. We generalize the argument in the previous result to compare across different labels. The worst-group loss across different labels is max y∈Y max g∈Gy h(g) ≤max y∈Y ∑ g∈Gy pg qy h(g) +B · C1L̂align(fθ; y) + max g∈Gy C2 √ 8 log(|Gy| /δ) ng (because of equation (14)) ≤ 1 miny∈Y qy ∑ g∈Gy pgh(g) +B · C1 max y∈Y L̂align(fθ; y) + max g∈G C2 √ 8 log(|G| /δ) ng . Since ∑ g∈G pgh(g) = Lavg(fθ), we thus conclude that Lwg(fθ) ≤ ( min y∈Y qy )−1 Lavg(fθ) +B · C1 max y∈Y L̂align(fθ; y) + max g∈G C2 √ 8 log(|G| /δ) ng . The proof is now complete. An example showing that Corollary B.1 is tight. We describe a simple example in which the factor( miny∈Y qy )−1 in equation (15) is tight (asymptotically). Suppose there are k perfectly balanced classes so that qy = 1/k, for every y ∈ Y . 
There is one data point from each class, with loss equal to 0 for all except one of them. The worst-group loss is 1 whereas the average loss is 1/k. Thus, there is a factor of k between the worst-group loss and the average loss. For equation (15), the factor( min y∈Y qy )−1 = k, since qy = 1/k for every y ∈ Y in this example. Thus, this factor matches the (multiplicative) factor between the worst-group loss and the average loss in this example. C CONTRASTIVE ALGORITHM DESIGN DETAILS In this section, we provide further details on the training setup and contrastive batch sampling, pseudocode, and additional properties related to CNC’s implementation. C.1 TRAINING SETUP In Fig. 8, we illustrate the two training stages of Correct-N-Contrast described in Sec. 4. In Stage 1, we first train an ERM model with a cross-entropy loss. For consistency with Stage 2, we depict the output as a composition of the encoder and linear classifier layers. Then in Stage 2, we train a new model with the same architecture using contrastive batches sampled with the Stage 1 ERM model and a supervised contrastive loss (3) (which we compute after the depicted representations are first normalized) to update the encoder layers. Note that unlike prior work in contrastive learning (Chen et al., 2020; Khosla et al., 2020), as we have the class labels of the anchors, positives, and negatives, we also continue forward-passing the unnormalized representations (encoder layer outputs) and compute a cross-entropy loss to update the classifier layers while jointly training the encoder. 2048-D 2-D We also note that unlike prior work, we wish to learn invariances between anchors and positives that maximally reduce the presence of features not needed for classification. We thus do not pass the representations through an additional projection network (Chen et al., 2020). Instead, we use Eq. 3 to compute the supervised contrastive loss directly on the encoder outputs z = fenc(x). In Appendix A.4.2, we studied ablations with both design choices. C.2 TWO-SIDED CONTRASTIVE BATCH IMPLEMENTATION We provide more details on our default contrastive batch sampling approach described in Sec. 4. To recall, for additional contrastive signal per batch, we can double the pairwise comparisons in a training batch by switching the anchor and positive roles. This is similar to the NT-Xent loss in prior contrastive learning work (Chen et al., 2020). We switch the role of the anchor and first positive sampled in a contrastive batch, and sample additional positives and negatives using the same guidelines but adjusting for the “new” anchor. We denote this as “two-sided” sampling in contrast with the “one-sided” comparisons we get with just the original anchor, positives, and negatives. Implementing this sampling procedure in practice is simple. First, recall our initial setup with trained ERM model fθ̂, its predictions {ŷi} n i=1 on training data {(xi, yi)}ni=1 (where ŷi = fθ̂(xi)), and number of positives and negatives to sample M and N . We then sample batches with Algorithm 2. Because the initial anchors are then datapoints that the ERM model gets correct, under our heuristic we infer {xi}Mi=1 as samples from the majority group. Similarly the M positives {x+m}Mm=1 and N negatives {x−n }Nn=1 that it gets incorrect are inferred to belong to minority groups. 
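Before writing down the resulting loss, here is a short sketch of how such a two-sided batch could be drawn, based on the description above: the anchor side consists of ERM-correct points, the positives are same-class points the ERM model predicts differently, and each side gets negatives with a different class but the same ERM prediction as that side's anchor. Array names, M, N, and the omission of edge-case handling (e.g., pools smaller than M or N) are our own simplifications, not Algorithm 2 verbatim.

```python
import numpy as np

def sample_two_sided_batch(y, y_hat, rng, M=4, N=4):
    """Sketch of two-sided contrastive batch sampling under assumed inputs.

    y:     (n,) ground-truth class labels
    y_hat: (n,) saved Stage 1 ERM predictions
    """
    correct = (y_hat == y)
    i = rng.choice(np.where(correct)[0])            # anchor the ERM model got right
    same_class = (y == y[i])
    # M anchor-side points: same class, also predicted correctly by the ERM model.
    others = np.where(same_class & correct & (np.arange(len(y)) != i))[0]
    anchors = np.concatenate(([i], rng.choice(others, M - 1, replace=False)))
    # M positives: same class, but a different ERM prediction than the anchor's.
    pos = rng.choice(np.where(same_class & (y_hat != y_hat[i]))[0], M, replace=False)
    # N negatives per side: different class, same ERM prediction as that side's anchor.
    neg = rng.choice(np.where(~same_class & (y_hat == y_hat[i]))[0], N, replace=False)
    neg_swap = rng.choice(np.where(~same_class & (y_hat == y_hat[pos[0]]))[0], N, replace=False)
    return anchors, pos, neg, neg_swap
```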
For one batch, we then compute the full contrastive loss with
$$\hat{L}_{\mathrm{sup\text{-}con}}(f_{\mathrm{enc}}) = \hat{L}_{\mathrm{sup\text{-}con}}\!\left(x_1, \{x^+_m\}_{m=1}^M, \{x^-_n\}_{n=1}^N; f_{\mathrm{enc}}\right) + \hat{L}_{\mathrm{sup\text{-}con}}\!\left(x^+_1, \{x_i\}_{i=1}^M, \{x'^-_n\}_{n=1}^N; f_{\mathrm{enc}}\right) \quad (16)$$
where $\hat{L}_{\mathrm{sup\text{-}con}}\!\left(x_1, \{x^+_m\}_{m=1}^M, \{x^-_n\}_{n=1}^N; f_{\mathrm{enc}}\right)$ is given by:
$$-\frac{1}{M} \sum_{m=1}^M \log \frac{\exp(z_1^\top z^+_m/\tau)}{\sum_{m'=1}^M \exp(z_1^\top z^+_{m'}/\tau) + \sum_{n=1}^N \exp(z_1^\top z^-_n/\tau)} \quad (17)$$

Algorithm 2 Sampling two-sided contrastive batches
Require: Number of positives M and number of negatives N to sample for each batch.
1: Initialize the set of contrastive batches $B = \{\}$
2: for each $x_i \in \{x_i \in X : \hat{y}_i = y_i\}$ do
3: Sample $M-1$ additional “anchors” to obtain $\{x_i\}_{i=1}^M$ from $\{x_i \in X : \hat{y}_i = y_i\}$
4: Sample $M$ positives $\{x^+_m\}_{m=1}^M$ from $\{x^+_m \in X : \hat{y}^+_m \neq y_i,\; y^+_m = y_i\}$
5: Sample $N$ negatives $\{x^-_n\}_{n=1}^N$ from $\{x^-_n \in X : \hat{y}^-_n = \hat{y}_i,\; y^-_n \neq y_i\}$
6: Sample $N$ negatives $\{x'^-_n\}_{n=1}^N$ from $\{x'^-_n \in X : \hat{y}'^-_n = \hat{y}^+_1,\; y'^-_n \neq y^+_1\}$
7: Update the contrastive batch set: $B \leftarrow B \cup \left(\{x_i\}_{i=1}^M, \{x^+_m\}_{m=1}^M, \{x^-_n\}_{n=1}^N, \{x'^-_n\}_{n=1}^N\right)$

and again let $z$ be the normalized output $f_{\mathrm{enc}}(x)$ for the corresponding $x$. We compute the cross-entropy component of the full loss for each $x$ in the two-sided batch with its corresponding label $y$.

D FURTHER RELATED WORK DISCUSSION

We provide additional discussion of related work and connections to our work below.

D.1 IMPROVING ROBUSTNESS TO SPURIOUS CORRELATIONS

Our core objective is to improve model robustness to group or subpopulation distribution shifts that arise from the presence of spurious correlations, specifically for classification tasks. Because these learnable correlations hold for some but not all samples in a dataset, standard training with ERM may result in highly variable performance: a model that classifies datapoints based on spurious correlations does well for some subsets or “groups” of the data but not others. To improve model robustness and avoid learning spurious correlations, prior work introduces the goal to maximize worst-group accuracy (Sagawa et al., 2019). Related works broadly fall under two categories:

Improving robustness with group information. If information such as spurious attribute labels is provided, one can divide the data into explicit groups as defined in Sec. 2, and then train to directly minimize the worst group-level error among these groups. This is done in group DRO (GDRO) (Sagawa et al., 2019), where the authors propose an online training algorithm that focuses training updates over datapoints from higher-loss groups. Goel et al. (2020) also adopt this approach with their method CycleGAN Augmented Model Patching (CAMEL). However, similar to our motivation, they argue that a stronger modeling goal should be placed on preventing a model from learning group-specific features. Their approach involves first training a CycleGAN (Zhu et al., 2017) to learn the data transformations from datapoints in one group to another that share the same class label. They then apply these transformations as data augmentations to different samples, intuitively generating new versions of the original samples that take on group-specific features. Finally, they train a new model with a consistency regularization objective to learn invariant features between transformed samples and their sources. Unlike their consistency loss, we accomplish a similar objective to learn group-invariant features with contrastive learning. Our first training stage is also less expensive.
Instead of training a CycleGAN and then using it to augment datapoints, we train a relatively simple standard ERM classification model, sometimes with only a few number of epochs, and use its predictions to identify pairs of datapoints to serve a similar purpose. Finally, unlike both CAMEL and GDRO, we do not require spurious attribute or group labels for each training datapoints. We can then apply CNC in less restrictive settings where such information is not known. Related to GDRO are methods that aim to optimize a "Pareto-fair" objective, more general than simply the worst-case group performance. Notable examples are the works of Balashankar et al. (2019) and Martinez et al. (2020). However, these approaches similarly do not directly optimize for good representation alignment (unlike our work). Improving robustness without training group information. More similar to our approach are methods that do not assume group information at training time, and only require validation set spurious attribute labels for fine-tuning. As validation sets are typically much smaller in size than training sets, an advantage of CNC and comparable methods is that we can improve the accessibility of robust training methods to a wider set of problems. One popular line of work is distributionally robust optimization (DRO), which trains models to minimize the worst loss within a ball centered around the observed distribution (Ben-Tal et al., 2013; Wiesemann et al., 2014; Duchi & Namkoong, 2019; Levy et al., 2020; Curi et al., 2020; Oren et al., 2019). This includes the CVaR DRO (Levy et al., 2020) method we evaluate against. However, prior work has shown that these approaches may be too pessimistic, optimizing not just for worst-group accuracy but worst possible accuracy within the distribution balls (Sagawa et al., 2019), or too undirected, optimizing for too many subpopulations, e.g. by first upweighting minority points but then upweighting majority points in later stages of training (Liu et al., 2021). Pezeshki et al. (2020) instead suggest that gradient starvation (GS), where neural networks only learn to capture statistically dominant features in the data (Combes et al., 2018), is the main culprit behind learning spurious correlations, and introduce a “spectral decoupling” regularizer to alleviate GS. However this does not directly prevent models from learning dependencies on spurious attributes. Similar to CAMEL, Taghanaki et al. (2021) propose Contrastive Input Morphing (CIM), an image dataset-specific method that aims to learn input feature transformations that remove the effects of spurious or task-irrelevant attributes. They do so without group labels, training a transformation network with a triplet loss to transform input images such that a given transformed image’s structural similarity metric (based on luminance, contrast, and structure (Wang et al., 2003)) is more similar to a “positive” image from the same class than a “negative” image from a different class. They then train a classifier on top of these representations. Instead of pixel-level similarity metrics, CNC enforces similarity in a neural network’s hidden-layer representations, allowing CNC to apply to non-image modalities. Additionally, we sample positives and negatives not just based on class label, but also the learned spurious correlations of an ERM model (via its trained predictions). 
We hypothesize that our sampling scheme, which intuitively provides “harder” positive and negative examples, allows CNC to more strongly overcome spurious correlations. Most similar to our approach are methods that first train an initial ERM model with the class labels as a way to identify data points belonging to minority groups, and subsequently train an additional model with greater emphasis on the estimated minority groups. Sohoni et al. (2020) demonstrate that even when only trained on the class labels, neural networks learn feature representations that can be clustered into groups of data exhibiting different spurious attributes. They use the resulting cluster labels as estimated group labels before running GDRO on these estimated groups. Meanwhile, Nam et al. (2020) train a pair of models, where one model minimizes a generalized cross-entropy loss (Zhang & Sabuncu, 2018), such that the datapoints this model classifies incorrectly largely correspond to those in the minority group. They then train the other model on the same data but upweight the minority-group-estimated points. While they interweave training of the biased and robust model, Liu et al. (2021) instead train one model first with a shortened training time (but the standard cross-entropy objective), and show that then upsampling the incorrect data points and training another model with ERM can yield higher worst-group accuracy. Creager et al. (2021) first train an ERM model, and then softly assign the training data into groups under which the initial trained ERM model would maximally violate the invariant risk minimization (IRM) objective. In particular, the IRM objective is maximally satisfied if a model’s optimal classifier is the same across groups (Arjovsky et al., 2019), and EIIL groups are inferred such that the initial ERM model’s representations exhibit maximum variance within each group. Finally, Nagarajan et al. (2020) provide a theoretical understanding of how ERM picks up spurious features under dataset imbalance. They consider a setting involving a single spurious feature that is correlated with the class label and analyze the max-margin classifier in the presence of this spurious feature. In our work, we demonstrate that the ERM model’s predictions can be leveraged for more than estimating groups and training a new model with differently weighted supervised learning. Instead, we can specifically identify pairs of points that a contrastive model can then learn invariant features between. Our core contribution comes from rethinking the objective with a contrastive loss that more directly reduces the model’s ability to learn spurious correlations.

D.2 CONTRASTIVE LEARNING

Our method also uses contrastive learning, a simple yet powerful framework for both self-supervised (Chen et al., 2020; Oord et al., 2018; Tian et al., 2019; Song & Ermon, 2020; Sermanet et al., 2018; Hassani & Khasahmadi, 2020; Robinson et al., 2021) and supervised (Khosla et al., 2020; Gunel et al., 2021) representation learning. The core idea is to learn data representations that maximize the similarity between a given input “anchor” and distinct views of the same input (“positives”). Frequently this also involves contrasting positives with “negative” data samples without any assumed relation to the anchor (Bachman et al., 2019). Core components then include some way to source multiple views, e.g.
with data transformations (Chen et al., 2020), and training objectives similar to noise contrastive estimation (Gutmann & Hyvärinen, 2010; Mnih & Kavukcuoglu, 2013). An important component of contrastive learning is the method by which appropriate positives and negatives are gathered. For sampling positives, Chen et al. (2020) show that certain data augmentations (e.g. crops and cutouts) may be more beneficial than others (e.g. Gaussian noise and Sobel filtering) when generating anchors and positives for unsupervised contrastive learning. von Kügelgen et al. (2021) theoretically study how data augmentations help contrastive models learn core content attributes which are invariant to different observed “style changes”. They propose a latent variable model for self-supervised learning. Tian et al. (2020) further study what makes good views for contrastive learning. They propose an “InfoMin principle”, where anchors and positives should share the least information necessary for the contrastive model to do well on the downstream task. For sampling negatives, Robinson et al. (2021) show that contrastive learning also benefits from using “hard” negatives, which (1) are actually a different class from the anchor (which they approximate in the unsupervised setting) and (2) embed closest to the anchor under the encoder’s current data representation. Both of these approaches capture the principle that if positives are always too similar to the anchor and negatives are always too different, then contrastive learning may be inefficient at learning generalizable representations of the underlying classes. In our work, we incorporate this principle by sampling data points with the same class label but different ERM predictions–presumably because of spurious attribute differences–as anchor and positive views, while sampling negatives from data points with different class labels but the same ERM prediction as the anchor. The anchors and positives are different enough that a trained ERM model predicted them differently, while the anchors and negatives are similar enough that the trained ERM model predicted them the same. Contrasting the above then allows us to exploit both “hard” positive and negative criteria for our downstream classification task. In Appendix A.3, we show that removing this ERM-guided sampling (i.e. only sampling positives and negatives based on class information), as well as trying different negative sampling procedures, leads to substantially lower worst-group accuracy with CNC. One limitation of our current theoretical analysis regarding the alignment loss (cf. Section 3.2) is that we require knowing the group labels to compute the RHS of equation (6) (in particular, the alignment loss). An interesting question for future work is to provide a better theoretical understanding of the alignment induced by CNC in the context of spurious correlations. D.3 LEARNING INVARIANT REPRESENTATIONS Our work is also similar in motivation to Invariant Risk Minimization (IRM) (Arjovsky et al., 2019), Predictive Group Invariance (PGI) (Ahmed et al., 2021), and other related works in domain-invariant learning (Krueger et al., 2020; Parascandolo et al., 2020; Ahuja et al., 2020; Creager et al., 2021). These methods aim to train models that learn a single invariant representation that is consistently optimal (e.g. with respect to classifying data) across different domains or environments. 
These environments can be thought of as data groups, and while traditionally methods such as IRM require that environment labels are known, recent approaches such as Environment Inference for Invariant Learning (EIIL) (Creager et al., 2021) and Predictive Group Invariance (PGI) (Ahmed et al., 2021) similarly aim to infer environments with an initial ERM model. In EIIL, they next train a more robust model with an invariant learning objective, similarly selecting models based on the worst-group error on the validation set. However, they train this model using IRM or Group DRO with the inferred environments as group labels
1. What is the focus and contribution of the paper regarding subgroup robustness? 2. What are the strengths of the proposed two-stage method, particularly in terms of learned representations and experimental performance? 3. What are the weaknesses of the paper, especially regarding the theoretical proof and its limitations? 4. Do you have any concerns or questions regarding the training details and their importance in the method's functioning? 5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Review
Summary Of The Paper This paper discusses a two-stage method for improving a model's subgroup robustness, first training an ERM classifier and then performing contrastive learning on the representations. They provide theoretical justification for this procedure and experimental verification of its performance. Review Strengths: it's a good idea to explore the connection between learned representations and subgroup performance they show fairly clearly that in the explored methods, better separated representations often yield better predictions the method seems to perform well experimentally Weaknesses: I'm not sure I totally buy the proof being that intuitively useful - in particular, due to the bound B on the weight matrix. My hesitancy is because the weight matrix and the representations are learned jointly - in fact, we could get equivalent predictions by scaling the weight matrix down and the representations up. Also, the Lipschitz and boundedness constraints on the loss functions do not really apply in any of the settings explored experimentally. I get a little lost in Sec 5.2. I don't what understand the role of ERM "predicting the sensitive attribute" - I thought the point was for ERM to predict the label? and how do ERM's predictions of the sensitive attribute play into the CNC algorithm? there are some training details buried in the appendix which seem worth discussing - for instance the clustering-based prediction from the first step ERM model seems like an unintuitive step which may be fairly important to the functioning of the method. I would like to see this discussed in the main body, possibly with an ablation study. In my experience, clustering approaches can be quite helpful for these types of problems and I would like to know a bit more about the role it plays, given that it is far from the first thing you would think of doing (which would be just using the standard linear layer) Other thoughts: In Figure 3, the relationship I would really like to see is L_align vs Accuracy: this is the one that makes your point most compellingly In Fig 3c, I disagree with the characterization that high worst group accuracy corresponds to a combination of high I(Y,Z) and low I(A, Z). It looks like WG accuracy is mostly (but not fully) invariant to I(A, Z) in this plot, with the level-colour sets extending horizontally (more or less) across the plot Some of the notation in the proof in 3.2 is a little sloppy - in particular, y is overloaded in the definitions of L_wg and L_avg, both being used inside the scope of the expectation and outside it
ICLR
Title Multi-Precision Policy Enforced Training (MuPPET) : A precision-switching strategy for quantised fixed-point training of CNNs Abstract Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, training time is reduced through low-precision data representations and computations. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach utilising two different precision levels, FP32 (32-bit floating-point precision) and FP16/FP8 (16-/8-bit floating-point precision), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains.This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The novel training strategy, MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition point between precision regimes. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the utilised hardware architecture and yields improvements in training time and energy efficiency compared to state-ofthe-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, MuPPET achieves the same accuracy as standard full-precision training with an average training-time speedup of 1.28× across the networks. 1 INTRODUCTION Convolutional neural networks (CNNs) have demonstrated unprecedented accuracy in various machine learning tasks, from video understanding (Gan et al., 2015; He et al., 2018) to drone navigation (Loquercio et al., 2018; Kouris & Bouganis, 2018). To achieve such high levels of accuracy in inherently complex applications, current methodologies employ the design of large and complex CNN models (Szegedy et al., 2017; Huang et al., 2017) trained over large datasets (Deng et al., 2009; Lin et al., 2014). Nevertheless, the combination of large models and massive datasets results in long training times. This in turn leads to long turn-around times which limits the productivity of deep learning practitioners and prohibits wider experimentation. For instance, automatic tuning and search of neural architectures (Cai et al., 2018; Zhong et al., 2018) is a rapidly advancing area where accelerated training enables improving the produced networks. To counteract these long turn-around times, substantial research effort has been invested in hyperparameter tuning for the acceleration of training, with a particular focus on batch size and content. One line of work maximises memory and hardware utilisation by changing the batch size, in order to perform CNN training specific prefetching, scheduling and dependency improvement (Chen et al., 2019; Rhu et al., 2016). Other works focus on altering batch size or reconstructing minibatches to improve the convergence rate while sustaining high hardware utilisation (Devarakonda et al., 2017; Johnson & Guestrin, 2018; Peng et al., 2019), demonstrating up to 6.25× training speedup. A number of studies have focused on the use of reduced-precision training schemes. 
Reduced-precision arithmetic involves the utilisation of data formats that have smaller wordlengths than the conventional 32-bit floating-point (FP32) representation and is an approach for co-optimising processing speed, memory footprint and communication overhead. Existing literature can be categorised into works that use reduced precision to accelerate only the training stage while targeting an FP32 model, and those that produce networks with quantised weights. Regarding the former, Courbariaux et al. (2015) and Gupta et al. (2015) utilise dynamic quantisation and stochastic rounding respectively as a means to combat the accuracy loss due to quantisation. Nevertheless, the effectiveness of the proposed schemes has only been demonstrated on small-scale datasets such as CIFAR-10 and MNIST, and on a limited set of networks. Furthermore, the range of quantisation levels that has been explored varies greatly, with a number of works attacking the problem by focusing on mild quantisation levels such as half-precision floating-point (FP16) (Micikevicius et al., 2018), while others focus on lower precisions such as 8-bit floating-point (FP8) (Wang et al., 2018). Finally, quantisation has also been used as a means of reducing the memory and communication overhead in distributed training (De Sa et al., 2015; 2017; Alistarh et al., 2017). At the same time, the characteristics of modern CNN workloads and the trend towards quantised models have led to an emergence of specialised hardware processors with support for low-precision arithmetic at the hardware level. From custom designs such as Google’s TPUs (Jouppi et al., 2017) and Microsoft’s FPGA-based Brainwave system (Fowers et al., 2018) to commodity devices such as NVIDIA’s Turing GPUs, existing platforms offer native support for reduced-precision data types including 16-bit floating-point (FP16), and 8- (INT8) and 4-bit (INT4) fixed-point, providing increased parallelism for lower bitwidths. Although these platforms have been mainly designed for the inference stage, the low-precision hardware offers significant opportunities for accelerating the time-consuming training stage. In this respect, there is an emerging need to provide training algorithms that can leverage these existing hardware optimisations and provide higher training speed. This work tackles the field of reduced-precision training at an algorithmic level. Independently of the number of quantisation levels chosen, or how extreme the quantisation is, this work proposes a metric that estimates the amount of information each new training step obtains for a given quantisation level, by capturing the diversity of the computed gradients across epochs. This enables the design of a policy that, given a set of quantisation levels, decides at run time appropriate points at which to increase the precision of the training process, without impacting the achieved test accuracy compared to training in FP32. Due to its agnostic nature, it remains orthogonal and complementary to existing low-precision training schemes. Furthermore, by pushing the precision below the 16-bit bitwidth of existing state-of-the-art techniques, the proposed method is able to leverage the low-precision capabilities of modern processing systems to yield training speedups without penalising the resulting accuracy, significantly improving the time-to-accuracy trade-off.
2 BACKGROUND AND RELATED WORK

The state-of-the-art method for training in reduced precision is mixed-precision training (Micikevicius et al., 2018). The authors propose to employ low-precision FP16 computations in the training stage of high-precision CNNs that perform inference in FP32. Along the training phase, the algorithm maintains a high-precision FP32 copy of the weights of the network, known as a master copy. At each minibatch, the inputs and weights are quantised to FP16, with all computations of the forward and backward pass performed in FP16, yielding memory footprint and runtime savings. Under this scheme, each stochastic gradient descent (SGD) update step entails accumulating FP16 gradients into the FP32 master copy of the weights, with this process performed iteratively throughout the training of the network. Micikevicius et al. (2018) evaluate their scheme over a set of state-of-the-art models on ImageNet, and show that mixed-precision training with FP16 computations achieves comparable accuracy to standard FP32 training. Wang et al. (2018) also presented a method to train an FP32 model using 8-bit floating-point (FP8). The authors propose a hand-crafted FP8 data type, together with a chunk-based computation technique, and employ strategies such as stochastic rounding to alleviate the accuracy loss due to training at reduced precision. For AlexNet, ResNet18 and ResNet50 on ImageNet, Wang et al. (2018) demonstrate comparable accuracy to FP32 training while performing computations in FP8. Additionally, the works presented in (Zhou et al., 2016; Chen et al., 2017) approach the problem of reduced-precision training employing fixed-point computations. FxpNet (Chen et al., 2017) was only evaluated on CIFAR-10, failing to demonstrate performance on more complex datasets such as ImageNet. DoReFa-net (Zhou et al., 2016) was tested on ImageNet but only ran on AlexNet, missing out on state-of-the-art networks such as GoogLeNet and ResNet. All related works focus on accelerating the training of an FP32 model through reduced-precision computations. At the hardware level, 8-bit fixed-point multiplication uses 18.5× less energy and 27.5× less area with up to 4× lower multiplication times than FP32 (Sze et al., 2017). Consequently, this work attempts to push the boundaries of reduced-precision training by moving to reduced-precision fixed-point computations while updating an FP32 model. Preliminary tests (see Sec. 4.4 for details) demonstrated that training solely in 8-bit fixed-point results in a significant degradation of validation accuracy compared to full FP32 training. This work aims to counteract this degradation by progressively increasing the precision of computations throughout training, in an online manner determined by the proposed metric inspired by gradient diversity (Yin et al., 2018). Additionally, by operating in an online fashion, MuPPET tailors the training process to best suit the particular network-dataset pair at each stage of the training process. Gradient diversity was introduced by Yin et al. (2018) as a metric measuring the dissimilarity between sets of gradients that correspond to different minibatches. The gradient diversity of a set of gradients is defined as
$$\Delta S(w) = \frac{\sum_{i=1}^{n} \|\nabla f_i(w)\|_2^2}{\left\| \sum_{i=1}^{n} \nabla f_i(w) \right\|_2^2} = \frac{\sum_{i=1}^{n} \|\nabla f_i(w)\|_2^2}{\sum_{i=1}^{n} \|\nabla f_i(w)\|_2^2 + \sum_{i \ne j} \langle \nabla f_i(w), \nabla f_j(w) \rangle} \quad (1)$$
where $\nabla f_i(w)$ represents the gradient of weights $w$ for minibatch $i$. The key point to note in Eq. (1) is that the denominator contains the inner product between two gradients from different minibatches.
Thus, orthogonal gradients would result in high gradient diversity, while similar gradients would result in low gradient diversity. The proposed framework, MuPPET, enhances this concept by considering gradients between minibatches across epochs and proposes the developed metric as a proxy for the amount of new information gained in each training step. Section 3 further expands on how gradient diversity is incorporated into the MuPPET algorithm.

3 METHODOLOGY

3.1 MULTILEVEL OPTIMISATION FOR TRAINING CNNS

Conventionally, the training process of a CNN can be expressed as in Eq. (2). Given a CNN model $f$ parameterised by a set of weights $w \in \mathbb{R}^D$, where $D$ is the number of weights of $f$, training involves a search for weight values that minimise the task-specific empirical loss, Loss, on the target dataset. Typically, a fixed arithmetic precision is employed across the training algorithm, with FP32 currently being the de facto representation used by the deep learning community.
$$\min_{w^{(\mathrm{FP32})} \in \mathbb{R}^D} \mathrm{Loss}\!\left(f\!\left(w^{(\mathrm{FP32})}\right)\right) \quad (2)$$
The proposed method follows a different approach by introducing a multilevel optimisation scheme (Migdalas et al., 2013) that leverages the performance gains of reduced-precision arithmetic. The single optimisation problem of Eq. (2) is transformed into a series of optimisation problems, with each one employing a different precision for computations but maintaining weight storage at FP32 precision. Under this scheme, an N-level formulation comprises N sequential optimisation problems to be solved, with each level corresponding to a “finer” model. Overall, this formulation adds a hierarchical structure to the training stage, with an increasing arithmetic precision across the hierarchy of optimisation problems. Starting from the N-th problem, the inputs, weights, and activations of the CNN model $f$ are quantised with precision $q^N$, which is the lowest precision in the system and represents the coarsest version of the model. Each of the N levels progressively employs higher precision until the first level is reached, which corresponds to the original problem of Eq. (2). Formally, at the i-th level, the optimisation problem is formulated as
$$\min_{w^{(q^i)} \in V} \mathrm{Loss}\!\left(f\!\left(w^{(q^i)}\right)\right) \quad \text{s.t.} \quad V = \left\{ w^{(q^i)} \in [\mathrm{LB}, \mathrm{UB}]^D \right\} \quad (3)$$
where LB and UB are the lower and upper bounds of the representational range of precision $q^i$. The target CNN model $f$ uses a set of weights quantised with precision $q^i$ and hence the solution of this optimisation problem can be interpreted as an approximation to the original problem of Eq. (2). To transition from one level to the next, the result of each level of optimisation is employed as a starting point for the next level, up to the final outermost optimisation that reduces to Eq. (2).

3.2 THE MUPPET ALGORITHM

Fig. 1 presents the process of training a CNN using the proposed algorithm. All figures in this paper are shown in Appendix B at a larger scale for enhanced readability. Within each epoch, MuPPET performs mixed-precision training where the weights are stored in an FP32 master copy and are quantised to the desired fixed-point precision on-the-fly. At epoch j, the computations for the forward and backward passes (F and B blocks respectively) are performed at the current quantised precision ($q_j$) and the activations as well as the gradients obtained from each layer are quantised by the quantiser module before being passed on to the next layer, or stored. After each minibatch, the full-precision master copy of the weights is updated using a quantised gradient matrix.
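For intuition, the following is a rough PyTorch-style sketch of the per-minibatch step just described: the FP32 master copy is quantised on the fly, the forward and backward passes run on the quantised snapshot, and a quantised gradient is accumulated back into the master copy. The function and argument names are our own assumptions, layer-by-layer activation quantisation, momentum, and weight decay are omitted for brevity, and this is not the authors' implementation.

```python
import copy
import torch

def muppet_training_step(model_fp32, quantise, batch, loss_fn, lr, wordlength):
    """One emulated fixed-point SGD step with an FP32 master copy (sketch).

    model_fp32 : FP32 master copy of the network
    quantise   : callable emulating fixed-point quantisation of an FP32 tensor
    wordlength : fixed-point wordlength of the current optimisation level
    """
    inputs, targets = batch

    # Forward/backward pass on an on-the-fly quantised snapshot of the master weights.
    model_q = copy.deepcopy(model_fp32)
    with torch.no_grad():
        for p in model_q.parameters():
            p.copy_(quantise(p, wordlength))

    loss = loss_fn(model_q(inputs), targets)
    loss.backward()

    # Quantise the gradients and accumulate them into the FP32 master copy (plain SGD).
    with torch.no_grad():
        for p_master, p_q in zip(model_fp32.parameters(), model_q.parameters()):
            if p_q.grad is not None:
                p_master -= lr * quantise(p_q.grad, wordlength)
    return loss.item()
```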
As discussed in Section 3.1, the quantisation level is gradually increased over the period of the training. In MuPPET, switching between these optimisation levels at the correct times is crucial in order not to compromise the final validation accuracy. In this respect, MuPPET introduces a precision switching policy based on an inter-epoch gradient diversity (Yin et al., 2018) metric that dictates when to switch to the next precision. Details of the switching policy are presented in Section 3.3.

3.2.1 QUANTISATION STRATEGY

In order to implement quantised training, a quantisation strategy needs to be defined. The proposed dynamic quantisation strategy utilises block floating-point arithmetic (also known as dynamic fixed-point), where each fixed-point number is represented as a pair of a $\mathrm{WL}_{net}$-bit signed integer $x$ and a scale factor $s$, such that the value is represented as $x \times 2^{-s}$. During the forward and backward passes of the training process, the weights and feature maps are both quantised, and the multiplication operations are performed at the same low precision. The quantisation method employs a stochastic rounding methodology (Gupta et al., 2015). The accumulation stage of the matrix-multiply operation is accumulated into a 32-bit fixed-point value to prevent overflow on the targeted networks (the accumulator wordlength is large enough to accommodate the current CNN models without overflow). The result of this matrix multiplication is subsequently quantised to the target wordlength before being passed as input to the next layer. Following the block floating-point scheme, quantisation is performed such that each weight and feature map matrix in the network has a single scale factor shared by all values within the matrix. The quantisation configuration for the i-th level of optimisation and the l-th layer, $q^i_l$, and the full set of configurations, $q^i$, are given by the left- and right-hand sides of Eq. (4) respectively.
$$q^i_l = \left\langle \mathrm{WL}_{net},\; s^{\mathrm{weights}}_l,\; s^{\mathrm{act}}_l \right\rangle^i, \;\forall l \in [1, |L|] \qquad \text{and} \qquad q^i = \left\langle q^i_l \mid \forall l \in [1, |L|] \right\rangle \quad (4)$$
where $|L|$ is the number of layers of the target network, $\mathrm{WL}_{net}$ is the fixed wordlength across the network, and $s^{\mathrm{weights}}_l$ and $s^{\mathrm{act}}_l$ are the scaling factors for the weights and activations respectively, of the l-th layer for the i-th level of optimisation. As a result, for N levels, there are N distinct quantisation schemes; N − 1 of these schemes use varying fixed-point precisions, and the finest level of quantisation, $q^1$, is single-precision floating-point (FP32). The scaling factor for a matrix X is first calculated as shown in Eq. (5) and individual elements are quantised as in Eq. (6).
$$s^{\{\mathrm{weights, act}\}} = \left\lfloor \log_2 \left( \min \left( \frac{\mathrm{UB} + 0.5}{X^{\{\mathrm{weights, act}\}}_{\max}},\; \frac{\mathrm{LB} - 0.5}{X^{\{\mathrm{weights, act}\}}_{\min}} \right) \right) \right\rfloor \quad (5)$$
$$x^{\{\mathrm{weights, act}\}}_{\mathrm{quant}} = \left\lfloor x^{\{\mathrm{weights, act}\}} \cdot 2^{s^{\{\mathrm{weights, act}\}}} + \mathrm{Unif}(-0.5, 0.5) \right\rceil \quad (6)$$
where $X^{\{\mathrm{weights, act}\}}_{\{\max, \min\}}$ is either the maximum or minimum value in the weights or feature map matrix of the current layer, LB and UB are the lower and upper bound of the current wordlength $\mathrm{WL}_{net}$, and Unif(a, b) represents sampling from the uniform distribution in the range [a, b]. Eq. (5) adds 0.5 and −0.5 to UB and LB respectively to ensure maximum utilisation of $\mathrm{WL}_{net}$.

3.2.2 INFORMATION TRANSFER BETWEEN LEVELS

Employing multilevel training for CNNs requires an appropriate mechanism for transferring information between levels. To achieve this, the proposed optimiser maintains a master copy of the weights in full precision (FP32) throughout the optimisation levels.
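To ground the block floating-point scheme of Eqs. (5)-(6) above, here is a minimal sketch of the quantiser with stochastic rounding, emulated in FP32 tensors as in the paper's evaluation; the function and argument names are illustrative assumptions rather than the authors' code, and it assumes the input matrix is not identically zero.

```python
import torch

def block_fp_quantise(x, wordlength=8):
    """Emulated block floating-point quantisation of a weight/activation matrix (Eqs. (5)-(6)).

    A single scale factor s is shared by all values in x; each element is
    stochastically rounded onto a signed `wordlength`-bit integer grid.
    """
    ub = 2.0 ** (wordlength - 1) - 1   # upper bound of the signed integer range
    lb = -2.0 ** (wordlength - 1)      # lower bound of the signed integer range

    # Eq. (5): shared scale factor chosen so that the extreme values fit the range.
    s = torch.floor(torch.log2(torch.min(
        (ub + 0.5) / x.max().clamp_min(1e-12),
        (lb - 0.5) / x.min().clamp_max(-1e-12),
    )))

    # Eq. (6): scale, add Unif(-0.5, 0.5) noise, round to the nearest integer (stochastic rounding).
    x_int = torch.round(x * 2.0 ** s + torch.rand_like(x) - 0.5).clamp(lb, ub)

    # Return the FP32 emulation of the fixed-point value x_int * 2^{-s}.
    return x_int * 2.0 ** (-s)
```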
Similar to mixed-precision training (Micikevicius et al., 2018), at each level the SGD update step is performed by accumulating a fixed-point gradient value into the FP32 master copy of the weights. Starting from the coarsest quantisation level i = N, to transfer the solution from level i to level i − 1, the master copy is quantised using the quantisation scheme $q^{i-1}$. With this approach, the weights are maintained in FP32 and are quantised on-the-fly during run time in order to be utilised in each training step.

3.3 PRECISION SWITCHING POLICY

The metric to decide when to switch between levels of quantisation is inspired by Yin et al. (2018) and based on the concept of gradient diversity (Eq. (1)). MuPPET computes $\Delta S(w)$ between gradients obtained across epochs as a proxy to measure the information that is obtained during the training process; the lower the diversity between the gradients, the less information this level of quantisation provides towards the training of the model. Therefore, the proposed method comprises a novel normalised inter-epoch version of the gradient diversity along with a run-time policy to determine the epochs at which to switch precision. The following policy is employed to determine when a precision switch is to be performed. For a network with layers L and a quantisation scheme $q^i$ that was switched into at epoch e:

1. For each epoch j and each layer l ∈ L, the last minibatch’s gradient, $\nabla f^j_l(w)$, is stored.
2. After r (resolution) epochs, the inter-epoch gradient diversity at epoch j is
$$\Delta S(w)^j = \frac{1}{|L|} \sum_{\forall l \in L} \frac{\sum_{k=j-r}^{j} \|\nabla f^k_l(w)\|_2^2}{\left\| \sum_{k=j-r}^{j} \nabla f^k_l(w) \right\|_2^2} \quad (7)$$
3. At an epoch j, given a set of gradient diversities $S(j) = \{ \Delta S(w)^i \;\forall\; e \le i < j \}$, the ratio $p = \frac{\max S(j)}{\Delta S(w)^j}$ is calculated.
4. An empirically determined decaying threshold
$$T = \alpha + \beta e^{-\lambda j} \quad (8)$$
is placed on the ratio p.
5. If p violates T more than γ times, a precision switch is triggered and S(j) = ∅.

As long as the gradients across epochs remain diverse, $\Delta S(w)^j$ (Eq. (7)) in the denominator of p sustains a high value and the value of p remains low. However, when the gradients across epochs become similar, $\Delta S(w)^j$ decreases and the value of p becomes larger. Generalisability across epochs is obtained as p accounts for the change in information relative to the maximum information available since the last precision change. Hence, the metric acknowledges the presence of temporal variations in information provided by the gradients. Generalisability across networks and datasets is maintained as p measures a ratio. Consequently, the absolute values of the gradients, which could vary between networks and datasets, matter less. Overall, MuPPET employs the metric p as a mechanism to trigger a precision switch whenever p violates threshold T more than γ times. The likelihood of observing r gradients across r epochs that have low gradient diversity, especially at early stages of training, is low. The intuition applied here is that when this does happen at a given precision, it may be an indication that information is being lost due to quantisation and thus corresponds to a high p value, which argues for moving to a higher bitwidth.

3.3.1 HYPERPARAMETERS

The hyperparameters for the proposed MuPPET algorithm are the following: 1) values of α, β, and λ that define the decaying threshold from Eq.
(8), 2) the number of threshold violations allowed before the precision change is triggered (γ), 3) the resolution r, 4) the set of precisions at which training is performed, and 5) the epochs at which the learning rate is changed. The values of α, β, λ, r, and γ were set at 1, 1.5, 0.1, 3, and 2 respectively after empirical cross-validation. These were tuned by running training on AlexNet and ResNet20 on the CIFAR-10 dataset. All MuPPET hyperparameters remain the same regardless of network or dataset. Regarding training hyperparameters, batch size was increased from 128 to 256 going from CIFAR-10 to ImageNet. All other training hyperparameters, including learning rate remained constant. Analysis of generalisability and the training hyperparameters used are presented in Section 4.1. The empirically-chosen quantised precisions at which training was performed were 8-, 12-, 14- and 16-bit fixed-point. Precisions below this did not result in any progress towards convergence for any network. Overall, MuPPET introduces a policy that allows to decide at run time an appropriate point to switch between quantisation levels. After training at 16-bit fixed-point, the rest of the training is performed at FP32 until the desired validation accuracy is reached. Decaying the learning rate causes a finer exploration of the optimisation space as does increasing the quantisation level. Therefore, the learning rate was kept constant during quantised training and was decayed only after switching to FP32. 4 EVALUATION OF MUPPET 4.1 GENERALISABILITY The MuPPET framework was evaluated on its applicability across epochs, networks and datasets. Fig. 2 shows the value of the metric p over the epochs in blue, and the decaying threshold described in Eq. (8) in orange. The number of epochs for which training in each precision was performed is shown by the various overlay colours. The first violation is denoted by a red dot and the second violation is not seen as it occurs exactly at the point of switching. The graphs show that across various networks and datasets, the values of p stay relatively similar, backing the choice of a universal decaying factor. Furthermore, empirical results for CIFAR-10 indicated that changing from one fixed-point precision to another too early in the training process had a negative impact on the final validation accuracy. Using a decaying threshold ensures that the value of p needs to be much higher in the initial epochs to trigger a precision change due to the volatility of p in early epochs of training. 4.2 PERFORMANCE EVALUATION The accuracy results presented in this section utilised the proposed stochastic quantisation strategy. The methodology was developed using PyTorch. As the framework does not natively support lowprecision implementations, all quantisation and computations corresponding to 8-, 12-, 14-, and 16- bit precisions were performed through emulation on floating-point hardware. All hyperparameters not specified below were left as PyTorch defaults. For all networks, an SGD optimiser was used with batch sizes 128 on CIFAR-10 or 256 on ImageNet, momentum of 0.9 and weight decay of 1e−4. As a baseline, an FP32 model with identical hyperparameters (except for batch size) was trained. The baseline FP32 training was performed by training for 150 epochs and reducing the learning rate by a factor of 10 at epochs 50 and 100. 
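To make the run-time behaviour of the policy in Section 3.3 concrete alongside the hyperparameter values quoted above (α = 1, β = 1.5, λ = 0.1, r = 3, γ = 2), the following is a hedged sketch of the per-epoch check; the class and method names are our own, and the reading that the switch fires at the γ-th threshold violation follows the behaviour shown in Fig. 2 rather than an exact specification.

```python
import math
import torch

class PrecisionSwitchPolicy:
    """Sketch of MuPPET's run-time precision-switching check (Section 3.3)."""

    def __init__(self, alpha=1.0, beta=1.5, lam=0.1, resolution=3, max_violations=2):
        self.alpha, self.beta, self.lam = alpha, beta, lam
        self.r = resolution                 # number of past epochs in the diversity window
        self.gamma = max_violations         # threshold violations tolerated before switching
        self.grad_history = []              # last-minibatch gradients per epoch: {layer_name: tensor}
        self.diversities = []               # the set S(j) of inter-epoch gradient diversities
        self.violations = 0

    def _inter_epoch_diversity(self):
        """Eq. (7): gradient diversity over the last r+1 stored epochs, averaged over layers."""
        window = self.grad_history[-(self.r + 1):]
        per_layer = []
        for layer in window[0]:
            grads = torch.stack([g[layer].flatten() for g in window])
            numerator = (grads.norm(dim=1) ** 2).sum()
            denominator = grads.sum(dim=0).norm() ** 2
            per_layer.append(numerator / denominator)
        return torch.stack(per_layer).mean().item()

    def should_switch(self, epoch, last_minibatch_grads):
        """Call once per epoch with the last minibatch's per-layer gradients."""
        self.grad_history.append(last_minibatch_grads)
        if len(self.grad_history) <= self.r:
            return False

        current = self._inter_epoch_diversity()
        if self.diversities:
            p = max(self.diversities) / current                               # step 3
            threshold = self.alpha + self.beta * math.exp(-self.lam * epoch)  # Eq. (8)
            if p > threshold:
                self.violations += 1                                          # step 5 (violation count)
        self.diversities.append(current)

        if self.violations >= self.gamma:   # switch at the gamma-th violation, then reset S(j)
            self.grad_history, self.diversities, self.violations = [], [], 0
            return True
        return False
```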
In order to achieve comparable final validation accuracy to the FP32 baseline, once MuPPET triggered a precision change out of 16-bit fixed-point, 45 training epochs at FP32 precision were performed. The learning rate was reduced by a factor of 10 every 15 FP32 training epochs. For AlexNet, ResNet18, ResNet20, and GoogLeNet, the initial learning rate was set to 0.01, 0.1, 0.1, and 0.001 respectively. The detailed breakdown of the ImageNet training runs with training and validation loss curves can be found in the Appendix A. Table 1 presents the achieved Top-1 validation accuracy of MuPPET and the FP32 baseline, together with the accuracy difference in percentage points (pp). As shown on the table, MuPPET is able to provide comparable Top-1 validation accuracy to standard FP32 training across both networks and datasets. Due to a sub-optimal training setup of GoogLeNet on ImageNet, the baseline and MuPPET training severely underperformed compared to the reported state-of-the-art works. Nevertheless, the results demonstrate the quality of training with MuPPET using identical hyperparameters. As a result, MuPPET’s performance demonstrates the effectiveness of the precision switching strategy in achieving significant acceleration of training time (Section 4.3) at negligible cost in accuracy by running many epochs at lower precision, particularly on very large datasets. 4.3 WALL-CLOCK TIME IMPROVEMENTS This section explores the gains in estimated wall-clock time of the current implementation of MuPPET (Current Impl.) with respect to baseline FP32 training, Mixed Precision by Micikevicius et al. (2018) and MuPPET’s ideal implementation (Table 2). For all performance results, the target platform was an NVIDIA RTX 2080 Ti GPU. At the moment, deep learning frameworks, such as PyTorch, do not provide native support for reduced-precision hardware. Consequently, the wall-clock times in Table 2 were estimated using a performance model developed with NVIDIA’s CUTLASS library (Kerr et al., 2018) for reduced-precision general matrix-multiplication (GEMM) employing the latest Turing architecture GPUs. The GEMMs that were accelerated were in the convolutional and fully-connected layers of each network. INT8 hardware was used to profile the 8-bit fixed-point computations, while FP16 hardware was used to profile 12-, 14-, and 16-bit fixed-point computations as well as Mixed Precision (Micikevicius et al., 2018) wall-clock time. CUTLASS (Kerr et al., 2018) natively implements bit-packing to capitalise on improved memory-bandwidth utilisation. The model for the current implementation is limited by the fact that frameworks force quantisation to happen to and from FP32. For the MuPPET (Ideal) scenario, the model assumes native hardware utilisation which would reduce the overhead by removing this restriction. As shown on Table 2, MuPPET consistently achieves 1.25-1.32× speedup over the FP32 baseline across the networks when targeting ImageNet on the given GPU. With respect to Mixed Precision, the proposed method outperforms it on AlexNet by 1.23× and delivers comparable performance for ResNet18 and GoogLeNet. Currently, the absence of native quantisation support, and hence the necessity to emulate quantisation and the associated overheads, is the limiting factor for MuPPET to achieve higher processing speed. In this respect, MuPPET run on native hardware would yield 1.05× and 1.48× speedup for ResNet18 and GoogLeNet respectively compared to Mixed Precision. 
As a result, MuPPET demonstrates consistently faster time-to-accuracy (Coleman et al., 2019) compared to Mixed Precision across the benchmarks. Additionally, while Mixed Precision has already reached its limit by using FP16 on FP16-native GPUs, the 8-, 12-, 14- and 16-bit fixed-point computations enabled by MuPPET leave space for further potential speedup when targeting next- and currentgeneration (Fowers et al., 2018) precision-optimised fixed-point platforms. Similar to the analysis in Section 4.2, Micikevicius et al. (2018) and Wang et al. (2018) compare their schemes to baseline FP32 training performed by them. The reported results demonstrate that their methods achieve similar accuracy results to our method by lying close to the respective FP32 training accuracy. As Wang et al. (2018) do not provide any results in terms of gains in wall-clock times and since they use custom FP8 hardware, their work could not be directly compared to our method. 4.4 PRECISION SWITCHING To evaluate the ability of MuPPET to effectively choose an epoch to switch precision at, AlexNet and ResNet20 were first trained using MuPPET on the CIFAR-100 dataset. The hyperparameters for MuPPET were kept the same across all runs. From the results it was noted that training at reduced precision and not switching at all causes a drop in validation accuracy of 1.4% and 1.3% for AlexNet and ResNet20 respectively, hence demonstrating the need to switch precisions when training at bit-widths as low as 8-bit fixed-point. To demonstrate the benefits of a precision switching methodology, two further sets of experiments were conducted on ResNet20 using CIFAR100 as depicted in Fig. 3. First, 34 training runs were performed (34 red dots in Fig. 3), where for each training four epochs along the standard training duration were randomly selected and used as the switching points. Second, the switching strategy MuPPET generated for AlexNet and GoogLeNet was applied to ResNet20 (2 blue dots in Fig. 3). Fig. 3 shows the best test accuracy achieved by each of the runs and the training time as estimated by our performance model described in Sec. 4.3. It shows that for a given time-budget, MuPPET runs (6 green dots) outperform on average all other experiment sets, demonstrating the need for a precision switching policy that is real-time and agnostic to network and dataset in order to achieve a good accuracy-to-training-time trade-off. 5 CONCLUSION This paper proposes MuPPET, a novel low-precision CNN training scheme that combines the use of fixed-point and floating-point representations to produce a network trained for FP32 inference. By introducing a precision-switching mechanism that decides at run time an appropriate transition point between different precision regimes, the proposed framework achieves Top-1 validation accuracies comparable to that achieved by state-of-the-art FP32 training regimes while delivering significant speedup in terms of training time. Quantitative evaluation demonstrates that MuPPET’s training strategy generalises across CNN architectures and datasets by adapting the training process to the target CNN-dataset pair during run time. Overall, MuPPET enables the utilisation of the lowprecision hardware units available on modern specialised processors, such as next-generation GPUs, FPGAs and TPUs, to yield improvements in training time and energy efficiency without impacting the resulting accuracy. 
Future work will focus on applying the proposed framework to the training of LSTMs, where the training process is more sensitive to gradient quantisation, as well as on the extension of MuPPET to include batch size and learning rate as part of its hyperparameters. Furthermore, we will explore improved quantisation techniques that could enable training convergence for bitwidths even lower than 8-bit fixed-point. A APPENDIX A The graphs in Fig. 4, 5 and 6 demonstrate both the training and validation loss of AlexNet, ResNet18 and GoogLeNet for MuPPET and FP32 runs on the ImageNet dataset. For each graph, the light gray lines indicate the point at which precision was switched in the MuPPET run. The green lines are used to show MuPPET behaviour and blue to show FP32 behaviour. Solid lines show validation loss and dashed lines show training loss. B APPENDIX B This section contains the larger versions of all the figures in the paper for enhanced clarity.
1. What is the novel contribution of the paper regarding dynamic switching between precision levels? 2. How does the approach impact the performance of trained networks, specifically regarding gradient diversity, choice of p and threshold parameters? 3. Can the pre-defined switching points between precision levels generalize across networks and datasets? 4. What are the details of the quantization scheme used in the paper, including determining scaling factors SC and the difference/relation between n and WL? 5. Can the authors provide clarification on various aspects of the paper, such as Equation 3, the algorithm in Section 3.3, Table 1, and Table 3? 6. Are there any specific concerns or limitations with the experimental design or results presented in the paper?
Review
Review The article presents an approach to reduce the precision of weights, activations and gradients to speed up the training of deep neural networks. The precision of these values is increased according to a dynamic schedule such that the original classification accuracy is reached after training. The manuscript is in most parts well written and the addressed topic is of general interest for the research community represented at ICRL. Still, I recommend a weak reject, since the core idea of the manuscript, i.e. the dynamic switching between precision levels, is not shown to be a necessary condition for good classification results. Major points: • The introduction does not give a clear statement about the novel contribution of the paper. Only the very last paragraph is specific about the paper. • Your results support that step-wise increasing the resolution speeds up training without significant losses in accuracy. However, the impact of the gradient diversity, choice of p and threshold parameters on the performance of the trained networks are unclear. What is the isolated impact of every of these choices? According to Figure 2, pre-defined switching points between precision levels may also generalize between networks and datasets. • The description of the quantization scheme is not clear enough in order to reproduce the results: o Please give details about every step from FP32 to FPx values or cite appropriate literature. o Equation 4 and 5: How are the scaling factors SC determined? o Please clarify the difference/relation between n and WL. Minor points: • Equation 3: What does “represent. range(q^i)” mean? • Text in Figure 1 and 2 is far too small and barely readable • Step 5 in Algorithm in Section 3.3: What does “p violates y more than gamma times” mean? What is y? • Please clarify “distribution approach”. Distribution of what? • Table 1: For the baseline experiments, the precision is switched from 8 to 32 bits, for MuPPET from 8 to 12 bits (see main text). What is the motivation behind these different choices? • Do you use any type of data augmentation? • Table 3: Please clarify “theoretical limit”. Does this limit include 12 and 14 bit quantisation. What do you mean by “optimized quantization implementation” in main text?
ICLR
Title Multi-Precision Policy Enforced Training (MuPPET) : A precision-switching strategy for quantised fixed-point training of CNNs Abstract Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, training time is reduced through low-precision data representations and computations. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach utilising two different precision levels, FP32 (32-bit floating-point precision) and FP16/FP8 (16-/8-bit floating-point precision), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains.This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The novel training strategy, MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition point between precision regimes. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the utilised hardware architecture and yields improvements in training time and energy efficiency compared to state-ofthe-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, MuPPET achieves the same accuracy as standard full-precision training with an average training-time speedup of 1.28× across the networks. 1 INTRODUCTION Convolutional neural networks (CNNs) have demonstrated unprecedented accuracy in various machine learning tasks, from video understanding (Gan et al., 2015; He et al., 2018) to drone navigation (Loquercio et al., 2018; Kouris & Bouganis, 2018). To achieve such high levels of accuracy in inherently complex applications, current methodologies employ the design of large and complex CNN models (Szegedy et al., 2017; Huang et al., 2017) trained over large datasets (Deng et al., 2009; Lin et al., 2014). Nevertheless, the combination of large models and massive datasets results in long training times. This in turn leads to long turn-around times which limits the productivity of deep learning practitioners and prohibits wider experimentation. For instance, automatic tuning and search of neural architectures (Cai et al., 2018; Zhong et al., 2018) is a rapidly advancing area where accelerated training enables improving the produced networks. To counteract these long turn-around times, substantial research effort has been invested in hyperparameter tuning for the acceleration of training, with a particular focus on batch size and content. One line of work maximises memory and hardware utilisation by changing the batch size, in order to perform CNN training specific prefetching, scheduling and dependency improvement (Chen et al., 2019; Rhu et al., 2016). Other works focus on altering batch size or reconstructing minibatches to improve the convergence rate while sustaining high hardware utilisation (Devarakonda et al., 2017; Johnson & Guestrin, 2018; Peng et al., 2019), demonstrating up to 6.25× training speedup. A number of studies have focused on the use of reduced-precision training schemes. 
Reducedprecision arithmetic involves the utilisation of data formats that have smaller wordlengths than the conventional 32-bit floating-point (FP32) representation and is an approach for co-optimising pro- cessing speed, memory footprint and communication overhead. Existing literature can be categorised into works that use reduced precision to accelerate only the training stage while targeting an FP32 model, and those that produce networks with quantised weights. Regarding the former, Courbariaux et al. (2015) and Gupta et al. (2015) utilise dynamic quantisation and stochastic rounding respectively as a means to combat the accuracy loss due to quantisation. Nevertheless, the effectiveness of the proposed schemes have only been demonstrated on smallscale datasets such as CIFAR-10 and MNIST, and on a limited set of networks. Furthermore, the range of quantisation levels that has been explored varies greatly, with a number of works attacking the problem by focusing on mild quantisation levels such as half-precision floating-point (FP16) (Micikevicius et al., 2018), while others focus on lower precisions such as 8-bit floating-point (FP8) (Wang et al., 2018). Finally, quantisation has also been used as a means of reducing the memory and communication overhead in distributed training (De Sa et al., 2015; 2017; Alistarh et al., 2017). At the same time, the characteristics of modern CNN workloads and the trend towards quantised models have led to an emergence of specialised hardware processors, with support for low-precision arithmetic at the hardware level. From custom designs such as Google’s TPUs (Jouppi et al., 2017) and Microsoft’s FPGA-based Brainwave system (Fowers et al., 2018) to commodity devices such as NVIDIA’s Turing GPUs, existing platforms offer native support for reduced-precision data types including 16-bit floating-point (FP16), and 8- (INT8) and 4-bit (INT4) fixed-point, providing increased parallelism for lower bitwidths. Although these platforms have been mainly designed for the inference stage, the low-precision hardware offers significant opportunities for accelerating the time-consuming training stage. In this respect, there is an emerging need to provide training algorithms that can leverage these existing hardware optimisations and provide higher training speed. This work tackles the field of reduced-precision training at an algorithmic level. Independently of the number of quantisation levels chosen, or how extreme the quantisation is, this work proposes a metric that estimates the amount of information each new training step obtains for a given quantisation level, by capturing the diversity of the computed gradients across epochs. This enables the design of a policy that, given a set of quantisation levels, decides at run time appropriate points to increase the precision of the training process at that current instant without impacting the achieved test accuracy compared to training in FP32. Due to its agnostic nature, it remains orthogonal and complementary to existing low-precision training schemes. Furthermore, by pushing the precision below the 16-bit bitwidth of existing state-of-the-art techniques, the proposed method is able to leverage the lowprecision capabilities of modern processing systems to yield training speedups without penalising the resulting accuracy, significantly improving the time-to-accuracy trade-off. 
2 BACKGROUND AND RELATED WORK

The state-of-the-art method in training in reduced precision is mixed-precision training (Micikevicius et al., 2018). The authors propose to employ low-precision FP16 computations in the training stage of high-precision CNNs that perform inference in FP32. Along the training phase, the algorithm maintains a high-precision FP32 copy of the weights of the network, known as a master copy. At each minibatch, the inputs and weights are quantised to FP16 with all computations of the forward and backward pass performed in FP16, yielding memory footprint and runtime savings. Under this scheme, each stochastic gradient descent (SGD) update step entails accumulating FP16 gradients into the FP32 master copy of the weights, with this process performed iteratively throughout the training of the network. Micikevicius et al. (2018) evaluate their scheme over a set of state-of-the-art models on ImageNet, and show that mixed-precision training with FP16 computations achieves comparable accuracy to standard FP32 training. Wang et al. (2018) also presented a method to train an FP32 model using 8-bit floating-point (FP8). The authors propose a hand-crafted FP8 data type, together with a chunk-based computation technique, and employ strategies such as stochastic rounding to alleviate the accuracy loss due to training at reduced precision. For AlexNet, ResNet18 and ResNet50 on ImageNet, Wang et al. (2018) demonstrate comparable accuracy to FP32 training while performing computations in FP8. Additionally, the works presented in (Zhou et al., 2016; Chen et al., 2017) approach the problem of reduced-precision training by employing fixed-point computations. FxpNet (Chen et al., 2017) was only evaluated on CIFAR-10, failing to demonstrate performance on more complex datasets such as ImageNet. DoReFa-net (Zhou et al., 2016) was tested on ImageNet but only on AlexNet, missing out on state-of-the-art networks such as GoogLeNet and ResNet. All related works focus on accelerating the training of an FP32 model through reduced-precision computations. At the hardware level, 8-bit fixed-point multiplication uses 18.5× less energy and 27.5× less area with up to 4× lower multiplication times than FP32 (Sze et al., 2017). Consequently, this work attempts to push the boundaries of reduced-precision training by moving to reduced-precision fixed-point computations while updating an FP32 model. Preliminary tests (see Sec. 4.4 for details) demonstrated that training solely in 8-bit fixed-point results in a significant degradation of validation accuracy compared to full FP32 training. This work aims to counteract this degradation by progressively increasing the precision of computations throughout training, in an online manner determined by the proposed metric inspired by gradient diversity (Yin et al., 2018). Additionally, by operating in an online fashion, MuPPET tailors the training process to best suit the particular network-dataset pair at each stage of the training process. Gradient diversity was introduced by Yin et al. (2018) as a metric for measuring the dissimilarity between sets of gradients that correspond to different minibatches. The gradient diversity of a set of gradients is defined as

\Delta_S(w) = \frac{\sum_{i=1}^{n} \|\nabla f_i(w)\|_2^2}{\left\|\sum_{i=1}^{n} \nabla f_i(w)\right\|_2^2} = \frac{\sum_{i=1}^{n} \|\nabla f_i(w)\|_2^2}{\sum_{i=1}^{n} \|\nabla f_i(w)\|_2^2 + \sum_{i \neq j} \langle \nabla f_i(w), \nabla f_j(w) \rangle}   (1)

where \nabla f_i(w) represents the gradient of the weights w for minibatch i. The key point to note in Eq. (1) is that the denominator contains the inner product between two gradients from different minibatches. Thus, orthogonal gradients would result in high gradient diversity, while similar gradients would result in low gradient diversity.
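To make Eq. (1) concrete, the sketch below (not taken from the authors' code) computes the gradient diversity of a set of per-minibatch gradients in PyTorch; the toy check at the end illustrates that orthogonal gradients yield a value of 1 while identical gradients yield 1/n.

```python
import torch

def gradient_diversity(grads):
    """Gradient diversity of Eq. (1): the sum of the squared L2 norms of the
    per-minibatch gradients divided by the squared L2 norm of their sum."""
    flat = [g.flatten() for g in grads]
    numerator = sum(g.pow(2).sum() for g in flat)
    denominator = torch.stack(flat).sum(dim=0).pow(2).sum()
    return (numerator / denominator).item()

# Toy check with hypothetical values: orthogonal gradients -> 1.0, identical -> 1/n.
print(gradient_diversity([torch.tensor([1., 0.]), torch.tensor([0., 1.])]))  # 1.0
print(gradient_diversity([torch.tensor([1., 0.]), torch.tensor([1., 0.])]))  # 0.5
```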
The proposed framework, MuPPET, enhances this concept by considering gradients between minibatches across epochs, and proposes the developed metric as a proxy for the amount of new information gained in each training step. Section 3 further expands on how gradient diversity is incorporated into the MuPPET algorithm.

3 METHODOLOGY

3.1 MULTILEVEL OPTIMISATION FOR TRAINING CNNS

Conventionally, the training process of a CNN can be expressed as in Eq. (2). Given a CNN model f parameterised by a set of weights w ∈ R^D, where D is the number of weights of f, training involves a search for weight values that minimise the task-specific empirical loss, Loss, on the target dataset. Typically, a fixed arithmetic precision is employed across the training algorithm, with FP32 currently being the de facto representation used by the deep learning community.

\min_{w^{(\mathrm{FP32})} \in \mathbb{R}^D} \mathrm{Loss}\big(f(w^{(\mathrm{FP32})})\big)   (2)

The proposed method follows a different approach by introducing a multilevel optimisation scheme (Migdalas et al., 2013) that leverages the performance gains of reduced-precision arithmetic. The single optimisation problem of Eq. (2) is transformed into a series of optimisation problems, with each one employing a different precision for computations but maintaining weight storage at FP32 precision. Under this scheme, an N-level formulation comprises N sequential optimisation problems to be solved, with each level corresponding to a "finer" model. Overall, this formulation adds a hierarchical structure to the training stage, with an increasing arithmetic precision across the hierarchy of optimisation problems. Starting from the N-th problem, the inputs, weights, and activations of the CNN model f are quantised with precision q_N, which is the lowest precision in the system and represents the coarsest version of the model. Each of the N levels progressively employs higher precision until the first level is reached, which corresponds to the original problem of Eq. (2). Formally, at the i-th level, the optimisation problem is formulated as

\min_{w^{(q_i)} \in V} \mathrm{Loss}\big(f(w^{(q_i)})\big) \quad \text{s.t.} \quad V = \left\{ w^{(q_i)} \in [\mathrm{LB}, \mathrm{UB}]^D \right\}   (3)

where LB and UB are the lower and upper bound of the representational range of precision q_i. The target CNN model f uses a set of weights quantised with precision q_i, and hence the solution of this optimisation problem can be interpreted as an approximation to the original problem of Eq. (2). To transition from one level to the next, the result of each level of optimisation is employed as a starting point for the next level, up to the final outermost optimisation that reduces to Eq. (2).

3.2 THE MUPPET ALGORITHM

Fig. 1 presents the process of training a CNN using the proposed algorithm. All figures in this paper are shown in Appendix B at a larger scale for enhanced readability. Within each epoch, MuPPET performs mixed-precision training where the weights are stored in an FP32 master copy and are quantised to the desired fixed-point precision on-the-fly. At epoch j, the computations for the forward and backward passes (F and B blocks respectively) are performed at the current quantised precision (q_j), and the activations as well as the gradients obtained from each layer are quantised by the quantiser module before being passed on to the next layer, or stored. After each minibatch, the full-precision master copy of the weights is updated using a quantised gradient matrix.
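The per-epoch procedure above can be summarised with the following PyTorch-style sketch. It is not the authors' implementation: `quantise` is a hypothetical helper standing in for the block floating-point quantiser of Section 3.2.1 (sketched below), and per-layer scale factors, batch normalisation, and the 32-bit accumulator are not modelled.

```python
import torch

def muppet_epoch(model, loader, optimiser, loss_fn, wordlength):
    """One epoch of MuPPET-style training: forward/backward at the current
    quantised precision, SGD update applied to the FP32 master copy."""
    last_grads = []
    for inputs, targets in loader:
        master = [p.detach().clone() for p in model.parameters()]  # FP32 master copy
        with torch.no_grad():
            for p in model.parameters():
                p.copy_(quantise(p, wordlength))          # quantised weights for this step
        loss = loss_fn(model(quantise(inputs, wordlength)), targets)
        optimiser.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p, m in zip(model.parameters(), master):
                if p.grad is not None:
                    p.grad.copy_(quantise(p.grad, wordlength))  # quantised gradient matrix
                p.copy_(m)                                      # restore the FP32 master copy
        optimiser.step()                                        # update the FP32 master copy
        last_grads = [p.grad.detach().clone() for p in model.parameters()
                      if p.grad is not None]
    return last_grads  # last minibatch's gradients, stored for the switching policy
```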
As discussed in Section 3.1, the quantisation level is gradually increased over the period of the training. In MuPPET, switching between these optimisation levels at the correct times is crucial in order not to compromise the final validation accuracy. In this respect, MuPPET introduces a precision switching policy based on an inter-epoch gradient diversity (Yin et al., 2018) metric that dictates when to switch to the next precision. Details of the switching policy are presented in Section 3.3.

3.2.1 QUANTISATION STRATEGY

In order to implement quantised training, a quantisation strategy needs to be defined. The proposed dynamic quantisation strategy utilises block floating-point arithmetic (also known as dynamic fixed-point), where each fixed-point number is represented as a pair of a WL_net-bit signed integer x and a scale factor s, such that the value is represented as x × 2^{-s}. During the forward and backward passes of the training process, the weights and feature maps are both quantised, and the multiplication operations are performed at the same low precision. The quantisation method employs a stochastic rounding methodology (Gupta et al., 2015). The accumulation stage of the matrix-multiply operation is performed in a 32-bit fixed-point value to prevent overflow on the targeted networks.¹ The result of this matrix multiplication is subsequently quantised to the target wordlength before being passed as input to the next layer. Following the block floating-point scheme, quantisation is performed such that each weight and feature map matrix in the network has a single scale factor shared by all values within the matrix. The quantisation configuration for the i-th level of optimisation and the l-th layer, q^i_l, and the full set of configurations, q^i, are given by the left- and right-hand side of Eq. (4) respectively.

q^i_l = \left\langle \mathrm{WL}_{\mathrm{net}},\, s^{\mathrm{weights}}_l,\, s^{\mathrm{act}}_l \right\rangle^i, \;\forall l \in [1, |L|] \quad \text{and} \quad q^i = \left\langle q^i_l \mid \forall l \in [1, |L|] \right\rangle   (4)

where |L| is the number of layers of the target network, WL_net is the fixed wordlength across the network, and s^weights_l and s^act_l are the scaling factors for the weights and activations respectively of the l-th layer for the i-th level of optimisation. As a result, for N levels, there are N distinct quantisation schemes; N − 1 of these schemes have varying fixed-point precisions, and the finest level of quantisation, q^1, is single-precision floating-point (FP32). The scaling factor for a matrix X is first calculated as shown in Eq. (5) and individual elements are quantised as in Eq. (6).

s^{\{\mathrm{weights,\,act}\}} = \left\lfloor \log_2 \left( \min\left( \frac{\mathrm{UB} + 0.5}{X^{\{\mathrm{weights,\,act}\}}_{\max}},\; \frac{\mathrm{LB} - 0.5}{X^{\{\mathrm{weights,\,act}\}}_{\min}} \right) \right) \right\rfloor   (5)

x^{\{\mathrm{weights,\,act}\}}_{\mathrm{quant}} = \left\lfloor x^{\{\mathrm{weights,\,act}\}} \cdot 2^{s^{\{\mathrm{weights,\,act}\}}} + \mathrm{Unif}(-0.5, 0.5) \right\rceil   (6)

where X^{\{\mathrm{weights,\,act}\}}_{\{\max,\,\min\}} is either the maximum or minimum value in the weights or feature maps matrix of the current layer, LB and UB are the lower and upper bound of the current wordlength WL_net, and Unif(a, b) represents sampling from the uniform distribution in the range [a, b]. Eq. (5) adds 0.5 and −0.5 to UB and LB respectively to ensure maximum utilisation of WL_net.

¹The accumulator wordlength is large enough to accommodate the current CNN models without overflow.
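A minimal sketch of the quantiser of Eqs. (5)-(6) is given below, written as emulation on floating-point hardware, as in the paper's evaluation. The guard against tensors that do not span both positive and negative values is an added assumption and is not part of Eq. (5).

```python
import torch

def quantise(x, wordlength):
    """Block floating-point quantisation with stochastic rounding (Eqs. (5)-(6)).
    A single scale factor is shared by the whole tensor; the value is returned
    dequantised, i.e. x_quant * 2^{-s}, to emulate fixed-point on FP hardware."""
    ub = 2.0 ** (wordlength - 1) - 1    # upper bound of the signed integer range
    lb = -(2.0 ** (wordlength - 1))     # lower bound of the signed integer range
    candidates = torch.stack([(ub + 0.5) / x.max(), (lb - 0.5) / x.min()])
    candidates = torch.where(candidates > 0, candidates,
                             torch.full_like(candidates, float("inf")))  # added guard
    s = torch.floor(torch.log2(candidates.min()))                        # Eq. (5)
    noise = torch.empty_like(x).uniform_(-0.5, 0.5)                      # stochastic rounding
    x_quant = torch.round(x * 2.0 ** s + noise).clamp(lb, ub)            # Eq. (6)
    return x_quant * 2.0 ** (-s)
```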
3.2.2 INFORMATION TRANSFER BETWEEN LEVELS

Employing multilevel training for CNNs requires an appropriate mechanism for transferring information between levels. To achieve this, the proposed optimiser maintains a master copy of the weights in full precision (FP32) throughout the optimisation levels. Similar to mixed-precision training (Micikevicius et al., 2018), at each level the SGD update step is performed by accumulating a fixed-point gradient value into the FP32 master copy of the weights. Starting from the coarsest quantisation level i = N, to transfer the solution from level i to level i − 1, the master copy is quantised using the quantisation scheme q_{i−1}. With this approach, the weights are maintained in FP32 and are quantised on-the-fly at run time in order to be utilised in each training step.

3.3 PRECISION SWITCHING POLICY

The metric used to decide when to switch between levels of quantisation is inspired by Yin et al. (2018) and based on the concept of gradient diversity (Eq. (1)). MuPPET computes \Delta_S(w) between gradients obtained across epochs as a proxy for the information that is obtained during the training process; the lower the diversity between the gradients, the less information this level of quantisation provides towards the training of the model. Therefore, the proposed method comprises a novel normalised inter-epoch version of gradient diversity, along with a run-time policy to determine the epochs at which to switch precision. The following policy is employed to determine when a precision switch is to be performed. For a network with layers L and a quantisation scheme q_i that was switched into at epoch e:

1. For each epoch j and each layer l ∈ L, the last minibatch's gradient, \nabla f^j_l(w), is stored.

2. After r (resolution) epochs, the inter-epoch gradient diversity at epoch j is

\Delta_S(w)_j = \frac{1}{|L|} \sum_{\forall l \in L} \frac{\sum_{k=j-r}^{j} \|\nabla f^k_l(w)\|_2^2}{\left\|\sum_{k=j-r}^{j} \nabla f^k_l(w)\right\|_2^2}   (7)

3. At an epoch j, given the set of gradient diversities S(j) = \{ \Delta_S(w)_i \;\forall\; e \leq i < j \}, the ratio p = \max S(j) \,/\, \Delta_S(w)_j is calculated.

4. An empirically determined decaying threshold

T = \alpha + \beta e^{-\lambda j}   (8)

is placed on the ratio p.

5. If p violates T more than γ times, a precision switch is triggered and S(j) = ∅.

As long as the gradients across epochs remain diverse, \Delta_S(w)_j (Eq. (7)) in the denominator of p sustains a high value and the value of p remains low. However, when the gradients across epochs become similar, \Delta_S(w)_j decreases and the value of p becomes larger. Generalisability across epochs is obtained as p accounts for the change in information relative to the maximum information available since the last precision change. Hence, the metric acknowledges the presence of temporal variations in the information provided by the gradients. Generalisability across networks and datasets is maintained as p measures a ratio. Consequently, the absolute values of the gradients, which could vary between networks and datasets, matter less. Overall, MuPPET employs the metric p as a mechanism to trigger a precision switch whenever p violates the threshold T more than γ times. The likelihood of observing r gradients across r epochs that have low gradient diversity, especially at early stages of training, is low. The intuition applied here is that when this does happen at a given precision, it may be an indication that information is being lost due to quantisation; this corresponds to a high p value, which argues for moving to a higher bitwidth.
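The policy can be sketched as follows; the helper names are hypothetical, and the state handling (storing the last minibatch's gradients and resetting S(j)) is kept outside the functions. `inter_epoch_diversity` implements Eq. (7) and `check_switch` implements steps 3-5 with the decaying threshold of Eq. (8), using the hyperparameter values reported in Section 3.3.1 below as defaults.

```python
import math
import torch

def inter_epoch_diversity(grad_history, r):
    """Eq. (7): gradient diversity over the last r+1 epochs' stored
    last-minibatch gradients, averaged over the |L| layers.
    grad_history[k][l] is the gradient of layer l at epoch k."""
    window = grad_history[-(r + 1):]
    diversity = 0.0
    for l in range(len(window[0])):
        layer = [epoch_grads[l].flatten() for epoch_grads in window]
        num = sum(g.pow(2).sum() for g in layer)
        den = torch.stack(layer).sum(dim=0).pow(2).sum()
        diversity += (num / den).item()
    return diversity / len(window[0])

def check_switch(diversities, epoch, violations, alpha=1.0, beta=1.5, lam=0.1, gamma=2):
    """Steps 3-5: p is the maximum diversity since the last switch divided by
    the current diversity, compared against T = alpha + beta * exp(-lam * epoch).
    Returns (switch_now, updated_violation_count)."""
    p = max(diversities[:-1]) / diversities[-1]   # assumes at least one earlier entry
    T = alpha + beta * math.exp(-lam * epoch)
    if p > T:
        violations += 1
    if violations > gamma:       # "more than gamma times", per step 5
        return True, 0
    return False, violations
```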
3.3.1 HYPERPARAMETERS

The hyperparameters of the proposed MuPPET algorithm are the following: 1) the values of α, β, and λ that define the decaying threshold of Eq. (8), 2) the number of threshold violations allowed before a precision change is triggered (γ), 3) the resolution r, 4) the set of precisions at which training is performed, and 5) the epochs at which the learning rate is changed. The values of α, β, λ, r, and γ were set at 1, 1.5, 0.1, 3, and 2 respectively after empirical cross-validation. These were tuned by running training on AlexNet and ResNet20 on the CIFAR-10 dataset. All MuPPET hyperparameters remain the same regardless of network or dataset. Regarding training hyperparameters, the batch size was increased from 128 to 256 when going from CIFAR-10 to ImageNet. All other training hyperparameters, including the learning rate, remained constant. An analysis of generalisability and the training hyperparameters used are presented in Section 4.1. The empirically-chosen quantised precisions at which training was performed were 8-, 12-, 14- and 16-bit fixed-point. Precisions below this did not result in any progress towards convergence for any network. Overall, MuPPET introduces a policy that decides at run time an appropriate point to switch between quantisation levels. After training at 16-bit fixed-point, the rest of the training is performed at FP32 until the desired validation accuracy is reached. Decaying the learning rate causes a finer exploration of the optimisation space, as does increasing the quantisation level. Therefore, the learning rate was kept constant during quantised training and was decayed only after switching to FP32.

4 EVALUATION OF MUPPET

4.1 GENERALISABILITY

The MuPPET framework was evaluated on its applicability across epochs, networks and datasets. Fig. 2 shows the value of the metric p over the epochs in blue, and the decaying threshold described in Eq. (8) in orange. The number of epochs for which training in each precision was performed is shown by the various overlay colours. The first violation is denoted by a red dot and the second violation is not seen as it occurs exactly at the point of switching. The graphs show that across various networks and datasets, the values of p stay relatively similar, backing the choice of a universal decaying factor. Furthermore, empirical results for CIFAR-10 indicated that changing from one fixed-point precision to another too early in the training process had a negative impact on the final validation accuracy. Using a decaying threshold ensures that the value of p needs to be much higher in the initial epochs to trigger a precision change, due to the volatility of p in early epochs of training.

4.2 PERFORMANCE EVALUATION

The accuracy results presented in this section utilised the proposed stochastic quantisation strategy. The methodology was developed using PyTorch. As the framework does not natively support low-precision implementations, all quantisation and computations corresponding to 8-, 12-, 14-, and 16-bit precisions were performed through emulation on floating-point hardware. All hyperparameters not specified below were left as PyTorch defaults. For all networks, an SGD optimiser was used with batch sizes of 128 on CIFAR-10 or 256 on ImageNet, momentum of 0.9 and weight decay of 1e-4. As a baseline, an FP32 model with identical hyperparameters (except for batch size) was trained. The baseline FP32 training was performed by training for 150 epochs and reducing the learning rate by a factor of 10 at epochs 50 and 100.
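For reference, the FP32 baseline recipe described above corresponds to a standard PyTorch setup along these lines (a sketch only, using ResNet18 and the 0.1 initial learning rate reported for it in the next paragraph; data loading and the training step are omitted).

```python
import torch
import torchvision

model = torchvision.models.resnet18(num_classes=1000)
optimiser = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# Drop the learning rate by 10x at epochs 50 and 100, as in the baseline schedule.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimiser, milestones=[50, 100], gamma=0.1)
for epoch in range(150):
    # train_one_epoch(model, train_loader, optimiser)  # omitted
    # validate(model, val_loader)                      # omitted
    scheduler.step()
```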
In order to achieve comparable final validation accuracy to the FP32 baseline, once MuPPET triggered a precision change out of 16-bit fixed-point, 45 training epochs at FP32 precision were performed. The learning rate was reduced by a factor of 10 every 15 FP32 training epochs. For AlexNet, ResNet18, ResNet20, and GoogLeNet, the initial learning rate was set to 0.01, 0.1, 0.1, and 0.001 respectively. The detailed breakdown of the ImageNet training runs with training and validation loss curves can be found in the Appendix A. Table 1 presents the achieved Top-1 validation accuracy of MuPPET and the FP32 baseline, together with the accuracy difference in percentage points (pp). As shown on the table, MuPPET is able to provide comparable Top-1 validation accuracy to standard FP32 training across both networks and datasets. Due to a sub-optimal training setup of GoogLeNet on ImageNet, the baseline and MuPPET training severely underperformed compared to the reported state-of-the-art works. Nevertheless, the results demonstrate the quality of training with MuPPET using identical hyperparameters. As a result, MuPPET’s performance demonstrates the effectiveness of the precision switching strategy in achieving significant acceleration of training time (Section 4.3) at negligible cost in accuracy by running many epochs at lower precision, particularly on very large datasets. 4.3 WALL-CLOCK TIME IMPROVEMENTS This section explores the gains in estimated wall-clock time of the current implementation of MuPPET (Current Impl.) with respect to baseline FP32 training, Mixed Precision by Micikevicius et al. (2018) and MuPPET’s ideal implementation (Table 2). For all performance results, the target platform was an NVIDIA RTX 2080 Ti GPU. At the moment, deep learning frameworks, such as PyTorch, do not provide native support for reduced-precision hardware. Consequently, the wall-clock times in Table 2 were estimated using a performance model developed with NVIDIA’s CUTLASS library (Kerr et al., 2018) for reduced-precision general matrix-multiplication (GEMM) employing the latest Turing architecture GPUs. The GEMMs that were accelerated were in the convolutional and fully-connected layers of each network. INT8 hardware was used to profile the 8-bit fixed-point computations, while FP16 hardware was used to profile 12-, 14-, and 16-bit fixed-point computations as well as Mixed Precision (Micikevicius et al., 2018) wall-clock time. CUTLASS (Kerr et al., 2018) natively implements bit-packing to capitalise on improved memory-bandwidth utilisation. The model for the current implementation is limited by the fact that frameworks force quantisation to happen to and from FP32. For the MuPPET (Ideal) scenario, the model assumes native hardware utilisation which would reduce the overhead by removing this restriction. As shown on Table 2, MuPPET consistently achieves 1.25-1.32× speedup over the FP32 baseline across the networks when targeting ImageNet on the given GPU. With respect to Mixed Precision, the proposed method outperforms it on AlexNet by 1.23× and delivers comparable performance for ResNet18 and GoogLeNet. Currently, the absence of native quantisation support, and hence the necessity to emulate quantisation and the associated overheads, is the limiting factor for MuPPET to achieve higher processing speed. In this respect, MuPPET run on native hardware would yield 1.05× and 1.48× speedup for ResNet18 and GoogLeNet respectively compared to Mixed Precision. 
As a result, MuPPET demonstrates consistently faster time-to-accuracy (Coleman et al., 2019) compared to Mixed Precision across the benchmarks. Additionally, while Mixed Precision has already reached its limit by using FP16 on FP16-native GPUs, the 8-, 12-, 14- and 16-bit fixed-point computations enabled by MuPPET leave space for further potential speedup when targeting next- and currentgeneration (Fowers et al., 2018) precision-optimised fixed-point platforms. Similar to the analysis in Section 4.2, Micikevicius et al. (2018) and Wang et al. (2018) compare their schemes to baseline FP32 training performed by them. The reported results demonstrate that their methods achieve similar accuracy results to our method by lying close to the respective FP32 training accuracy. As Wang et al. (2018) do not provide any results in terms of gains in wall-clock times and since they use custom FP8 hardware, their work could not be directly compared to our method. 4.4 PRECISION SWITCHING To evaluate the ability of MuPPET to effectively choose an epoch to switch precision at, AlexNet and ResNet20 were first trained using MuPPET on the CIFAR-100 dataset. The hyperparameters for MuPPET were kept the same across all runs. From the results it was noted that training at reduced precision and not switching at all causes a drop in validation accuracy of 1.4% and 1.3% for AlexNet and ResNet20 respectively, hence demonstrating the need to switch precisions when training at bit-widths as low as 8-bit fixed-point. To demonstrate the benefits of a precision switching methodology, two further sets of experiments were conducted on ResNet20 using CIFAR100 as depicted in Fig. 3. First, 34 training runs were performed (34 red dots in Fig. 3), where for each training four epochs along the standard training duration were randomly selected and used as the switching points. Second, the switching strategy MuPPET generated for AlexNet and GoogLeNet was applied to ResNet20 (2 blue dots in Fig. 3). Fig. 3 shows the best test accuracy achieved by each of the runs and the training time as estimated by our performance model described in Sec. 4.3. It shows that for a given time-budget, MuPPET runs (6 green dots) outperform on average all other experiment sets, demonstrating the need for a precision switching policy that is real-time and agnostic to network and dataset in order to achieve a good accuracy-to-training-time trade-off. 5 CONCLUSION This paper proposes MuPPET, a novel low-precision CNN training scheme that combines the use of fixed-point and floating-point representations to produce a network trained for FP32 inference. By introducing a precision-switching mechanism that decides at run time an appropriate transition point between different precision regimes, the proposed framework achieves Top-1 validation accuracies comparable to that achieved by state-of-the-art FP32 training regimes while delivering significant speedup in terms of training time. Quantitative evaluation demonstrates that MuPPET’s training strategy generalises across CNN architectures and datasets by adapting the training process to the target CNN-dataset pair during run time. Overall, MuPPET enables the utilisation of the lowprecision hardware units available on modern specialised processors, such as next-generation GPUs, FPGAs and TPUs, to yield improvements in training time and energy efficiency without impacting the resulting accuracy. 
Future work will focus on applying the proposed framework to the training of LSTMs, where the training process is more sensitive to gradient quantisation, as well as on the extension of MuPPET to include batch size and learning rate as part of its hyperparameters. Furthermore, we will explore improved quantisation techniques that could enable training convergence for bitwidths even lower than 8-bit fixed-point. A APPENDIX A The graphs in Fig. 4, 5 and 6 demonstrate both the training and validation loss of AlexNet, ResNet18 and GoogLeNet for MuPPET and FP32 runs on the ImageNet dataset. For each graph, the light gray lines indicate the point at which precision was switched in the MuPPET run. The green lines are used to show MuPPET behaviour and blue to show FP32 behaviour. Solid lines show validation loss and dashed lines show training loss. B APPENDIX B This section contains the larger versions of all the figures in the paper for enhanced clarity.
1. What is the main contribution of the paper regarding training strategies? 2. What are the strengths and weaknesses of the proposed Multi-Precision Policy Enforced Training (MUPPET) strategy? 3. Do you have any questions or concerns about the precision switching policy, its notations, and the presentation? 4. How does the proposed strategy compare to state-of-the-art methods in terms of advantages and disadvantages? 5. Are there any minor issues or suggestions for improvement in the paper?
Review
Review Summary: This paper proposes a training strategy called Multi-Precision Policy Enforced Training(MUPPET). This strategy aims to reduce training time by low-precision data representation and computations during the training stage. According to the gradient diversity, the authors introduce a precision-switching mechanism which chooses the best epoch to increase the precision. The validation accuracy and training time across several networks and datasets are shown in the experiments. However, the results are not superior enough compared with the state-of-the-art. My detailed comments are as follows. Positive points: 1. This paper proposes a new reduced-precision training scheme to speed up training by progressively increasing the precision of computations from 8-bit fixed-point to 32-bit floating-point. This scheme moves to reduced-precision fixed-point computations while updating an FP32 model in order to push the boundaries of reduced-precision training. 2. The authors propose a metric to decide when to switch the precision inspired by gradient diversity introduced by [1]. In this paper, the gradient diversity is enhanced by considering gradients across epochs instead of mini-batches. The proposed metric can be seen as a proxy for the amount of new information gained in each training step. Therefore, the metric can decide the most appropriate epoch at run time to increase the precision. 3. The proposed low-precision CNN training scheme is orthogonal and complementary to existing low-precision training techniques. Negative points: 1. The proposed approach does not match the description in this paper. The authors describe “This approach enables the design of a policy that can decide at run time the most appropriate quantization level for the training process”. In fact, this approach just decides which epoch to increase the quantization level while the levels of quantized precisions are fixed, rather than deciding the most appropriate quantization level. 2. The setting of quantized precision levels (8-, 12-, 14- and 16-bit precisions) is confusing. Please illustrate how to choose the number of quantized bit and the number of quantized precision levels. 3. The presentation of the precision switching policy is confusing and the notations are unclear. For example, in section 3.3, the ratio “p” needs more description because it is a key value in the policy, but lacks an explanation in this section. So please explain more about the motivation of ratio “p” in this section. In section 3.3, in step 5 of the proposed precision switching policy, the authors do not explain the meaning of “y”. 4. In figure 2, the precision switch is not triggered even though the value of p violates the threshold more than 2 times, which mismatches the description in section 3.3. 5. The proposed strategy has no obvious advantages. There are some scenes that the proposed strategy does not perform well. For example, the Top-1 validation accuracy on ImageNet of AlexNet and ResNet with MuPPET strategy is much lower than FP32 baseline. Compared with [2], the proposed method is more complex but not superior enough. 6. The authors do not show the training and validation curves. However, the training and validation curves are common used to show more details of the training process, such as in [2] and [3]. Please show and analyze the training and validation curves of the proposed scheme and the baseline. Minor issues: Some spelling and grammar mistakes. 
Reference: [1] Dong Yin, Ashwin Pananjady, Max Lam, Dimitris Papailiopoulos, Kannan Ramchandran, and Peter Bartlett. Gradient Diversity: a Key Ingredient for Scalable Distributed Learning. In 21st International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 1998–2007, 2018. [2] Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. Mixed Precision Training. In International Conference on Learning Representations (ICLR), 2018. [3] Suyog Gupta, Ankur Agrawal, Kailash Gopalakrishnan, and Pritish Narayanan. Deep Learning with Limited Numerical Precision. In 32nd International Conference on Machine Learning (ICML), pp. 1737–1746, 2015.
Title Multi-Precision Policy Enforced Training (MuPPET) : A precision-switching strategy for quantised fixed-point training of CNNs Abstract Large-scale convolutional neural networks (CNNs) suffer from very long training times, spanning from hours to weeks, limiting the productivity and experimentation of deep learning practitioners. As networks grow in size and complexity, training time is reduced through low-precision data representations and computations. However, in doing so the final accuracy suffers due to the problem of vanishing gradients. Existing state-of-the-art methods combat this issue by means of a mixed-precision approach utilising two different precision levels, FP32 (32-bit floating-point precision) and FP16/FP8 (16-/8-bit floating-point precision), leveraging the hardware support of recent GPU architectures for FP16 operations to obtain performance gains.This work pushes the boundary of quantised training by employing a multilevel optimisation approach that utilises multiple precisions including low-precision fixed-point representations. The novel training strategy, MuPPET, combines the use of multiple number representation regimes together with a precision-switching mechanism that decides at run time the transition point between precision regimes. Overall, the proposed strategy tailors the training process to the hardware-level capabilities of the utilised hardware architecture and yields improvements in training time and energy efficiency compared to state-ofthe-art approaches. Applying MuPPET on the training of AlexNet, ResNet18 and GoogLeNet on ImageNet (ILSVRC12) and targeting an NVIDIA Turing GPU, MuPPET achieves the same accuracy as standard full-precision training with an average training-time speedup of 1.28× across the networks. 1 INTRODUCTION Convolutional neural networks (CNNs) have demonstrated unprecedented accuracy in various machine learning tasks, from video understanding (Gan et al., 2015; He et al., 2018) to drone navigation (Loquercio et al., 2018; Kouris & Bouganis, 2018). To achieve such high levels of accuracy in inherently complex applications, current methodologies employ the design of large and complex CNN models (Szegedy et al., 2017; Huang et al., 2017) trained over large datasets (Deng et al., 2009; Lin et al., 2014). Nevertheless, the combination of large models and massive datasets results in long training times. This in turn leads to long turn-around times which limits the productivity of deep learning practitioners and prohibits wider experimentation. For instance, automatic tuning and search of neural architectures (Cai et al., 2018; Zhong et al., 2018) is a rapidly advancing area where accelerated training enables improving the produced networks. To counteract these long turn-around times, substantial research effort has been invested in hyperparameter tuning for the acceleration of training, with a particular focus on batch size and content. One line of work maximises memory and hardware utilisation by changing the batch size, in order to perform CNN training specific prefetching, scheduling and dependency improvement (Chen et al., 2019; Rhu et al., 2016). Other works focus on altering batch size or reconstructing minibatches to improve the convergence rate while sustaining high hardware utilisation (Devarakonda et al., 2017; Johnson & Guestrin, 2018; Peng et al., 2019), demonstrating up to 6.25× training speedup. A number of studies have focused on the use of reduced-precision training schemes. 
Reducedprecision arithmetic involves the utilisation of data formats that have smaller wordlengths than the conventional 32-bit floating-point (FP32) representation and is an approach for co-optimising pro- cessing speed, memory footprint and communication overhead. Existing literature can be categorised into works that use reduced precision to accelerate only the training stage while targeting an FP32 model, and those that produce networks with quantised weights. Regarding the former, Courbariaux et al. (2015) and Gupta et al. (2015) utilise dynamic quantisation and stochastic rounding respectively as a means to combat the accuracy loss due to quantisation. Nevertheless, the effectiveness of the proposed schemes have only been demonstrated on smallscale datasets such as CIFAR-10 and MNIST, and on a limited set of networks. Furthermore, the range of quantisation levels that has been explored varies greatly, with a number of works attacking the problem by focusing on mild quantisation levels such as half-precision floating-point (FP16) (Micikevicius et al., 2018), while others focus on lower precisions such as 8-bit floating-point (FP8) (Wang et al., 2018). Finally, quantisation has also been used as a means of reducing the memory and communication overhead in distributed training (De Sa et al., 2015; 2017; Alistarh et al., 2017). At the same time, the characteristics of modern CNN workloads and the trend towards quantised models have led to an emergence of specialised hardware processors, with support for low-precision arithmetic at the hardware level. From custom designs such as Google’s TPUs (Jouppi et al., 2017) and Microsoft’s FPGA-based Brainwave system (Fowers et al., 2018) to commodity devices such as NVIDIA’s Turing GPUs, existing platforms offer native support for reduced-precision data types including 16-bit floating-point (FP16), and 8- (INT8) and 4-bit (INT4) fixed-point, providing increased parallelism for lower bitwidths. Although these platforms have been mainly designed for the inference stage, the low-precision hardware offers significant opportunities for accelerating the time-consuming training stage. In this respect, there is an emerging need to provide training algorithms that can leverage these existing hardware optimisations and provide higher training speed. This work tackles the field of reduced-precision training at an algorithmic level. Independently of the number of quantisation levels chosen, or how extreme the quantisation is, this work proposes a metric that estimates the amount of information each new training step obtains for a given quantisation level, by capturing the diversity of the computed gradients across epochs. This enables the design of a policy that, given a set of quantisation levels, decides at run time appropriate points to increase the precision of the training process at that current instant without impacting the achieved test accuracy compared to training in FP32. Due to its agnostic nature, it remains orthogonal and complementary to existing low-precision training schemes. Furthermore, by pushing the precision below the 16-bit bitwidth of existing state-of-the-art techniques, the proposed method is able to leverage the lowprecision capabilities of modern processing systems to yield training speedups without penalising the resulting accuracy, significantly improving the time-to-accuracy trade-off. 
2 BACKGROUND AND RELATED WORK The state-of-the-art method in training in reduced precision is mixed-precision training (Micikevicius et al., 2018). The authors propose to employ low-precision FP16 computations in the training stage of high-precision CNNs that perform inference in FP32. Along the training phase, the algorithm maintains a high-precision FP32 copy of the weights of the network, known as a master copy. At each minibatch, the inputs and weights are quantised to FP16 with all computations of the forward and backward pass performed in FP16, yielding memory footprint and runtime savings. Under this scheme, each stochastic gradient descent (SGD) update step entails accumulating FP16 gradients into the FP32 master copy of the weights, with this process performed iteratively throughout the training of the network. Micikevicius et al. (2018) evaluate their scheme over a set of state-of-theart models on ImageNet, and show that mixed-precision training with FP16 computations achieves comparable accuracy to standard FP32 training. Wang et al. (2018) also presented a method to train an FP32 model using 8-bit floating-point (FP8). The authors propose a hand-crafted FP8 data type, together with a chunk-based computation technique, and employ strategies such as stochastic rounding to alleviate the accuracy loss due to training at reduced precision. For AlexNet, ResNet18 and ResNet50 on ImageNet, Wang et al. (2018) demonstrates comparable accuracy to FP32 training while performing computations in FP8. Additionally the works presented in (Zhou et al., 2016; Chen et al., 2017) approach the problem of reduced-precision training employing fixed-point computations. FxpNet (Chen et al., 2017) was only evaluated on CIFAR-10, failing to demonstrate performance on more complex datasets such as ImageNet. DoReFa-net (Zhou et al., 2016) was tested on ImageNet but only ran on AlexNet missing out on state-of-the-art networks such as GoogLeNet and ResNet. All related works focus on accelerating the training of an FP32 model through reduced-precision computations. At the hardware level, 8-bit fixed-point multiplication uses 18.5× less energy and 27.5× less area with up to 4× lower multiplication times than FP32 (Sze et al., 2017). Consequently, this work attempts to push the boundaries of reduced-precision training by moving to reducedprecision fixed-point computations while updating an FP32 model. Preliminary tests (Sec. 4.4 for details) demonstrated that training solely in 8-bit fixed-point results in a significant degradation of validation accuracy compared to full FP32 training. This work aims to counteract this degradation by progressively increasing the precision of computations throughout training in an online manner determined by the proposed metric inspired by gradient diversity (Yin et al., 2018). Additionally by operating in an online fashion, MuPPET tailors the training process to best suit the particular network-dataset pair at each stage of the training process. Gradient diversity was introduced by Yin et al. (2018) as a metric of measuring the dissimilarity between sets of gradients that correspond to different minibatches. The gradient diversity of a set of gradients is defined as ∆S(w) = ∑n i=1 ||∇fi(w)||22 || ∑n i=1∇fi(w)||22 = ∑n i=1 ||∇fi(w)||22∑n i=1 ||∇fi(w)||22 + ∑ i 6=j〈∇fi(w),∇fj(w)〉 (1) where∇fi(w) represents the gradient of weights w for minibatch i. The key point to note in Eq. (1) is that the denominator contains the inner product between two gradients from different minibatches. 
Thus, orthogonal gradients would result in high gradient diversity, while similar gradients would result in low gradient diversity. The proposed framework, MuPPET, enhances this concept by considering gradients between minibatches across epochs and proposes the developed metric as a proxy for the amount of new information gained in each training step. Section 3 further expands on how gradient diversity is incorporated into the MuPPET algorithm. 3 METHODOLOGY 3.1 MULTILEVEL OPTIMISATION FOR TRAINING CNNS Conventionally, the training process of a CNN can be expressed as in Eq. (2). Given a CNN model f parameterised by a set of weights w ∈ RD, whereD is the number of weights of f , training involves a search for weight values that minimise the task-specific empirical loss, Loss, on the target dataset. Typically, a fixed arithmetic precision is employed across the training algorithm with FP32 currently being the de facto representation used by the deep learning community. min w(FP32)∈RD Loss(f(w(FP32))) (2) The proposed method follows a different approach by introducing a multilevel optimisation scheme (Migdalas et al., 2013) that leverages the performance gains of reduced-precision arithmetic. The single optimisation problem of Eq. (2) is transformed into a series of optimisation problems with each one employing different precision for computations, but maintaining weights storage at FP32 precision. Under this scheme, an N -level formulation comprises N sequential optimisation problems to be solved, with each level corresponding to a “finer” model. Overall, this formulation adds a hierarchical structure to the training stage, with an increasing arithmetic precision across the hierarchy of optimisation problems. Starting from the N -th problem, the inputs, weights, and activations of the CNN model f are quantised with precision qN , which is the lowest precision in the system and represents the coarsest version of the model. Each of the N levels progressively employs higher precision until the first level is reached, which corresponds to the original problem of Eq. (2). Formally, at the i-th level, the optimisation problem is formulated as min w(qi)∈V Loss(f(w(q i))) s.t. V = { w(q i) ∈ [LB,UB]D } (3) where LB and UB are the lower and upper bound in the representational range of precision qi. The target CNN model f uses a set of weights quantised with precision qi and hence the solution of this optimisation problem can be interpreted as an approximation to the original problem of Eq. (2). To transition from one level to the next, the result of each level of optimisation is employed as a starting point for the next level, up to the final outermost optimisation that reduces to Eq. (2). 3.2 THE MUPPET ALGORITHM Fig. 1 presents the process of training a CNN using the proposed algorithm. All figures in this paper are shown in the Appendix B at a larger scale for enhanced readability. Within each epoch, MuPPET performs mixed-precision training where the weights are stored in an FP32 master copy and are quantised to the desired fixed-point precision on-the-fly. At epoch j, the computations for the forward and backward passes (F and B blocks respectively) are performed at the current quantised precision (qj) and the ac- tivations as well as the gradients obtained from each layer are quantised by the quantiser module before being passed on to the next layer, or stored. After each minibatch, the full-precision master copy of the weights is updated using a quantised gradient matrix. 
As discussed in Section 3.1, the quantisation level is gradually increased over the period of the training. In MuPPET, switching between these optimisation levels at the correct times is crucial in order not to compromise the final validation accuracy. In this respect, MuPPET introduces a precision switching policy based on an inter-epoch gradient diversity (Yin et al., 2018) metric that dictates when to switch to the next precision. Details of the switching policy are presented in Section 3.3. 3.2.1 QUANTISATION STRATEGY In order to implement quantised training, a quantisation strategy needs to be defined. The proposed dynamic quantisation strategy utilises block floating-point arithmetic (also known as dynamic fixedpoint), where each fixed-point number is represented as a pair of an WLnet-bit signed integer x and a scale factor s, such that the value is represented as x× 2−s. During the forward and backward passes of the training process, the weights and feature maps are both quantised, and the multiplication operations are performed at the same low precision. The quantisation method employs a stochastic rounding methodology (Gupta et al., 2015). The accumulation stage of the matrix-multiply operation is accumulated into a 32-bit fixed-point value to prevent overflow on the targeted networks.1 The result of this matrix multiplication is subsequently quantised to the target wordlength before being passed as input to the next layer. Following the block floating-point scheme, quantisation is performed such that each weight and feature map matrix in the network has a single scale factor shared by all values within the matrix. The quantisation configuration for the i-th level of optimisation and the l-th layer, qil , and the full set of configurations, qi, are given by left- and right-hand side of Eq. (4) respectively. qil = 〈 WLnet, sweightsl , s act l 〉i , ∀l ∈ [1, |L|] and qi = 〈 qil | ∀l ∈ [1, |L|] 〉 (4) where |L| is the number of layers of the target network, WLnet is the fixed wordlength across the network, sweightsl and s act l are the scaling factors for the weights and activations respectively, of the lth layer for the i-th level of optimisation. As a result, for N levels, there are N distinct quantisation schemes; N − 1 of these schemes are with varying fixed-point precisions, and the finest level of quantisation, q1, is single-precision floating-point (FP32). The scaling factor for a matrix X is first calculated as shown in Eq. (5) and individual elements are quantised as in Eq. (6). s{weights, act} = ⌊ log2 ( min ( UB + 0.5 X {weights, act} max , LB− 0.5 X {weights, act} min ))⌋ (5) x {weights, act} quant = ⌊ x{weights, act} · 2s {weights, act} + Unif (−0.5, 0.5) ⌉ (6) 1The accumulator wordlength is large enough to accommodate the current CNN models, without overflow. where X{weights, act}{max, min} is either the maximum or minimum value in the weights or feature maps matrix of the current layer, LB and UB are the lower and upper bound of the current wordlength WLnet, and Unif(a,b) represents sampling from the uniform distribution in the range [a,b]. Eq. (5) adds 0.5 and −0.5 to UB and LB respectively to ensure maximum utilisation of WLnet. 3.2.2 INFORMATION TRANSFER BETWEEN LEVELS Employing multilevel training for CNNs requires an appropriate mechanism for transferring information between levels. To achieve this, the proposed optimiser maintains a master copy of the weights in full precision (FP32) throughout the optimisation levels. 
Similar to mixed-precision training (Micikevicius et al., 2018), at each level the SGD update step is performed by accumulating a fixed-point gradient value into the FP32 master copy of the weights. Starting from the coarsest quantisation level i = N , to transfer the solution from level i to level i − 1, the master copy is quantised using the quantisation scheme qi−1. With this approach, the weights are maintained in FP32 and are quantised on-the-fly during run time in order to be utilised in each training step. 3.3 PRECISION SWITCHING POLICY The metric to decide when to switch between levels of quantisation is inspired by Yin et al. (2018) and based on the concept of gradient diversity (Eq. (1)). MuPPET computes ∆S(w) between gradients obtained across epochs as a proxy to measure the information that is obtained during the training process; the lower the diversity between the gradients, the less information this level of quantisation provides towards the training of the model. Therefore, the proposed method comprises a novel normalised inter-epoch version of the gradient diversity along with a run-time policy to determine the epochs to switch precision. The following policy is employed to determine when a precision switch is to be performed. For a network with layers L and a quantisation scheme qi that was switched into at epoch e: 1. For each epoch j and each layer l ∈ L, the last minibatch’s gradient,∇f jl (w), is stored. 2. After r (resolution) number of epochs, the inter-epoch gradient diversity at epoch j is ∆S(w)j = ∑ ∀l∈L ∑j k=j−r ||∇f k l (w)|| 2 2 || ∑j k=j−r ∇f k l (w)|| 2 2 |L| (7) 3. At an epoch j, given a set of gradient diversities S(j) = { ∆S(w)i ∀ e ≤ i < j } , the ratio p = maxS(j)∆S(w)j is calculated. 4. An empirically determined decaying threshold T = α+βe−λj (8) is placed on the ratio p. 5. If the p violates T more than γ times, a precision switch is triggered and S(j) = ∅. As long as the gradients across epochs remain diverse, ∆S(w)j (Eq.(7)) at the denominator of p sustains a high value and the value of p remains low. However, when the gradients across epochs become similar, ∆S(w)j decreases and the value of p becomes larger. Generalisability across epochs is obtained as p accounts for the change in information relative to the maximum information available since the last precision change. Hence, the metric acknowledges the presence of temporal variations in information provided by the gradients. Generalisability across networks and datasets is maintained as p measures a ratio. Consequently, the absolute values of gradients which could vary between networks and datasets, matter less. Overall, MuPPET employs the metric p as a mechanism to trigger a precision switch whenever p violates threshold T more than γ times. The likelihood of observing r gradients across r epochs that have low gradient diversity, especially at early stages of training is low. The intuition applied here is that when this does happen at a given precision, it may be an indication that information is being lost due to quantisation and thus corresponds to a high p value, which argues to move to a higher bitwidth. 3.3.1 HYPERPARAMETERS The hyperparameters for the proposed MuPPET algorithm are the following: 1) values of α, β, and λ that define the decaying threshold from Eq. 
(8), 2) the number of threshold violations allowed before the precision change is triggered (γ), 3) the resolution r, 4) the set of precisions at which training is performed, and 5) the epochs at which the learning rate is changed. The values of α, β, λ, r, and γ were set at 1, 1.5, 0.1, 3, and 2 respectively after empirical cross-validation. These were tuned by running training on AlexNet and ResNet20 on the CIFAR-10 dataset. All MuPPET hyperparameters remain the same regardless of network or dataset. Regarding training hyperparameters, batch size was increased from 128 to 256 going from CIFAR-10 to ImageNet. All other training hyperparameters, including learning rate remained constant. Analysis of generalisability and the training hyperparameters used are presented in Section 4.1. The empirically-chosen quantised precisions at which training was performed were 8-, 12-, 14- and 16-bit fixed-point. Precisions below this did not result in any progress towards convergence for any network. Overall, MuPPET introduces a policy that allows to decide at run time an appropriate point to switch between quantisation levels. After training at 16-bit fixed-point, the rest of the training is performed at FP32 until the desired validation accuracy is reached. Decaying the learning rate causes a finer exploration of the optimisation space as does increasing the quantisation level. Therefore, the learning rate was kept constant during quantised training and was decayed only after switching to FP32. 4 EVALUATION OF MUPPET 4.1 GENERALISABILITY The MuPPET framework was evaluated on its applicability across epochs, networks and datasets. Fig. 2 shows the value of the metric p over the epochs in blue, and the decaying threshold described in Eq. (8) in orange. The number of epochs for which training in each precision was performed is shown by the various overlay colours. The first violation is denoted by a red dot and the second violation is not seen as it occurs exactly at the point of switching. The graphs show that across various networks and datasets, the values of p stay relatively similar, backing the choice of a universal decaying factor. Furthermore, empirical results for CIFAR-10 indicated that changing from one fixed-point precision to another too early in the training process had a negative impact on the final validation accuracy. Using a decaying threshold ensures that the value of p needs to be much higher in the initial epochs to trigger a precision change due to the volatility of p in early epochs of training. 4.2 PERFORMANCE EVALUATION The accuracy results presented in this section utilised the proposed stochastic quantisation strategy. The methodology was developed using PyTorch. As the framework does not natively support lowprecision implementations, all quantisation and computations corresponding to 8-, 12-, 14-, and 16- bit precisions were performed through emulation on floating-point hardware. All hyperparameters not specified below were left as PyTorch defaults. For all networks, an SGD optimiser was used with batch sizes 128 on CIFAR-10 or 256 on ImageNet, momentum of 0.9 and weight decay of 1e−4. As a baseline, an FP32 model with identical hyperparameters (except for batch size) was trained. The baseline FP32 training was performed by training for 150 epochs and reducing the learning rate by a factor of 10 at epochs 50 and 100. 
In order to achieve comparable final validation accuracy to the FP32 baseline, once MuPPET triggered a precision change out of 16-bit fixed-point, 45 training epochs at FP32 precision were performed. The learning rate was reduced by a factor of 10 every 15 FP32 training epochs. For AlexNet, ResNet18, ResNet20, and GoogLeNet, the initial learning rate was set to 0.01, 0.1, 0.1, and 0.001 respectively. The detailed breakdown of the ImageNet training runs with training and validation loss curves can be found in the Appendix A. Table 1 presents the achieved Top-1 validation accuracy of MuPPET and the FP32 baseline, together with the accuracy difference in percentage points (pp). As shown on the table, MuPPET is able to provide comparable Top-1 validation accuracy to standard FP32 training across both networks and datasets. Due to a sub-optimal training setup of GoogLeNet on ImageNet, the baseline and MuPPET training severely underperformed compared to the reported state-of-the-art works. Nevertheless, the results demonstrate the quality of training with MuPPET using identical hyperparameters. As a result, MuPPET’s performance demonstrates the effectiveness of the precision switching strategy in achieving significant acceleration of training time (Section 4.3) at negligible cost in accuracy by running many epochs at lower precision, particularly on very large datasets. 4.3 WALL-CLOCK TIME IMPROVEMENTS This section explores the gains in estimated wall-clock time of the current implementation of MuPPET (Current Impl.) with respect to baseline FP32 training, Mixed Precision by Micikevicius et al. (2018) and MuPPET’s ideal implementation (Table 2). For all performance results, the target platform was an NVIDIA RTX 2080 Ti GPU. At the moment, deep learning frameworks, such as PyTorch, do not provide native support for reduced-precision hardware. Consequently, the wall-clock times in Table 2 were estimated using a performance model developed with NVIDIA’s CUTLASS library (Kerr et al., 2018) for reduced-precision general matrix-multiplication (GEMM) employing the latest Turing architecture GPUs. The GEMMs that were accelerated were in the convolutional and fully-connected layers of each network. INT8 hardware was used to profile the 8-bit fixed-point computations, while FP16 hardware was used to profile 12-, 14-, and 16-bit fixed-point computations as well as Mixed Precision (Micikevicius et al., 2018) wall-clock time. CUTLASS (Kerr et al., 2018) natively implements bit-packing to capitalise on improved memory-bandwidth utilisation. The model for the current implementation is limited by the fact that frameworks force quantisation to happen to and from FP32. For the MuPPET (Ideal) scenario, the model assumes native hardware utilisation which would reduce the overhead by removing this restriction. As shown on Table 2, MuPPET consistently achieves 1.25-1.32× speedup over the FP32 baseline across the networks when targeting ImageNet on the given GPU. With respect to Mixed Precision, the proposed method outperforms it on AlexNet by 1.23× and delivers comparable performance for ResNet18 and GoogLeNet. Currently, the absence of native quantisation support, and hence the necessity to emulate quantisation and the associated overheads, is the limiting factor for MuPPET to achieve higher processing speed. In this respect, MuPPET run on native hardware would yield 1.05× and 1.48× speedup for ResNet18 and GoogLeNet respectively compared to Mixed Precision. 
As a result, MuPPET demonstrates consistently faster time-to-accuracy (Coleman et al., 2019) compared to Mixed Precision across the benchmarks. Additionally, while Mixed Precision has already reached its limit by using FP16 on FP16-native GPUs, the 8-, 12-, 14- and 16-bit fixed-point computations enabled by MuPPET leave space for further potential speedup when targeting next- and current-generation (Fowers et al., 2018) precision-optimised fixed-point platforms. Similar to the analysis in Section 4.2, Micikevicius et al. (2018) and Wang et al. (2018) compare their schemes to FP32 baselines that they trained themselves. The reported results show that their methods, like ours, achieve accuracy close to their respective FP32 baselines. As Wang et al. (2018) do not provide any results in terms of gains in wall-clock time and since they use custom FP8 hardware, their work could not be directly compared to our method. 4.4 PRECISION SWITCHING To evaluate the ability of MuPPET to effectively choose an epoch at which to switch precision, AlexNet and ResNet20 were first trained using MuPPET on the CIFAR-100 dataset. The hyperparameters for MuPPET were kept the same across all runs. From the results it was noted that training at reduced precision and not switching at all causes a drop in validation accuracy of 1.4% and 1.3% for AlexNet and ResNet20 respectively, hence demonstrating the need to switch precisions when training at bit-widths as low as 8-bit fixed-point. To demonstrate the benefits of a precision-switching methodology, two further sets of experiments were conducted on ResNet20 using CIFAR-100, as depicted in Fig. 3. First, 34 training runs were performed (34 red dots in Fig. 3), where for each run four epochs along the standard training duration were randomly selected and used as the switching points. Second, the switching strategy MuPPET generated for AlexNet and GoogLeNet was applied to ResNet20 (2 blue dots in Fig. 3). Fig. 3 shows the best test accuracy achieved by each of the runs and the training time as estimated by our performance model described in Sec. 4.3. It shows that for a given time budget, MuPPET runs (6 green dots) outperform on average all other experiment sets, demonstrating the need for a precision-switching policy that operates at run time and is agnostic to network and dataset in order to achieve a good accuracy-to-training-time trade-off. 5 CONCLUSION This paper proposes MuPPET, a novel low-precision CNN training scheme that combines the use of fixed-point and floating-point representations to produce a network trained for FP32 inference. By introducing a precision-switching mechanism that decides at run time an appropriate transition point between different precision regimes, the proposed framework achieves Top-1 validation accuracies comparable to those achieved by state-of-the-art FP32 training regimes while delivering significant speedup in terms of training time. Quantitative evaluation demonstrates that MuPPET's training strategy generalises across CNN architectures and datasets by adapting the training process to the target CNN-dataset pair at run time. Overall, MuPPET enables the utilisation of the low-precision hardware units available on modern specialised processors, such as next-generation GPUs, FPGAs and TPUs, to yield improvements in training time and energy efficiency without impacting the resulting accuracy.
Future work will focus on applying the proposed framework to the training of LSTMs, where the training process is more sensitive to gradient quantisation, as well as on the extension of MuPPET to include batch size and learning rate as part of its hyperparameters. Furthermore, we will explore improved quantisation techniques that could enable training convergence for bitwidths even lower than 8-bit fixed-point. A APPENDIX A The graphs in Fig. 4, 5 and 6 demonstrate both the training and validation loss of AlexNet, ResNet18 and GoogLeNet for MuPPET and FP32 runs on the ImageNet dataset. For each graph, the light gray lines indicate the point at which precision was switched in the MuPPET run. The green lines are used to show MuPPET behaviour and blue to show FP32 behaviour. Solid lines show validation loss and dashed lines show training loss. B APPENDIX B This section contains the larger versions of all the figures in the paper for enhanced clarity.
1. What is the reviewer's overall opinion of the paper? 2. What does the reviewer find unclear or heuristic in the paper's presentation? 3. What is the reviewer's concern regarding the motivation behind the switching mechanism? 4. How does the reviewer feel about the explanation of the algorithm's details? 5. Are there any typos or errors in the paper that the reviewer noticed? 6. Does the reviewer have any issues with the justification for using AlexNet?
Review
Review Overall an interesting paper, though I wish a more detailed presentation of the reasoning behind the algorithm had been provided. As it stands it feels a bit heuristic. In particular I don't understand the motivation behind the switching mechanism. Basically it says that if the gradients are co-aligned between epochs it means there is not much to learn anymore!? Why? Intuitively, if the gradients went to 0 or became very small, maybe you would want to increase precision. Or if you have high variance you could argue that the expected gradient would be 0 and hence you are not really making progress, i.e. you are just moving left-right. But if all gradients agree on a moving direction, why is that a bad thing? I know the heuristic is borrowed from a different work, but since it feels like such an integral part of MuPPET I think you should explain it better. I would also like a few more details about the algorithm. When you say you look at the diversity of the gradients over the epochs, is this the batch gradient? There are some small typos (e.g. FP23 instead of FP32). I find the justification for AlexNet to be ad hoc (it switched at the wrong time, but that allowed it to take more advantage of computation at low precision, hence it was faster). The switching mechanism should only care about when the gradients are no longer informative, not how much compute you are wasting.
ICLR
Title A Probabilistic Formulation of Unsupervised Text Style Transfer Abstract We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.1
1 INTRODUCTION Text sequence transduction systems convert a given text sequence from one domain to another. These techniques can be applied to a wide range of natural language processing applications such as machine translation (Bahdanau et al., 2015), summarization (Rush et al., 2015), and dialogue response generation (Zhao et al., 2017). In many cases, however, parallel corpora for the task at hand are scarce. Therefore, unsupervised sequence transduction methods that require only non-parallel data are appealing and have been receiving growing attention (Bannard & Callison-Burch, 2005; Ravi & Knight, 2011; Mizukami et al., 2015; Shen et al., 2017; Lample et al., 2018; 2019). This trend is most pronounced in the space of text style transfer tasks where parallel data is particularly challenging to obtain (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018). Style transfer has historically referred to sequence transduction problems that modify superficial properties of text – i.e. style rather than content.2 We focus on a standard suite of style transfer tasks, including formality transfer (Rao & Tetreault, 2018), author imitation (Xu et al., 2012), word decipherment (Shen et al., 2017), sentiment transfer (Shen et al., 2017), and related language translation (Pourdamghani & Knight, 2017). General unsupervised translation has not typically been considered style transfer, but for the purpose of comparison we also conduct evaluation on this task (Lample et al., 2017). ∗Equal Contribution. 1Code and data are available at https://github.com/cindyxinyiwang/deep-latent-sequence-model. 2Notably, some tasks we evaluate on do change content to some degree, such as sentiment transfer, but for conciseness we use the term “style transfer” nonetheless. Recent work on unsupervised text style transfer mostly employs non-generative or non-probabilistic modeling approaches. For example, Shen et al. (2017) and Yang et al. (2018) design adversarial discriminators to shape their unsupervised objective – an approach that can be effective, but often introduces training instability. Other work focuses on directly designing unsupervised training objectives by incorporating intuitive loss terms (e.g. backtranslation loss), and demonstrates state-of-the-art performance on unsupervised machine translation (Lample et al., 2018; Artetxe et al., 2019) and style transfer (Lample et al., 2019). However, the space of possible unsupervised objectives is extremely large and the underlying modeling assumptions defined by each objective can only be reasoned about indirectly. As a result, the process of designing such systems is often heuristic. In contrast, probabilistic models (e.g. the noisy channel model (Shannon, 1948)) define assumptions about data more explicitly and allow us to reason about these assumptions during system design. Further, the corresponding objectives are determined naturally by principles of probabilistic inference, reducing the need for empirical search directly in the space of possible objectives. That said, classical probabilistic models for unsupervised sequence transduction (e.g. the HMM or semi-HMM) typically enforce overly strong independence assumptions about data to make exact inference tractable (Knight et al., 2006; Ravi & Knight, 2011; Pourdamghani & Knight, 2017).
This has restricted their development and caused their performance to lag behind unsupervised neural objectives on complex tasks. Luckily, in recent years, powerful variational approximation techniques have made it more practical to train probabilistic models without strong independence assumptions (Miao & Blunsom, 2016; Yin et al., 2018). Inspired by this, we take a new approach to unsupervised style transfer. We directly define a generative probabilistic model that treats a non-parallel corpus in two domains as a partially observed parallel corpus. Our model makes few independence assumptions and its true posterior is intractable. However, we show that by using amortized variational inference (Kingma & Welling, 2013), a principled probabilistic technique, a natural unsupervised objective falls out of our modeling approach that has many connections with past work, yet is different from all past work in specific ways. In experiments across a suite of unsupervised text style transfer tasks, we find that the natural objective of our model actually outperforms all manually defined unsupervised objectives from past work, supporting the notion that probabilistic principles can be a useful guide even in deep neural systems. Further, in the case of unsupervised machine translation, our model matches the current state-of-the-art non-generative approach. 2 UNSUPERVISED TEXT STYLE TRANSFER We first overview text style transfer, which aims to transfer a text (typically a single sentence or a short paragraph – for simplicity we refer to simply “sentences” below) from one domain to another while preserving underlying content. For example, formality transfer (Rao & Tetreault, 2018) is the task of transforming the tone of text from informal to formal without changing its content. Other examples include sentiment transfer (Shen et al., 2017), word decipherment (Knight et al., 2006), and author imitation (Xu et al., 2012). If parallel examples were available from each domain (i.e. the training data is a bitext consisting of pairs of sentences from each domain), supervised techniques could be used to perform style transfer (e.g. attentional Seq2Seq (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017)). However, for most style transfer problems, only non-parallel corpora (one corpus from each domain) can be easily collected. Thus, work on style transfer typically focuses on the more difficult unsupervised setting where systems must learn from non-parallel data alone. The model we propose treats an observed non-parallel text corpus as a partially observed parallel corpus. Thus, we introduce notation for both observed text inputs and those that we will treat as latent variables. Specifically, we let X = {x(1), x(2), · · · , x(m)} represent observed data from domain D1, while we let Y = {y(m+1), y(m+2), · · · , y(n)} represent observed data from domain D2. Corresponding indices represent parallel sentences. Thus, none of the observed sentences share indices. In our model, we introduce latent sentences to complete the parallel corpus. Specifically, X̄ = {x̄(m+1), x̄(m+2), · · · , x̄(n)} represents the set of latent parallel sentences in D1, while Ȳ = {ȳ(1), ȳ(2), · · · , ȳ(m)} represents the set of latent parallel sentences in D2. Then the goal of unsupervised text transduction is to infer these latent variables conditioned the observed non-parallel corpora; that is, to learn p(ȳ|x) and p(x̄|y). 
3 THE DEEP LATENT SEQUENCE MODEL First we present our generative model of bitext, which we refer to as a deep latent sequence model. We then describe unsupervised learning and inference techniques for this model class. 3.1 MODEL STRUCTURE Directly modeling p(ȳ|x) and p(x̄|y) in the unsupervised setting is difficult because we never directly observe parallel data. Instead, we propose a generative model of the complete data that defines a joint likelihood, p(X, X̄, Y, Ȳ ). In order to perform text transduction, the unobserved halves can be treated as latent variables: they will be marginalized out during learning and inferred via posterior inference at test time. Our model assumes that each observed sentence is generated from an unobserved parallel sentence in the opposite domain, as depicted in Figure 1. Specifically, each sentence x(i) in domain D1 is generated as follows: First, a latent sentence ȳ(i) in domain D2 is sampled from a prior, pD2(ȳ(i)). Then, x(i) is sampled conditioned on ȳ(i) from a transduction model, p(x(i)|ȳ(i)). Similarly, each observed sentence y(j) in domain D2 is generated conditioned on a latent sentence, x̄(j), in domain D1 via the opposite transduction model, p(y(j)|x̄(j)), and prior, pD1(x̄(j)). We let θx|ȳ and θy|x̄ represent the parameters of the two transduction distributions respectively. We assume the prior distributions are pretrained on the observed data in their respective domains and therefore omit their parameters for simplicity of notation. Together, this gives the following joint likelihood:

p(X, \bar{X}, Y, \bar{Y}; \theta_{x|\bar{y}}, \theta_{y|\bar{x}}) = \Big( \prod_{i=1}^{m} p\big(x^{(i)} \mid \bar{y}^{(i)}; \theta_{x|\bar{y}}\big) \, p_{D_2}\big(\bar{y}^{(i)}\big) \Big) \Big( \prod_{j=m+1}^{n} p\big(y^{(j)} \mid \bar{x}^{(j)}; \theta_{y|\bar{x}}\big) \, p_{D_1}\big(\bar{x}^{(j)}\big) \Big) \quad (1)

The log marginal likelihood of the data, which we will approximate during training, is:

\log p(X, Y; \theta_{x|\bar{y}}, \theta_{y|\bar{x}}) = \log \sum_{\bar{X}} \sum_{\bar{Y}} p(X, \bar{X}, Y, \bar{Y}; \theta_{x|\bar{y}}, \theta_{y|\bar{x}}) \quad (2)

Note that if the two transduction models share no parameters, the training problems for each observed domain are independent. Critically, we introduce parameter sharing through our variational inference procedure, which we describe in more detail in Section 3.2. Architecture: Since we would like to be able to model a variety of transfer tasks, we choose a parameterization for our transduction distributions that makes no independence assumptions. Specifically, we employ an encoder-decoder architecture based on the standard attentional Seq2Seq model which has been shown to be successful across various tasks (Bahdanau et al., 2015; Rush et al., 2015). Similarly, our prior distributions for each domain are parameterized as recurrent language models which, again, make no independence assumptions. In contrast, traditional unsupervised generative sequence models typically make strong independence assumptions to enable exact inference (e.g. the HMM makes a Markov assumption on the latent sequence and emissions are one-to-one). Our model is more flexible, but exact inference via dynamic programming will be intractable. We address this problem in the next section. 3.2 LEARNING Ideally, learning should directly optimize the log data likelihood, which is the marginal of our model shown in Eq. 2. However, due to our model’s neural parameterization which does not factorize, computing the data likelihood cannot be accomplished using dynamic programming as can be done with simpler models like the HMM.
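To make the generative story above concrete, the following minimal sketch scores a single observed D1 sentence x against one candidate latent D2 sentence ȳ under Eq. (1). The handles lm_prior_d2 and seq2seq_x_given_y are hypothetical stand-ins for the pretrained recurrent language-model prior and the attentional encoder-decoder transduction model, both assumed to expose a per-sentence log_prob method; the point of the sketch is that the marginal in Eq. (2) would require summing this quantity over every possible latent sentence, which is why a variational treatment is needed.

```python
def complete_data_log_likelihood(x_tokens, y_bar_tokens, seq2seq_x_given_y, lm_prior_d2):
    """Log of one factor in Eq. (1): log p(x | y_bar) + log p_D2(y_bar).

    `seq2seq_x_given_y` and `lm_prior_d2` are hypothetical model handles, assumed to
    return scalar per-sentence log-probabilities.
    """
    log_p_x_given_ybar = seq2seq_x_given_y.log_prob(x_tokens, condition=y_bar_tokens)
    log_prior_ybar = lm_prior_d2.log_prob(y_bar_tokens)
    return log_p_x_given_ybar + log_prior_ybar

# Eq. (2) would sum exp(complete_data_log_likelihood(...)) over *all* candidate latent
# sentences y_bar, an intractable sum over the space of sequences, which motivates the
# amortized variational inference described next.
```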
To overcome the intractability of computing the true data likelihood, we adopt amortized variational inference (Kingma & Welling, 2013) in order to derive a surrogate objective for learning, the evidence lower bound (ELBO) on log marginal likelihood3:

\log p(X, Y; \theta_{x|\bar{y}}, \theta_{y|\bar{x}}) \geq \mathcal{L}_{\text{ELBO}}(X, Y; \theta_{x|\bar{y}}, \theta_{y|\bar{x}}, \phi_{\bar{x}|y}, \phi_{\bar{y}|x}) = \sum_{i} \Big[ \mathbb{E}_{q(\bar{y}|x^{(i)}; \phi_{\bar{y}|x})}\big[\log p(x^{(i)} \mid \bar{y}; \theta_{x|\bar{y}})\big] - D_{\mathrm{KL}}\big(q(\bar{y} \mid x^{(i)}; \phi_{\bar{y}|x}) \,\|\, p_{D_2}(\bar{y})\big) \Big] + \sum_{j} \Big[ \underbrace{\mathbb{E}_{q(\bar{x}|y^{(j)}; \phi_{\bar{x}|y})}\big[\log p(y^{(j)} \mid \bar{x}; \theta_{y|\bar{x}})\big]}_{\text{Reconstruction likelihood}} - \underbrace{D_{\mathrm{KL}}\big(q(\bar{x} \mid y^{(j)}; \phi_{\bar{x}|y}) \,\|\, p_{D_1}(\bar{x})\big)}_{\text{KL regularizer}} \Big] \quad (3)

The surrogate objective introduces q(ȳ|x(i);φȳ|x) and q(x̄|y(j);φx̄|y), which represent two separate inference network distributions that approximate the model’s true posteriors, p(ȳ|x(i); θx|ȳ) and p(x̄|y(j); θy|x̄), respectively. Learning operates by jointly optimizing the lower bound over both variational and model parameters. Once trained, the variational posterior distributions can be used directly for style transfer. The KL terms in Eq. 3, that appear naturally in the ELBO objective, can be intuitively viewed as regularizers that use the language model priors to bias the induced sentences towards the desired domains. Amortized variational techniques have been most commonly applied to continuous latent variables, as in the case of the variational autoencoder (VAE) (Kingma & Welling, 2013). Here, we use this approach for inference over discrete sequences, which has been shown to be effective in related work on a semi-supervised task (Miao & Blunsom, 2016). Inference Network and Parameter Sharing: Note that the approximate posterior on one domain aims to learn the reverse style transfer distribution, which is exactly the goal of the generative distribution in the opposite domain. For example, the inference network q(ȳ|x(i);φȳ|x) and the generative distribution p(y|x̄(i); θy|x̄) both aim to transform D1 to D2. Therefore, we use the same architecture for each inference network as used in the transduction models, and tie their parameters: φx̄|y = θx|ȳ, φȳ|x = θy|x̄. This means we learn only two encoder-decoders overall – which are parameterized by θx|ȳ and θy|x̄ respectively – to represent two directions of transfer. In addition to reducing the number of learnable parameters, this parameter tying couples the learning problems for both domains and allows us to jointly learn from the full data. Moreover, inspired by recent work that builds a universal Seq2Seq model to translate between different language pairs (Johnson et al., 2017), we introduce further parameter tying between the two directions of transduction: the same encoder is employed for both x and y, and a domain embedding c is provided to the same decoder to specify the transfer direction, as shown in Figure 2. Ablation analysis in Section 5.3 suggests that parameter sharing is important to achieve good performance. Approximating Gradients of ELBO: The reconstruction and KL terms in Eq. 3 still involve intractable expectations due to the marginalization over the latent sequence, thus we need to approximate their gradients.

3Note that in practice, we add a weight λ (the same to both domains) to the KL term in ELBO since the regularization strength from the pretrained language model varies depending on the datasets, training data size, or language model structures. Such reweighting has proven necessary in previous work that is trained with ELBO (Bowman et al., 2016; Miao & Blunsom, 2016; Yin et al., 2018).
Gumbel-softmax (Jang et al., 2017) and REINFORCE (Sutton et al., 2000) are often used as stochastic gradient estimators in the discrete case. Since the latent text variables have an extremely large domain, we find that REINFORCE-based gradient estimates result in high variance. Thus, we use the Gumbel-softmax straight-through estimator to backpropagate gradients from the KL terms.4 However, we find that approximating gradients of the reconstruction loss is much more challenging – both the Gumbel-softmax estimator and REINFORCE are unable to outperform a simple stop-gradient method that does not back-propagate the gradient of the latent sequence to the inference network. This confirms a similar observation in previous work on unsupervised machine translation (Lample et al., 2018). Therefore, we use greedy decoding without recording gradients to approximate the reconstruction term.5 Note that the inference networks still receive gradients from the prior through the KL term, and their parameters are shared with the decoders which do receive gradients from reconstruction. We consider this to be the best empirical compromise at the moment. Initialization. Good initialization is often necessary for successful optimization of unsupervised learning objectives. In preliminary experiments, we find that the encoder-decoder structure has difficulty generating realistic sentences during the initial stages of training, which usually results in a disastrous local optimum. This is mainly because the encoder-decoder is initialized randomly and there is no direct training signal to specify the desired latent sequence in the unsupervised setting. Therefore, we apply a self-reconstruction loss Lrec at the initial epochs of training. We denote the output the encoder as e(·) and the decoder distribution as pdec, then Lrec = −α · ∑ i [pdec(e(x (i), cx)]− α · ∑ j [pdec(e(y (j), cy)], (4) α decays from 1.0 to 0.0 linearly in the first k epochs. k is a tunable parameter and usually less than 3 in all our experiments. 4 CONNECTION TO RELATED WORK Our probabilistic formulation can be connected with recent advances in unsupervised text transduction methods. For example, back translation loss (Sennrich et al., 2016) plays an important role in recent unsupervised machine translation (Artetxe et al., 2018; Lample et al., 2018; Artetxe et al., 2019) and unsupervised style transfer systems (Lample et al., 2019). In order to incorporate back translation loss the source language x is translated to the target language y to form a pseudo-parallel corpus, then a translation model from y to x can be learned on this pseudo bitext just as in supervised setting. While back translation was often explained as a data augmentation technique, in our probabilistic formulation it appears naturally with the ELBO objective as the reconstruction loss term. Some previous work has incorporated a pretrained language models into neural semi-supervised or unsupervised objectives. He et al. (2016) uses the log likelihood of a pretrained language model as the reward to update a supervised machine translation system with policy gradient. Artetxe et al. (2019) utilize a similar idea for unsupervised machine translation. Yang et al. (2018) employed a similar approach, but interpret the LM as an adversary, training the generator to fool the LM. 
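As a concrete illustration of the gradient approximations just described (greedy decoding with stopped gradients for the reconstruction term, and a single straight-through Gumbel-softmax sample for the KL term against the language-model prior), the following is a rough single-direction training-step sketch. All module handles (infer_y_given_x, decode_x_given_y, lm_prior_d2) and their methods are hypothetical stand-ins rather than the authors' code; the parameter tying between inference and transduction networks is assumed to be set up at model construction time, and the single-sample KL term here is exactly the quantity expanded in Eq. (5) below.

```python
import torch


def train_step_d1(x, infer_y_given_x, decode_x_given_y, lm_prior_d2, kl_weight):
    """One illustrative training step for an observed D1 sentence x."""
    # 1) Reconstruction term: greedy-decode the latent D2 sentence with gradients
    #    stopped, so the inference network receives no reconstruction gradient.
    with torch.no_grad():
        y_bar = infer_y_given_x.greedy_decode(x)           # hypothetical helper
    recon = -decode_x_given_y.log_prob(x, condition=y_bar)  # negative log-likelihood

    # 2) KL term: one straight-through Gumbel-softmax sample keeps the inference
    #    network differentiable with respect to the prior and entropy terms.
    y_soft, log_q = infer_y_given_x.gumbel_softmax_sample(x)  # hypothetical helper
    kl = log_q - lm_prior_d2.log_prob_soft(y_soft)  # single-sample estimate of -H[q] - E_q[log p_D2]

    # Weighted objective for this direction; the D2 direction is symmetric.
    loss = recon + kl_weight * kl
    return loss
```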
We show how our ELBO objective is connected with these more heuristic LM regularizers by expanding the KL loss term (assume x is observed): DKL(q(ȳ|x)||pD2(ȳ)) = −Hq − Eq[log pD2(ȳ)], (5) Note that the loss used in previous work does not include the negative entropy term, −Hq. Our objective results in this additional “regularizer”, the negative entropy of the transduction distribution, −Hq. Intuitively, −Hq helps avoid a peaked transduction distribution, preventing the transduction 4We use one sample to approximate the expectations. 5We compare greedy and sampling decoding in Section 5.3. from constantly generating similar sentences to satisfy the language model. In experiments we will show that this additional regularization is important and helps bypass bad local optima and improve performance. These important differences with past work suggest that a probabilistic view of the unsupervised sequence transduction may provide helpful guidance in determining effective training objectives. 5 EXPERIMENTS We test our model on five style transfer tasks: sentiment transfer, word substitution decipherment, formality transfer, author imitation, and related language translation. For completeness, we also evaluate on the task of general unsupervised machine translation using standard benchmarks. We compare with the unsupervised machine translation model (UNMT) which recently demonstrated state-of-the-art performance on transfer tasks such as sentiment and gender transfer (Lample et al., 2019).6 To validate the effect of the negative entropy term in the KL loss term Eq. 5, we remove it and train the model with a back-translation loss plus a language model negative log likelihood loss (which we denote as BT+NLL) as an ablation baseline. For each task, we also include strong baseline numbers from related work if available. For our method we select the model with the best validation ELBO, and for UNMT or BT+NLL we select the model with the best back-translation loss. Complete model configurations and hyperparameters can be found in Appendix A.1. 5.1 DATASETS AND EXPERIMENT SETUP Word Substitution Decipherment. Word decipherment aims to uncover the plain text behind a corpus that was enciphered via word substitution where word in the vocabulary is mapped to a unique type in a cipher dictionary (Dou & Knight, 2012; Shen et al., 2017; Yang et al., 2018). In our formulation, the model is presented with a non-parallel corpus of English plaintext and the ciphertext. We use the data in (Yang et al., 2018) which provides 200K sentences from each domain. While previous work (Shen et al., 2017; Yang et al., 2018) controls the difficulty of this task by varying the percentage of words that are ciphered, we directly evaluate on the most difficult version of this task – 100% of the words are enciphered (i.e. no vocabulary sharing in the two domains). We select the model with the best unsupervised reconstruction loss, and evaluate with BLEU score on the test set which contains 100K parallel sentences. Results are shown in Table 2. Sentiment Transfer. Sentiment transfer is a task of paraphrasing a sentence with a different sentiment while preserving the original content. Evaluation of sentiment transfer is difficult and is still an open research problem (Mir et al., 2019). Evaluation focuses on three aspects: attribute control, content preservation, and fluency. A successful system needs to perform well with respect to all three aspects. 
We follow prior work by using three automatic metrics (Yang et al., 2018; Lample et al., 2019): classification accuracy, self-BLEU (BLEU of the output with the original sentence as the reference), and the perplexity (PPL) of each system’s output under an external language model. We pretrain a convolutional classifier (Kim, 2014) to assess classification accuracy, and use an LSTM language model pretrained on each domain to compute the PPL of system outputs. We use the Yelp reviews dataset collected by Shen et al. (2017) which contains 250K negative sentences and 380K positive sentences. We also use a small test set that has 1000 human-annotated parallel sentences introduced in Li et al. (2018). We denote the positive sentiment as domain D1 and the negative sentiment as domain D2. We use Self-BLEU and BLEU to represent the BLEU score of the output against the original sentence and the reference respectively. Results are shown in Table 1. Formality Transfer. Next, we consider a harder task of modifying the formality of a sequence. We use the GYAFC dataset (Rao & Tetreault, 2018), which contains formal and informal sentences from two different domains. In this paper, we use the Entertainment and Music domain, which has about 52K training sentences, 5K development sentences, and 2.5K test sentences. This dataset actually contains parallel data between formal and informal sentences, which we use only for evaluation. We follow the evaluation of sentiment transfer task and test models on three axes. Since the test set is 6The model they used is slightly different from the original model of Lample et al. (2018) in certain details – e.g. the addition of a pooling layer after attention. We re-implement their model in our codebase for fair comparison and verify that our re-implementation achieves performance competitive with the original paper. a parallel corpus, we only compute reference BLEU and ignore self-BLEU. We use D1 to denote formal text, and D2 to denote informal text. Results are shown in Table 1. Author Imitation. Author imitation is the task of paraphrasing a sentence to match another author’s style. The dataset we use is a collection of Shakespeare’s plays translated line by line into modern English. It was collected by Xu et al. (2012)7 and used in prior work on supervised style transfer (Jhamtani et al., 2017). This is a parallel corpus and thus we follow the setting in the formality transfer task. We use D1 to denote modern English, and D2 to denote Shakespeare-style English. Results are shown in Table 1. Related Language Translation. Next, we test our method on a challenging related language translation task (Pourdamghani & Knight, 2017; Yang et al., 2018). This task is a natural test bed for unsupervised sequence transduction since the goal is to preserve the meaning of the source sentence while rewriting it into the target language. For our experiments, we choose Bosnian (bs) and Serbian (sr) as the related language pairs. We follow Yang et al. (2018) to report BLEU-1 score on this task since BLEU-4 score is close to zero. Results are shown in Table 2. Unsupervised MT. In order to draw connections with a related work on general unsupervised machine translation, we also evaluate on the WMT’16 German English translation task. This task is substantially more difficult than the style transfer tasks considered so far. 
We compare with the state-of-the-art UNMT system using the existing implementation from the XLM codebase,8 and implement our approach in the same framework with XLM initialization for fair comparison. We train both systems on 5M non-parallel sentences from each language. Results are shown in Table 2. In Tables 1 we also list the PPL of the test set under the external LM for both the source and target domain. PPL of system outputs should be compared to PPL of the test set itself because extremely low PPL often indicates that the generated sentences are short or trivial. 5.2 RESULTS Tables 1 and 2 demonstrate some general trends. First, UNMT is able to outperform other prior methods in unsupervised text style transfer, such as (Yang et al., 2018; Hu et al., 2017; Shen et al., 2017). The performance improvements of UNMT indicate that flexible and powerful 7https://github.com/tokestermw/tensorflow-shakespeare 8https://github.com/facebookresearch/XLM architectures are crucial (prior methods generally do not have an attention mechanism). Second, our model achieves comparable classification accuracy to UNMT but outperforms it in all style transfer tasks in terms of the reference-BLEU, which is the most important metric since it directly measures the quality of the final generations against gold parallel data. This indicates that our method is both effective and consistent across many different tasks. Finally, the BT+NLL baseline is sometimes quite competitive, which indicates that the addition of a language model alone can be beneficial. However, our method consistently outperforms the simple BT+NLL method, which indicates the effectiveness of the additional entropy regularizer in Eq. 5 that is the byproduct of our probabilistic formulation. Next, we examine the PPL of the system outputs under pretrained domain LMs, which should be evaluated in comparison with the PPL of the test set itself. For both the sentiment transfer and the formality transfer tasks in Table 1, BT+NLL achieves extremely low PPL, lower than the PPL of the test corpus in the target domain. After a close examination of the output, we find that it contains many repeated and overly simple outputs. For example, the system generates many examples of “I love this place” when transferring negative to positive sentiment (see Appendix A.3 for examples). It is not surprising that such a trivial output has low perplexity, high accuracy, and low BLEU score. On the other hand, our system obtains reasonably competitive PPL, and our approach achieves the highest accuracy and higher BLEU score than the UNMT baseline. 5.3 FURTHER ABLATIONS AND ANALYSIS Parameter Sharing. We also conducted an experiment on the word substitution decipherment task, where we remove parameter sharing (as explained in Section 3.2) between two directions of transduction distributions, and optimize two encoder-decoder instead. We found that the model only obtained an extremely low BLEU score and failed to generate any meaningful outputs. Performance vs. Domain Divergence. Figure 3 plots the relative improvement of our method over UNMT with respect to accuracy of a naive Bayes’ classifier trained to predict the domain of test sentences. Tasks with high classification accuracy likely have more divergent domains. We can see that for decipherment and en-de translation, where the domains have different vocabularies and thus are easily distinguished, our method yields a smaller gain over UNMT. 
This likely indicates that the (discrimination) regularization effect of the LM priors is less important or necessary when the two domains are very different. Why does the proposed model outperform UNMT? Finally, we examine in detail the output of our model and UNMT for the author imitation task. We pick this task because the reference outputs for the test set are provided, aiding analysis. Examples shown in Table 3 demonstrate that UNMT tends to make overly large changes to the source so that the original meaning is lost, while our method is better at preserving the content of the source sentence. Next, we quantitatively examine the outputs from UNMT and our method by comparing the F1 measure of words bucketed by their syntactic tags. We use the open-sourced compare-mt tool (Neubig et al., 2019), and the results are shown in Figure 4. Our system outperforms UNMT in all word categories. In particular, our system is much better at generating nouns, which likely leads to better content preservation.

Figure 4: Word F1 score by POS tag.

Table 3: Examples for the author imitation task (Shakespeare to Modern)
Source: Not to his father's . | Reference: Not to his father's house . | UNMT: Not to his brother . | Ours: Not to his father's house .
Source: Send thy man away . | Reference: Send your man away . | UNMT: Send an excellent word . | Ours: Send your man away .
Source: Why should you fall into so deep an O ? | Reference: Why should you fall into so deep a moan ? | UNMT: Why should you carry so nicely , but have your legs ? | Ours: Why should you fall into so deep a sin ?

Greedy vs. Sample-based Gradient Approximation. In our experiments, we use greedy decoding from the inference network to approximate the expectation required by ELBO, which is a biased estimator. The main purpose of this approach is to reduce the variance of the gradient estimator during training, especially in the early stages when the variance of sample-based approaches is quite high. As an ablation experiment on the sentiment transfer task, we compare greedy and sample-based gradient approximations in terms of both train and test ELBO, as well as task performance corresponding to the best test ELBO. After the model is fully trained, we find that the sample-based approximation has low variance. With a single sample, the standard deviation of the ELBO is less than 0.3 across 10 different test repetitions. All final reported ELBO values are computed with this approach, regardless of whether the greedy approximation was used during training. The reported ELBO values are the evidence lower bound per word. Results are shown in Table 4, where the sampling-based training underperforms on both ELBO and task evaluations. 5.4 COMPARISON OF GRADIENT PROPAGATION METHODS As noted above, to stabilize the training process, we stop gradients from propagating to the inference network from the reconstruction loss. Does this approach indeed better optimize the actual probabilistic objective (i.e. ELBO) or only indirectly lead to improved task evaluations? In this section we use sentiment transfer as an example task to compare different methods for propagating gradients and evaluate both ELBO and task evaluations. Specifically, we compare three different methods: • Stop Gradient: The gradients from reconstruction loss are not propagated to the inference network. This is the method we use in all previous experiments.
• Gumbel Softmax (Jang et al., 2017): Gradients from the reconstruction loss are propagated to the inference network with the straight-through Gumbel estimator. • REINFORCE (Sutton et al., 2000): Gradients from reconstruction loss are propagated to the inference network with ELBO as a reward function. This method has been used in previous work for semi-supervised sequence generation (Miao & Blunsom, 2016; Yin et al., 2018), but often suffers from instability issues. We report the train and test ELBO along with task evaluations in Table 5, and plot the learning curves on validation set in Figure 5.9 While being much simpler, we show that the stop-gradient trick produces superior ELBO over Gumbel Softmax and REINFORCE. This result suggests that stopping gradient helps better optimize the likelihood objective under our probabilistic formulation in comparison with other optimization techniques that propagate gradients, which is counter-intuitive. A likely explanation is that as a gradient estimator, while clearly biased, stop-gradient has substantially reduced variance. In comparison with other techniques that offer reduced bias but extremely high variance when applied to our model class (which involves discrete sequences as latent variables), stop-gradient actually leads to better optimization of our objective because it achieves better balance of bias and variance overall. 9We remove REINFORCE from this figure since it is very difficult to stabilize training and obtain reasonable results (e.g. the ELBO value is much worse than others in Table 5) 6 CONCLUSION We propose a probabilistic generative forumalation that unites past work on unsupervised text style transfer. We show that this probabilistic formulation provides a different way to reason about unsupervised objectives in this domain. Our model leads to substantial improvements on five text style transfer tasks, yielding bigger gains when the styles considered are more difficult to distinguish. ACKNOWLEDGEMENT The work of Junxian He and Xinyi Wang is supported by the DARPA GAILA project (award HR00111990063) and the Tang Family Foundation respectively. The authors would like to thank Zichao Yang for helpful feedback about the project. A APPENDIX A.1 MODEL CONFIGURATIONS. We adopt the following attentional encoder-decoder architecture for UNMT, BT+NLL, and our method across all the experiments: • We use word embeddings of size 128. • We use 1 layer LSTM with hidden size of 512 as both the encoder and decoder. • We apply dropout to the readout states before softmax with a rate of 0.3. • Following Lample et al. (2019), we add a max pooling operation over the encoder hidden states before feeding it to the decoder. Intuitively the pooling window size would control how much information is preserved during transduction. A window size of 1 is equivalent to standard attention mechanism, and a large window size corresponds to no attention. See Appendix A.2 for how to select the window size. • There is a noise function for UNMT baseline in its denoising autoencoder loss (Lample et al., 2017; 2019), which is critical for its success. We use the default noise function and noise hyperparameters in Lample et al. (2017) when running the UNMT model. For BT+NLL and our method we found that adding the extra noise into the self-reconstruction loss (Eq. 4) is only helpful when the two domains are relatively divergent (decipherment and related language translation tasks) where the language models play a less important role. 
Therefore, we add the default noise from UNMT to Eq. 4 for decipherment and related language translation tasks only, and do not use any noise for sentiment, author imitation, and formality tasks. A.2 HYPERPARAMETER TUNING. We vary pooling windows size as {1, 5}, the decaying patience hyperparameter k for selfreconstruction loss (Eq. 4) as {1, 2, 3}. For the baseliens UNMT and BT+NLL, we also try the option of not annealing the self-reconstruction loss at all as in the unsupervised machine translation task (Lample et al., 2018). We vary the weight λ for the NLL term (BT+NLL) or the KL term (our method) as {0.001, 0.01, 0.03, 0.05, 0.1}. A.3 SENTIMENT TRANSFER EXAMPLE OUTPUTS We list some examples of the sentiment transfer task in Table 6. Notably, the BT+NLL method tends to produce extremely short and simple sentences. A.4 REPETITIVE EXAMPLES OF BT+NLL In Section 5 we mentioned that the baseline BT+NLL has a low perplexity for some tasks because it tends to generate overly simple and repetitive sentences. From Table 1 we see that two representative tasks are sentiment transfer and formatliy transfer. In Appendix A.3 we have demonstrated some examples for sentiment transfer, next we show some repetitive samples of BT+NLL in Table 7.
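For convenience, the architecture settings in A.1 and the search ranges in A.2 can be summarised as a single configuration sketch. The dictionary below simply restates those values; the field names themselves are illustrative and not the authors' code.

```python
# Illustrative consolidation of Appendix A.1/A.2 (field names are ours).
MODEL_CONFIG = {
    "word_embedding_dim": 128,
    "encoder": {"type": "LSTM", "layers": 1, "hidden_size": 512},
    "decoder": {"type": "LSTM", "layers": 1, "hidden_size": 512},
    "readout_dropout": 0.3,
    "encoder_max_pooling": True,  # pooling over encoder hidden states, window searched below
}

SEARCH_SPACE = {
    "pooling_window": [1, 5],
    "self_reconstruction_decay_epochs_k": [1, 2, 3],
    "kl_or_nll_weight_lambda": [0.001, 0.01, 0.03, 0.05, 0.1],
}
```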
1. What is the main contribution of the paper regarding sequence-to-sequence transfer? 2. What are the concerns regarding the gap between the probabilistic formulation and experimental implementation? 3. How does the reviewer assess the similarity between the proposed approach and prior works in unsupervised neural machine translation? 4. What are the strengths and weaknesses of the experimental setup and results? 5. How does the reviewer view the overall quality and impact of the paper?
Review
Review The main contribution of this paper is a principled probabilistic framework for unsupervised sequence-to-sequence transfer (text to text in particular). However, I believe there is a large disconnect between the probabilistic formulation written in Section 3 and what's actually happening experimentally in Section 5. It is not clear whether the model is *actually* optimizing an ELBO, because the gradients from the sequence reconstruction loss are not backpropagated to the inference network, as explained in the paragraph on Approximating Gradients of ELBO. Moreover, this restriction makes the authors' method almost the same as the one used for unsupervised neural machine translation by Lample et al. 2017 and Artetxe et al. 2017. I would like to see a more detailed analysis from the authors on how far the performance of the Gumbel-softmax and REINFORCE estimators is from the simple stop-gradient estimator used in the experiments. In terms of experimental setup, I like that the authors considered a large suite of experiments across various tasks. Although the evaluation metrics on text style transfer tasks like sentiment transfer, formality transfer, and author imitation are in line with previous work, ideally human evaluation should be done to truly see how well each method performs. On unsupervised machine translation, the authors show a large improvement on Serbian-Bosnian translation. I am a bit skeptical since, as I wrote above, the proposed method is very similar to previously proposed unsupervised neural machine translation approaches and it is not clear why we are seeing such a large gain of 5 BLEU points. Overall I think it is a well written paper with a large experimental suite, although I am skeptical of the actual connection between the probabilistic formulation and what's actually happening in practice. ================================================ Update: I have raised the score from 3 to 6.
ICLR
Title A Probabilistic Formulation of Unsupervised Text Style Transfer Abstract We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.1 N/A We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. 
Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.1 1 INTRODUCTION Text sequence transduction systems convert a given text sequence from one domain to another. These techniques can be applied to a wide range of natural language processing applications such as machine translation (Bahdanau et al., 2015), summarization (Rush et al., 2015), and dialogue response generation (Zhao et al., 2017). In many cases, however, parallel corpora for the task at hand are scarce. Therefore, unsupervised sequence transduction methods that require only non-parallel data are appealing and have been receiving growing attention (Bannard & Callison-Burch, 2005; Ravi & Knight, 2011; Mizukami et al., 2015; Shen et al., 2017; Lample et al., 2018; 2019). This trend is most pronounced in the space of text style transfer tasks where parallel data is particularly challenging to obtain (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018). Style transfer has historically referred to sequence transduction problems that modify superficial properties of text – i.e. style rather than content.2 We focus on a standard suite of style transfer tasks, including formality transfer (Rao & Tetreault, 2018), author imitation (Xu et al., 2012), word decipherment (Shen et al., 2017), sentiment transfer (Shen et al., 2017), and related language translation (Pourdamghani & Knight, 2017). General unsupervised translation has not typically been considered style transfer, but for the purpose of comparison we also conduct evaluation on this task (Lample et al., 2017). ∗Equal Contribution. 1Code and data are available at https://github.com/cindyxinyiwang/deep-latent-sequence-model. 2Notably, some tasks we evaluate on do change content to some degree, such as sentiment transfer, but for conciseness we use the term “style transfer” nonetheless. ar X iv :2 00 2. 03 91 2v 1 [ cs .C L ] 1 0 Fe b 20 20 Recent work on unsupervised text style transfer mostly employs non-generative or non-probabilistic modeling approaches. For example, Shen et al. (2017) and Yang et al. (2018) design adversarial discriminators to shape their unsupervised objective – an approach that can be effective, but often introduces training instability. Other work focuses on directly designing unsupervised training objectives by incorporating intuitive loss terms (e.g. backtranslation loss), and demonstrates state-ofthe-art performance on unsupervised machine translation (Lample et al., 2018; Artetxe et al., 2019) and style transfer (Lample et al., 2019). However, the space of possible unsupervised objectives is extremely large and the underlying modeling assumptions defined by each objective can only be reasoned about indirectly. As a result, the process of designing such systems is often heuristic. In contrast, probabilistic models (e.g. the noisy channel model (Shannon, 1948)) define assumptions about data more explicitly and allow us to reason about these assumptions during system design. Further, the corresponding objectives are determined naturally by principles of probabilistic inference, reducing the need for empirical search directly in the space of possible objectives. That said, classical probabilistic models for unsupervised sequence transduction (e.g. the HMM or semi-HMM) typically enforce overly strong independence assumptions about data to make exact inference tractable (Knight et al., 2006; Ravi & Knight, 2011; Pourdamghani & Knight, 2017). 
This has restricted their development and caused their performance to lag behind unsupervised neural objectives on complex tasks. Luckily, in recent years, powerful variational approximation techniques have made it more practical to train probabilistic models without strong independence assumptions (Miao & Blunsom, 2016; Yin et al., 2018). Inspired by this, we take a new approach to unsupervised style transfer. We directly define a generative probabilistic model that treats a non-parallel corpus in two domains as a partially observed parallel corpus. Our model makes few independence assumptions and its true posterior is intractable. However, we show that by using amortized variational inference (Kingma & Welling, 2013), a principled probabilistic technique, a natural unsupervised objective falls out of our modeling approach that has many connections with past work, yet is different from all past work in specific ways. In experiments across a suite of unsupervised text style transfer tasks, we find that the natural objective of our model actually outperforms all manually defined unsupervised objectives from past work, supporting the notion that probabilistic principles can be a useful guide even in deep neural systems. Further, in the case of unsupervised machine translation, our model matches the current state-of-the-art non-generative approach. 2 UNSUPERVISED TEXT STYLE TRANSFER We first overview text style transfer, which aims to transfer a text (typically a single sentence or a short paragraph – for simplicity we refer to simply “sentences” below) from one domain to another while preserving underlying content. For example, formality transfer (Rao & Tetreault, 2018) is the task of transforming the tone of text from informal to formal without changing its content. Other examples include sentiment transfer (Shen et al., 2017), word decipherment (Knight et al., 2006), and author imitation (Xu et al., 2012). If parallel examples were available from each domain (i.e. the training data is a bitext consisting of pairs of sentences from each domain), supervised techniques could be used to perform style transfer (e.g. attentional Seq2Seq (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017)). However, for most style transfer problems, only non-parallel corpora (one corpus from each domain) can be easily collected. Thus, work on style transfer typically focuses on the more difficult unsupervised setting where systems must learn from non-parallel data alone. The model we propose treats an observed non-parallel text corpus as a partially observed parallel corpus. Thus, we introduce notation for both observed text inputs and those that we will treat as latent variables. Specifically, we let X = {x(1), x(2), · · · , x(m)} represent observed data from domain D1, while we let Y = {y(m+1), y(m+2), · · · , y(n)} represent observed data from domain D2. Corresponding indices represent parallel sentences. Thus, none of the observed sentences share indices. In our model, we introduce latent sentences to complete the parallel corpus. Specifically, X̄ = {x̄(m+1), x̄(m+2), · · · , x̄(n)} represents the set of latent parallel sentences in D1, while Ȳ = {ȳ(1), ȳ(2), · · · , ȳ(m)} represents the set of latent parallel sentences in D2. Then the goal of unsupervised text transduction is to infer these latent variables conditioned the observed non-parallel corpora; that is, to learn p(ȳ|x) and p(x̄|y). 
3 THE DEEP LATENT SEQUENCE MODEL

First we present our generative model of bitext, which we refer to as a deep latent sequence model. We then describe unsupervised learning and inference techniques for this model class.

3.1 MODEL STRUCTURE

Directly modeling p(ȳ|x) and p(x̄|y) in the unsupervised setting is difficult because we never directly observe parallel data. Instead, we propose a generative model of the complete data that defines a joint likelihood, p(X, X̄, Y, Ȳ). In order to perform text transduction, the unobserved halves can be treated as latent variables: they will be marginalized out during learning and inferred via posterior inference at test time.

Our model assumes that each observed sentence is generated from an unobserved parallel sentence in the opposite domain, as depicted in Figure 1. Specifically, each sentence x(i) in domain D1 is generated as follows: First, a latent sentence ȳ(i) in domain D2 is sampled from a prior, pD2(ȳ(i)). Then, x(i) is sampled conditioned on ȳ(i) from a transduction model, p(x(i)|ȳ(i)). Similarly, each observed sentence y(j) in domain D2 is generated conditioned on a latent sentence, x̄(j), in domain D1 via the opposite transduction model, p(y(j)|x̄(j)), and prior, pD1(x̄(j)). We let θx|ȳ and θy|x̄ represent the parameters of the two transduction distributions respectively. We assume the prior distributions are pretrained on the observed data in their respective domains and therefore omit their parameters for simplicity of notation. Together, this gives the following joint likelihood:

p(X, X̄, Y, Ȳ; θx|ȳ, θy|x̄) = ( ∏_{i=1}^{m} p(x(i) | ȳ(i); θx|ȳ) pD2(ȳ(i)) ) ( ∏_{j=m+1}^{n} p(y(j) | x̄(j); θy|x̄) pD1(x̄(j)) )    (1)

The log marginal likelihood of the data, which we will approximate during training, is:

log p(X, Y; θx|ȳ, θy|x̄) = log Σ_{X̄} Σ_{Ȳ} p(X, X̄, Y, Ȳ; θx|ȳ, θy|x̄)    (2)

Note that if the two transduction models share no parameters, the training problems for each observed domain are independent. Critically, we introduce parameter sharing through our variational inference procedure, which we describe in more detail in Section 3.2.

Architecture: Since we would like to be able to model a variety of transfer tasks, we choose a parameterization for our transduction distributions that makes no independence assumptions. Specifically, we employ an encoder-decoder architecture based on the standard attentional Seq2Seq model which has been shown to be successful across various tasks (Bahdanau et al., 2015; Rush et al., 2015). Similarly, our prior distributions for each domain are parameterized as recurrent language models which, again, make no independence assumptions. In contrast, traditional unsupervised generative sequence models typically make strong independence assumptions to enable exact inference (e.g. the HMM makes a Markov assumption on the latent sequence and emissions are one-to-one). Our model is more flexible, but exact inference via dynamic programming will be intractable. We address this problem in the next section.

3.2 LEARNING

Ideally, learning should directly optimize the log data likelihood, which is the marginal of our model shown in Eq. 2. However, due to our model’s neural parameterization, which does not factorize, computing the data likelihood cannot be accomplished using dynamic programming as can be done with simpler models like the HMM.
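For intuition, the quantity being marginalized can be sketched in a few lines of Python. This is a minimal illustrative sketch, not the released implementation: the `log_prob(target, source)` methods on the transduction models and `log_prob(sentence)` on the priors are assumed interfaces standing in for the attentional Seq2Seq models and recurrent LM priors described above.

```python
def complete_data_log_likelihood(x_obs, ybar_latent, y_obs, xbar_latent,
                                 p_x_given_ybar, p_y_given_xbar,
                                 prior_d1, prior_d2):
    """Score Eq. 1: every observed sentence is explained by a latent parallel
    sentence in the opposite domain (illustrative interfaces; see lead-in)."""
    # Observed D1 sentences x^(i), each paired with a latent D2 sentence ybar^(i).
    ll_d1 = p_x_given_ybar.log_prob(x_obs, ybar_latent) + prior_d2.log_prob(ybar_latent)
    # Observed D2 sentences y^(j), each paired with a latent D1 sentence xbar^(j).
    ll_d2 = p_y_given_xbar.log_prob(y_obs, xbar_latent) + prior_d1.log_prob(xbar_latent)
    # Eq. 2 would additionally marginalize over every possible latent sentence,
    # which is exactly what makes exact computation intractable for this model class.
    return ll_d1.sum() + ll_d2.sum()
```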
To overcome the intractability of computing the true data likelihood, we adopt amortized variational inference (Kingma & Welling, 2013) in order to derive a surrogate objective for learning, the evidence lower bound (ELBO) on the log marginal likelihood:3

log p(X, Y; θx|ȳ, θy|x̄) ≥ L_ELBO(X, Y; θx|ȳ, θy|x̄, φx̄|y, φȳ|x)
    = Σ_i [ E_{q(ȳ|x(i); φȳ|x)}[log p(x(i)|ȳ; θx|ȳ)] − DKL( q(ȳ|x(i); φȳ|x) || pD2(ȳ) ) ]
    + Σ_j [ E_{q(x̄|y(j); φx̄|y)}[log p(y(j)|x̄; θy|x̄)] − DKL( q(x̄|y(j); φx̄|y) || pD1(x̄) ) ]    (3)

The expectation terms are the reconstruction likelihoods and the DKL terms are the KL regularizers.

3 Note that in practice, we add a weight λ (the same for both domains) to the KL term in the ELBO, since the regularization strength from the pretrained language model varies depending on the dataset, training data size, or language model structure. Such reweighting has proven necessary in previous work trained with the ELBO (Bowman et al., 2016; Miao & Blunsom, 2016; Yin et al., 2018).

The surrogate objective introduces q(ȳ|x(i); φȳ|x) and q(x̄|y(j); φx̄|y), which represent two separate inference network distributions that approximate the model’s true posteriors, p(ȳ|x(i); θx|ȳ) and p(x̄|y(j); θy|x̄), respectively. Learning operates by jointly optimizing the lower bound over both variational and model parameters. Once trained, the variational posterior distributions can be used directly for style transfer. The KL terms in Eq. 3, which appear naturally in the ELBO objective, can be intuitively viewed as regularizers that use the language model priors to bias the induced sentences towards the desired domains.

Amortized variational techniques have been most commonly applied to continuous latent variables, as in the case of the variational autoencoder (VAE) (Kingma & Welling, 2013). Here, we use this approach for inference over discrete sequences, which has been shown to be effective in related work on a semi-supervised task (Miao & Blunsom, 2016).

Inference Network and Parameter Sharing: Note that the approximate posterior on one domain aims to learn the reverse style transfer distribution, which is exactly the goal of the generative distribution in the opposite domain. For example, the inference network q(ȳ|x(i); φȳ|x) and the generative distribution p(y|x̄(i); θy|x̄) both aim to transform D1 to D2. Therefore, we use the same architecture for each inference network as used in the transduction models, and tie their parameters: φx̄|y = θx|ȳ, φȳ|x = θy|x̄. This means we learn only two encoder-decoders overall – which are parameterized by θx|ȳ and θy|x̄ respectively – to represent the two directions of transfer. In addition to reducing the number of learnable parameters, this parameter tying couples the learning problems for both domains and allows us to jointly learn from the full data. Moreover, inspired by recent work that builds a universal Seq2Seq model to translate between different language pairs (Johnson et al., 2017), we introduce further parameter tying between the two directions of transduction: the same encoder is employed for both x and y, and a domain embedding c is provided to the same decoder to specify the transfer direction, as shown in Figure 2. Ablation analysis in Section 5.3 suggests that parameter sharing is important to achieve good performance.

Approximating Gradients of ELBO: The reconstruction and KL terms in Eq. 3 still involve intractable expectations due to the marginalization over the latent sequence; thus, we need to approximate their gradients.
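Concretely, the per-sentence quantity that these gradient estimators must differentiate is the bracketed term of Eq. 3. A one-sample sketch for the D1 direction is shown below; as before, this is illustrative rather than the released code, and the `decode` method returning a latent sentence together with its log probability is an assumed interface on the inference network.

```python
def elbo_term_d1(x_i, inference_net_ybar_given_x, p_x_given_ybar, prior_d2, kl_weight=1.0):
    """One-sample estimate of E_q[log p(x|ybar)] - D_KL(q || p_D2) for one D1 sentence.

    `kl_weight` corresponds to the lambda reweighting mentioned in footnote 3.
    """
    ybar, log_q = inference_net_ybar_given_x.decode(x_i)   # latent D2 sentence and log q(ybar | x_i)
    recon = p_x_given_ybar.log_prob(x_i, ybar)             # reconstruction likelihood
    kl = log_q - prior_d2.log_prob(ybar)                   # single-sample KL against the LM prior
    return recon - kl_weight * kl
```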
Gumbel-softmax (Jang et al., 2017) and REINFORCE (Sutton et al., 2000) are often used as stochastic gradient estimators in the discrete case. Since the latent text variables have an extremely large domain, we find that REINFORCE-based gradient estimates result in high variance. Thus, we use the Gumbel-softmax straight-through estimator to backpropagate gradients from the KL terms.4 However, we find that approximating gradients of the reconstruction loss is much more challenging – both the Gumbel-softmax estimator and REINFORCE are unable to outperform a simple stop-gradient method that does not back-propagate the gradient of the latent sequence to the inference network. This confirms a similar observation in previous work on unsupervised machine translation (Lample et al., 2018). Therefore, we use greedy decoding without recording gradients to approximate the reconstruction term.5 Note that the inference networks still receive gradients from the prior through the KL term, and their parameters are shared with the decoders, which do receive gradients from reconstruction. We consider this to be the best empirical compromise at the moment.

4 We use one sample to approximate the expectations.
5 We compare greedy and sampling decoding in Section 5.3.

Initialization. Good initialization is often necessary for successful optimization of unsupervised learning objectives. In preliminary experiments, we find that the encoder-decoder structure has difficulty generating realistic sentences during the initial stages of training, which usually results in a disastrous local optimum. This is mainly because the encoder-decoder is initialized randomly and there is no direct training signal to specify the desired latent sequence in the unsupervised setting. Therefore, we apply a self-reconstruction loss Lrec during the initial epochs of training. We denote the output of the encoder as e(·) and the decoder distribution as pdec; then

Lrec = −α · Σ_i [ log pdec(x(i) | e(x(i)), cx) ] − α · Σ_j [ log pdec(y(j) | e(y(j)), cy) ]    (4)

where α decays linearly from 1.0 to 0.0 over the first k epochs. k is a tunable parameter and is usually less than 3 in all our experiments.
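The training-time approximations above can be summarized in a short sketch. This is one standard way to realize the pieces and is only an assumed rendering: `torch.nn.functional.gumbel_softmax` with `hard=True` is a common straight-through implementation, and the `decode` interface is the same hypothetical one used in the earlier sketches.

```python
import torch
import torch.nn.functional as F

def straight_through_tokens(logits, tau=1.0):
    """Straight-through Gumbel-softmax over vocabulary logits: discrete one-hot
    tokens in the forward pass, a differentiable relaxation in the backward pass
    (used here only to propagate gradients from the KL terms)."""
    return F.gumbel_softmax(logits, tau=tau, hard=True)

def reconstruction_term_stop_grad(x_i, inference_net, p_x_given_ybar):
    """Greedy-decode the latent sentence without tracking gradients, then score
    the reconstruction; only the transduction model receives this gradient."""
    with torch.no_grad():
        ybar, _ = inference_net.decode(x_i, greedy=True)   # illustrative interface
    return p_x_given_ybar.log_prob(x_i, ybar)

def self_reconstruction_weight(epoch, k):
    """Linear decay of alpha in Eq. 4 from 1.0 to 0.0 over the first k epochs."""
    return max(0.0, 1.0 - epoch / float(k))
```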
4 CONNECTION TO RELATED WORK

Our probabilistic formulation can be connected with recent advances in unsupervised text transduction methods. For example, the back translation loss (Sennrich et al., 2016) plays an important role in recent unsupervised machine translation (Artetxe et al., 2018; Lample et al., 2018; Artetxe et al., 2019) and unsupervised style transfer systems (Lample et al., 2019). To incorporate the back translation loss, the source language x is translated to the target language y to form a pseudo-parallel corpus, and a translation model from y to x can then be learned on this pseudo bitext just as in the supervised setting. While back translation is often explained as a data augmentation technique, in our probabilistic formulation it appears naturally with the ELBO objective as the reconstruction loss term.

Some previous work has incorporated pretrained language models into neural semi-supervised or unsupervised objectives. He et al. (2016) use the log likelihood of a pretrained language model as the reward to update a supervised machine translation system with policy gradient. Artetxe et al. (2019) utilize a similar idea for unsupervised machine translation. Yang et al. (2018) employ a similar approach, but interpret the LM as an adversary, training the generator to fool the LM. We show how our ELBO objective is connected with these more heuristic LM regularizers by expanding the KL loss term (assume x is observed):

DKL( q(ȳ|x) || pD2(ȳ) ) = −Hq − Eq[ log pD2(ȳ) ]    (5)

Note that the loss used in previous work does not include the negative entropy term, −Hq. Our objective results in this additional “regularizer”, the negative entropy of the transduction distribution, −Hq. Intuitively, −Hq helps avoid a peaked transduction distribution, preventing the transduction from constantly generating similar sentences to satisfy the language model. In experiments we will show that this additional regularization is important and helps bypass bad local optima and improve performance. These important differences with past work suggest that a probabilistic view of unsupervised sequence transduction may provide helpful guidance in determining effective training objectives.
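The role of the extra entropy term can be seen in a tiny Monte Carlo sketch of Eq. 5. The variable names are illustrative; dropping the negative-entropy term recovers a BT+NLL-style penalty that only scores the induced sentences under the prior.

```python
def kl_vs_lm_penalty(log_q_of_samples, log_prior_of_samples):
    """Sample-based view of Eq. 5 for latent sentences drawn from q.

    D_KL(q || p_D2) = -H_q - E_q[log p_D2(ybar)]
    """
    neg_entropy = log_q_of_samples.mean()        # estimates -H_q
    lm_penalty = -log_prior_of_samples.mean()    # estimates -E_q[log p_D2]
    kl_estimate = neg_entropy + lm_penalty       # full KL regularizer (our objective)
    bt_nll_style = lm_penalty                    # what a BT+NLL-style loss keeps: prior NLL only
    return kl_estimate, bt_nll_style
```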
5 EXPERIMENTS

We test our model on five style transfer tasks: sentiment transfer, word substitution decipherment, formality transfer, author imitation, and related language translation. For completeness, we also evaluate on the task of general unsupervised machine translation using standard benchmarks. We compare with the unsupervised machine translation model (UNMT), which recently demonstrated state-of-the-art performance on transfer tasks such as sentiment and gender transfer (Lample et al., 2019).6 To validate the effect of the negative entropy term in the KL loss term of Eq. 5, we remove it and train the model with a back-translation loss plus a language model negative log likelihood loss (which we denote as BT+NLL) as an ablation baseline. For each task, we also include strong baseline numbers from related work if available. For our method we select the model with the best validation ELBO, and for UNMT or BT+NLL we select the model with the best back-translation loss. Complete model configurations and hyperparameters can be found in Appendix A.1.

6 The model they used is slightly different from the original model of Lample et al. (2018) in certain details – e.g. the addition of a pooling layer after attention. We re-implement their model in our codebase for fair comparison and verify that our re-implementation achieves performance competitive with the original paper.

5.1 DATASETS AND EXPERIMENT SETUP

Word Substitution Decipherment. Word decipherment aims to uncover the plain text behind a corpus that was enciphered via word substitution, where each word in the vocabulary is mapped to a unique type in a cipher dictionary (Dou & Knight, 2012; Shen et al., 2017; Yang et al., 2018). In our formulation, the model is presented with a non-parallel corpus of English plaintext and the ciphertext. We use the data from Yang et al. (2018), which provides 200K sentences from each domain. While previous work (Shen et al., 2017; Yang et al., 2018) controls the difficulty of this task by varying the percentage of words that are ciphered, we directly evaluate on the most difficult version of this task – 100% of the words are enciphered (i.e. no vocabulary sharing between the two domains). We select the model with the best unsupervised reconstruction loss, and evaluate with BLEU score on the test set, which contains 100K parallel sentences. Results are shown in Table 2.

Sentiment Transfer. Sentiment transfer is the task of paraphrasing a sentence with a different sentiment while preserving the original content. Evaluation of sentiment transfer is difficult and is still an open research problem (Mir et al., 2019). Evaluation focuses on three aspects: attribute control, content preservation, and fluency. A successful system needs to perform well with respect to all three aspects. We follow prior work by using three automatic metrics (Yang et al., 2018; Lample et al., 2019): classification accuracy, self-BLEU (BLEU of the output with the original sentence as the reference), and the perplexity (PPL) of each system’s output under an external language model. We pretrain a convolutional classifier (Kim, 2014) to assess classification accuracy, and use an LSTM language model pretrained on each domain to compute the PPL of system outputs. We use the Yelp reviews dataset collected by Shen et al. (2017), which contains 250K negative sentences and 380K positive sentences. We also use a small test set of 1000 human-annotated parallel sentences introduced in Li et al. (2018). We denote the positive sentiment as domain D1 and the negative sentiment as domain D2. We use Self-BLEU and BLEU to represent the BLEU score of the output against the original sentence and the reference respectively. Results are shown in Table 1.

Formality Transfer. Next, we consider a harder task of modifying the formality of a sequence. We use the GYAFC dataset (Rao & Tetreault, 2018), which contains formal and informal sentences from two different domains. In this paper, we use the Entertainment and Music domain, which has about 52K training sentences, 5K development sentences, and 2.5K test sentences. This dataset actually contains parallel data between formal and informal sentences, which we use only for evaluation. We follow the evaluation setup of the sentiment transfer task and test models on the same three axes. Since the test set is a parallel corpus, we only compute reference BLEU and ignore self-BLEU. We use D1 to denote formal text, and D2 to denote informal text. Results are shown in Table 1.

Author Imitation. Author imitation is the task of paraphrasing a sentence to match another author’s style. The dataset we use is a collection of Shakespeare’s plays translated line by line into modern English. It was collected by Xu et al. (2012)7 and used in prior work on supervised style transfer (Jhamtani et al., 2017). This is a parallel corpus and thus we follow the setting in the formality transfer task. We use D1 to denote modern English, and D2 to denote Shakespeare-style English. Results are shown in Table 1.

7 https://github.com/tokestermw/tensorflow-shakespeare

Related Language Translation. Next, we test our method on a challenging related language translation task (Pourdamghani & Knight, 2017; Yang et al., 2018). This task is a natural test bed for unsupervised sequence transduction since the goal is to preserve the meaning of the source sentence while rewriting it into the target language. For our experiments, we choose Bosnian (bs) and Serbian (sr) as the related language pair. We follow Yang et al. (2018) and report BLEU-1 score on this task since the BLEU-4 score is close to zero. Results are shown in Table 2.
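Across the style transfer tasks above, the automatic evaluation pipeline has the same shape; the rough sketch below uses sacrebleu for corpus-level BLEU and treats the pretrained classifier and domain language model as black-box callables. The function names, the target-label convention, and the choice of sacrebleu are illustrative assumptions, not a description of our exact scripts.

```python
import math
import sacrebleu  # any corpus-level BLEU implementation would do

def automatic_metrics(outputs, sources, references, predict_domain, lm_nll_per_token,
                      target_label=1):
    """Attribute control (accuracy), content preservation (self-BLEU / BLEU),
    and fluency (PPL under an external LM), as described in Section 5.1."""
    accuracy = sum(predict_domain(o) == target_label for o in outputs) / len(outputs)
    self_bleu = sacrebleu.corpus_bleu(outputs, [sources]).score      # vs. the original sentences
    ref_bleu = sacrebleu.corpus_bleu(outputs, [references]).score    # vs. gold references, if available
    ppl = math.exp(sum(lm_nll_per_token(o) for o in outputs) / len(outputs))
    return {"accuracy": accuracy, "self_bleu": self_bleu, "bleu": ref_bleu, "ppl": ppl}
```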
Unsupervised MT. In order to draw connections with related work on general unsupervised machine translation, we also evaluate on the WMT’16 German-English translation task. This task is substantially more difficult than the style transfer tasks considered so far. We compare with the state-of-the-art UNMT system using the existing implementation from the XLM codebase,8 and implement our approach in the same framework with XLM initialization for fair comparison. We train both systems on 5M non-parallel sentences from each language. Results are shown in Table 2.

8 https://github.com/facebookresearch/XLM

In Table 1 we also list the PPL of the test set under the external LM for both the source and target domain. The PPL of system outputs should be compared to the PPL of the test set itself, because extremely low PPL often indicates that the generated sentences are short or trivial.

5.2 RESULTS

Tables 1 and 2 demonstrate some general trends. First, UNMT is able to outperform other prior methods in unsupervised text style transfer, such as Yang et al. (2018), Hu et al. (2017), and Shen et al. (2017). The performance improvements of UNMT indicate that flexible and powerful architectures are crucial (prior methods generally do not have an attention mechanism). Second, our model achieves comparable classification accuracy to UNMT but outperforms it in all style transfer tasks in terms of reference-BLEU, which is the most important metric since it directly measures the quality of the final generations against gold parallel data. This indicates that our method is both effective and consistent across many different tasks. Finally, the BT+NLL baseline is sometimes quite competitive, which indicates that the addition of a language model alone can be beneficial. However, our method consistently outperforms the simple BT+NLL method, which indicates the effectiveness of the additional entropy regularizer in Eq. 5 that is a byproduct of our probabilistic formulation.

Next, we examine the PPL of the system outputs under pretrained domain LMs, which should be evaluated in comparison with the PPL of the test set itself. For both the sentiment transfer and the formality transfer tasks in Table 1, BT+NLL achieves extremely low PPL, lower than the PPL of the test corpus in the target domain. After a close examination of the output, we find that it contains many repeated and overly simple outputs. For example, the system generates many examples of “I love this place” when transferring negative to positive sentiment (see Appendix A.3 for examples). It is not surprising that such trivial output has low perplexity, high accuracy, and low BLEU score. On the other hand, our system obtains reasonably competitive PPL, and our approach achieves the highest accuracy and a higher BLEU score than the UNMT baseline.

5.3 FURTHER ABLATIONS AND ANALYSIS

Parameter Sharing. We also conducted an experiment on the word substitution decipherment task where we remove parameter sharing (as explained in Section 3.2) between the two directions of transduction distributions, and optimize two separate encoder-decoders instead. We found that the model only obtained an extremely low BLEU score and failed to generate any meaningful outputs.

Performance vs. Domain Divergence. Figure 3 plots the relative improvement of our method over UNMT with respect to the accuracy of a naive Bayes classifier trained to predict the domain of test sentences. Tasks with high classification accuracy likely have more divergent domains. We can see that for decipherment and en-de translation, where the domains have different vocabularies and thus are easily distinguished, our method yields a smaller gain over UNMT. This likely indicates that the discriminative regularization effect of the LM priors is less important when the two domains are already very different.
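One way to realize such a domain-divergence probe is a simple bag-of-words naive Bayes classifier; the scikit-learn sketch below illustrates the idea and is an assumed setup rather than the exact configuration used for Figure 3.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

def domain_divergence_accuracy(sentences_d1, sentences_d2, seed=0):
    """Held-out accuracy of a naive Bayes domain classifier; higher accuracy
    suggests more easily distinguished (more divergent) domains."""
    texts = list(sentences_d1) + list(sentences_d2)
    labels = [0] * len(sentences_d1) + [1] * len(sentences_d2)
    x_tr, x_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.2, random_state=seed)
    vectorizer = CountVectorizer()
    clf = MultinomialNB().fit(vectorizer.fit_transform(x_tr), y_tr)
    return clf.score(vectorizer.transform(x_te), y_te)
```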
Why does the proposed model outperform UNMT? Finally, we examine in detail the output of our model and UNMT for the author imitation task. We pick this task because the reference outputs for the test set are provided, aiding analysis. Examples shown in Table 3 demonstrate that UNMT tends to make overly large changes to the source so that the original meaning is lost, while our method is better at preserving the content of the source sentence.

Next, we quantitatively examine the outputs from UNMT and our method by comparing the F1 measure of words bucketed by their syntactic tags. We use the open-sourced compare-mt tool (Neubig et al., 2019), and the results are shown in Figure 4. Our system outperforms UNMT in all word categories. In particular, our system is much better at generating nouns, which likely leads to better content preservation.

Figure 4: Word F1 score by POS tag.

Table 3: Examples for the author imitation task (Shakespeare to Modern).
Source: Not to his father’s .
Reference: Not to his father’s house .
UNMT: Not to his brother .
Ours: Not to his father’s house .

Source: Send thy man away .
Reference: Send your man away .
UNMT: Send an excellent word .
Ours: Send your man away .

Source: Why should you fall into so deep an O ?
Reference: Why should you fall into so deep a moan ?
UNMT: Why should you carry so nicely , but have your legs ?
Ours: Why should you fall into so deep a sin ?

Greedy vs. Sample-based Gradient Approximation. In our experiments, we use greedy decoding from the inference network to approximate the expectations required by the ELBO, which is a biased estimator. The main purpose of this approach is to reduce the variance of the gradient estimator during training, especially in the early stages when the variance of sample-based approaches is quite high. As an ablation experiment on the sentiment transfer task, we compare greedy and sample-based gradient approximations in terms of both train and test ELBO, as well as the task performance corresponding to the best test ELBO. After the model is fully trained, we find that the sample-based approximation has low variance: with a single sample, the standard deviation of the ELBO is less than 0.3 across 10 different test repetitions. All final reported ELBO values are computed with this sample-based approach, regardless of whether the greedy approximation was used during training. The reported ELBO values are the evidence lower bound per word. Results are shown in Table 4, where the sampling-based training underperforms on both ELBO and task evaluations.

5.4 COMPARISON OF GRADIENT PROPAGATION METHODS

As noted above, to stabilize the training process, we stop gradients from propagating to the inference network from the reconstruction loss. Does this approach indeed better optimize the actual probabilistic objective (i.e. the ELBO), or does it only indirectly lead to improved task evaluations? In this section we use sentiment transfer as an example task to compare different methods for propagating gradients, evaluating both the ELBO and task metrics. Specifically, we compare three different methods:

• Stop Gradient: The gradients from the reconstruction loss are not propagated to the inference network. This is the method we use in all previous experiments.
• Gumbel Softmax (Jang et al., 2017): Gradients from the reconstruction loss are propagated to the inference network with the straight-through Gumbel estimator.

• REINFORCE (Sutton et al., 2000): Gradients from the reconstruction loss are propagated to the inference network with the ELBO as a reward function. This method has been used in previous work for semi-supervised sequence generation (Miao & Blunsom, 2016; Yin et al., 2018), but often suffers from instability issues.

We report the train and test ELBO along with task evaluations in Table 5, and plot the learning curves on the validation set in Figure 5.9 While much simpler, the stop-gradient trick produces a superior ELBO compared to Gumbel Softmax and REINFORCE. This result suggests that stopping gradients helps better optimize the likelihood objective under our probabilistic formulation in comparison with other optimization techniques that do propagate gradients, which is counter-intuitive. A likely explanation is that, as a gradient estimator, stop-gradient is clearly biased but has substantially reduced variance. In comparison with other techniques that offer reduced bias but extremely high variance when applied to our model class (which involves discrete sequences as latent variables), stop-gradient actually leads to better optimization of our objective because it achieves a better balance of bias and variance overall.

9 We remove REINFORCE from this figure since it is very difficult to stabilize training and obtain reasonable results (e.g. the ELBO value is much worse than the others in Table 5).

6 CONCLUSION

We propose a probabilistic generative formulation that unites past work on unsupervised text style transfer. We show that this probabilistic formulation provides a different way to reason about unsupervised objectives in this domain. Our model leads to substantial improvements on five text style transfer tasks, yielding bigger gains when the styles considered are more difficult to distinguish.

ACKNOWLEDGEMENT

The work of Junxian He and Xinyi Wang is supported by the DARPA GAILA project (award HR00111990063) and the Tang Family Foundation respectively. The authors would like to thank Zichao Yang for helpful feedback about the project.

A APPENDIX

A.1 MODEL CONFIGURATIONS.

We adopt the following attentional encoder-decoder architecture for UNMT, BT+NLL, and our method across all the experiments:

• We use word embeddings of size 128.
• We use a 1-layer LSTM with a hidden size of 512 as both the encoder and the decoder.
• We apply dropout to the readout states before softmax with a rate of 0.3.
• Following Lample et al. (2019), we add a max pooling operation over the encoder hidden states before feeding them to the decoder. Intuitively, the pooling window size controls how much information is preserved during transduction. A window size of 1 is equivalent to the standard attention mechanism, and a large window size corresponds to no attention. See Appendix A.2 for how to select the window size.
• The UNMT baseline uses a noise function in its denoising autoencoder loss (Lample et al., 2017; 2019), which is critical for its success. We use the default noise function and noise hyperparameters from Lample et al. (2017) when running the UNMT model. For BT+NLL and our method, we found that adding the extra noise into the self-reconstruction loss (Eq. 4) is only helpful when the two domains are relatively divergent (the decipherment and related language translation tasks), where the language models play a less important role. Therefore, we add the default noise from UNMT to Eq. 4 for the decipherment and related language translation tasks only, and do not use any noise for the sentiment, author imitation, and formality tasks.
A.2 HYPERPARAMETER TUNING.

We vary the pooling window size over {1, 5} and the decay patience hyperparameter k for the self-reconstruction loss (Eq. 4) over {1, 2, 3}. For the baselines UNMT and BT+NLL, we also try the option of not annealing the self-reconstruction loss at all, as in the unsupervised machine translation task (Lample et al., 2018). We vary the weight λ for the NLL term (BT+NLL) or the KL term (our method) over {0.001, 0.01, 0.03, 0.05, 0.1}.

A.3 SENTIMENT TRANSFER EXAMPLE OUTPUTS

We list some examples of the sentiment transfer task in Table 6. Notably, the BT+NLL method tends to produce extremely short and simple sentences.

A.4 REPETITIVE EXAMPLES OF BT+NLL

In Section 5 we mentioned that the baseline BT+NLL has a low perplexity for some tasks because it tends to generate overly simple and repetitive sentences. From Table 1 we see that two representative tasks are sentiment transfer and formality transfer. In Appendix A.3 we demonstrated some examples for sentiment transfer; next, we show some repetitive samples from BT+NLL in Table 7.
1. What is the main contribution of the paper in unsupervised text style transfer?
2. What are the strengths of the proposed approach, particularly in its simplicity and elegance?
3. What are the weaknesses of the paper regarding its claims and experiments?
4. How does the reviewer assess the novelty and relevance of the connections to back-translation and language models?
5. What are the concerns regarding the method's ability to optimize the ELBO, and how does the reviewer suggest addressing them?
Review
Review

In this paper, the authors propose a probabilistic framework for unsupervised text style transfer. Given two non-parallel corpora X, Y in different domains, the authors introduce unobserved corpora \bar{X}, \bar{Y}. These are used as latent variables that control the generation of the observed data. To train models, the paper proposes to optimize the evidence lower bound of the log marginal likelihood. To facilitate training, multiple techniques are suggested, such as parameter sharing, gradient approximations, and initialization with a reconstruction objective. The approach is evaluated on five style transfer tasks, as well as unsupervised machine translation. Models are evaluated with multiple metrics, and generally obtain reasonably strong performance.

I lean towards the acceptance of the paper because the approach is fairly simple and elegant, while obtaining promising results. The connections to back-translation and language models are also potentially interesting. However, while the paper aims to suggest a principled approach to style transfer, using greedy samples biases the reconstruction objective, and as such the method does not really optimize the ELBO.

Casting style transfer as data completion is a straightforward idea that doesn't introduce unnecessary or overly simplistic assumptions. Optimizing the ELBO follows naturally, and can lead to more diverse outputs than the BT+NLL approach, which misses the negative entropy term. Reference BLEU scores on all tasks are competitive with strong baselines, and sometimes clearly better.

Greedily sampling latent sequences during training should ideally be justified more carefully, as it biases the objective function. In particular, an experimental comparison to stochastic sampling, which should more closely approximate the expectation, would be appreciated. Additionally, detailing the similarities and differences between the proposed approach and current UNMT techniques could be helpful to some readers.

Questions: Could you present the validation and test evidence lower bounds? If so, how is sampling performed? In footnote 2, you mention tuning the strength of the KL regularizer. As the KL can be decomposed into 2 terms (Eq. 5), would it be beneficial to control each term separately?
ICLR
Title A Probabilistic Formulation of Unsupervised Text Style Transfer

Abstract We present a deep generative model for unsupervised text style transfer that unifies previously proposed non-generative techniques. Our probabilistic approach models non-parallel data from two domains as a partially observed parallel corpus. By hypothesizing a parallel latent sequence that generates each observed sequence, our model learns to transform sequences from one domain to another in a completely unsupervised fashion. In contrast with traditional generative sequence models (e.g. the HMM), our model makes few assumptions about the data it generates: it uses a recurrent language model as a prior and an encoder-decoder as a transduction distribution. While computation of marginal data likelihood is intractable in this model class, we show that amortized variational inference admits a practical surrogate. Further, by drawing connections between our variational objective and other recent unsupervised style transfer and machine translation techniques, we show how our probabilistic view can unify some known non-generative objectives such as backtranslation and adversarial loss. Finally, we demonstrate the effectiveness of our method on a wide range of unsupervised style transfer tasks, including sentiment transfer, formality transfer, word decipherment, author imitation, and related language translation. Across all style transfer tasks, our approach yields substantial gains over state-of-the-art non-generative baselines, including the state-of-the-art unsupervised machine translation techniques that our approach generalizes. Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.1
Further, we conduct experiments on a standard unsupervised machine translation task and find that our unified approach matches the current state-of-the-art.1 1 INTRODUCTION Text sequence transduction systems convert a given text sequence from one domain to another. These techniques can be applied to a wide range of natural language processing applications such as machine translation (Bahdanau et al., 2015), summarization (Rush et al., 2015), and dialogue response generation (Zhao et al., 2017). In many cases, however, parallel corpora for the task at hand are scarce. Therefore, unsupervised sequence transduction methods that require only non-parallel data are appealing and have been receiving growing attention (Bannard & Callison-Burch, 2005; Ravi & Knight, 2011; Mizukami et al., 2015; Shen et al., 2017; Lample et al., 2018; 2019). This trend is most pronounced in the space of text style transfer tasks where parallel data is particularly challenging to obtain (Hu et al., 2017; Shen et al., 2017; Yang et al., 2018). Style transfer has historically referred to sequence transduction problems that modify superficial properties of text – i.e. style rather than content.2 We focus on a standard suite of style transfer tasks, including formality transfer (Rao & Tetreault, 2018), author imitation (Xu et al., 2012), word decipherment (Shen et al., 2017), sentiment transfer (Shen et al., 2017), and related language translation (Pourdamghani & Knight, 2017). General unsupervised translation has not typically been considered style transfer, but for the purpose of comparison we also conduct evaluation on this task (Lample et al., 2017). ∗Equal Contribution. 1Code and data are available at https://github.com/cindyxinyiwang/deep-latent-sequence-model. 2Notably, some tasks we evaluate on do change content to some degree, such as sentiment transfer, but for conciseness we use the term “style transfer” nonetheless. ar X iv :2 00 2. 03 91 2v 1 [ cs .C L ] 1 0 Fe b 20 20 Recent work on unsupervised text style transfer mostly employs non-generative or non-probabilistic modeling approaches. For example, Shen et al. (2017) and Yang et al. (2018) design adversarial discriminators to shape their unsupervised objective – an approach that can be effective, but often introduces training instability. Other work focuses on directly designing unsupervised training objectives by incorporating intuitive loss terms (e.g. backtranslation loss), and demonstrates state-ofthe-art performance on unsupervised machine translation (Lample et al., 2018; Artetxe et al., 2019) and style transfer (Lample et al., 2019). However, the space of possible unsupervised objectives is extremely large and the underlying modeling assumptions defined by each objective can only be reasoned about indirectly. As a result, the process of designing such systems is often heuristic. In contrast, probabilistic models (e.g. the noisy channel model (Shannon, 1948)) define assumptions about data more explicitly and allow us to reason about these assumptions during system design. Further, the corresponding objectives are determined naturally by principles of probabilistic inference, reducing the need for empirical search directly in the space of possible objectives. That said, classical probabilistic models for unsupervised sequence transduction (e.g. the HMM or semi-HMM) typically enforce overly strong independence assumptions about data to make exact inference tractable (Knight et al., 2006; Ravi & Knight, 2011; Pourdamghani & Knight, 2017). 
This has restricted their development and caused their performance to lag behind unsupervised neural objectives on complex tasks. Luckily, in recent years, powerful variational approximation techniques have made it more practical to train probabilistic models without strong independence assumptions (Miao & Blunsom, 2016; Yin et al., 2018). Inspired by this, we take a new approach to unsupervised style transfer. We directly define a generative probabilistic model that treats a non-parallel corpus in two domains as a partially observed parallel corpus. Our model makes few independence assumptions and its true posterior is intractable. However, we show that by using amortized variational inference (Kingma & Welling, 2013), a principled probabilistic technique, a natural unsupervised objective falls out of our modeling approach that has many connections with past work, yet is different from all past work in specific ways. In experiments across a suite of unsupervised text style transfer tasks, we find that the natural objective of our model actually outperforms all manually defined unsupervised objectives from past work, supporting the notion that probabilistic principles can be a useful guide even in deep neural systems. Further, in the case of unsupervised machine translation, our model matches the current state-of-the-art non-generative approach. 2 UNSUPERVISED TEXT STYLE TRANSFER We first overview text style transfer, which aims to transfer a text (typically a single sentence or a short paragraph – for simplicity we refer to simply “sentences” below) from one domain to another while preserving underlying content. For example, formality transfer (Rao & Tetreault, 2018) is the task of transforming the tone of text from informal to formal without changing its content. Other examples include sentiment transfer (Shen et al., 2017), word decipherment (Knight et al., 2006), and author imitation (Xu et al., 2012). If parallel examples were available from each domain (i.e. the training data is a bitext consisting of pairs of sentences from each domain), supervised techniques could be used to perform style transfer (e.g. attentional Seq2Seq (Bahdanau et al., 2015) and Transformer (Vaswani et al., 2017)). However, for most style transfer problems, only non-parallel corpora (one corpus from each domain) can be easily collected. Thus, work on style transfer typically focuses on the more difficult unsupervised setting where systems must learn from non-parallel data alone. The model we propose treats an observed non-parallel text corpus as a partially observed parallel corpus. Thus, we introduce notation for both observed text inputs and those that we will treat as latent variables. Specifically, we let X = {x(1), x(2), · · · , x(m)} represent observed data from domain D1, while we let Y = {y(m+1), y(m+2), · · · , y(n)} represent observed data from domain D2. Corresponding indices represent parallel sentences. Thus, none of the observed sentences share indices. In our model, we introduce latent sentences to complete the parallel corpus. Specifically, X̄ = {x̄(m+1), x̄(m+2), · · · , x̄(n)} represents the set of latent parallel sentences in D1, while Ȳ = {ȳ(1), ȳ(2), · · · , ȳ(m)} represents the set of latent parallel sentences in D2. Then the goal of unsupervised text transduction is to infer these latent variables conditioned the observed non-parallel corpora; that is, to learn p(ȳ|x) and p(x̄|y). 
3 THE DEEP LATENT SEQUENCE MODEL First we present our generative model of bitext, which we refer to as a deep latent sequence model. We then describe unsupervised learning and inference techniques for this model class. 3.1 MODEL STRUCTURE Directly modeling p(ȳ|x) and p(x̄|y) in the unsupervised setting is difficult because we never directly observe parallel data. Instead, we propose a generative model of the complete data that defines a joint likelihood, p(X, X̄, Y, Ȳ ). In order to perform text transduction, the unobserved halves can be treated as latent variables: they will be marginalized out during learning and inferred via posterior inference at test time. Our model assumes that each observed sentence is generated from an unobserved parallel sentence in the opposite domain, as depicted in Figure 1. Specifically, each sentence x(i) in domain D1 is generated as follows: First, a latent sentence ȳ(i) in domain D2 is sampled from a prior, pD2(ȳ(i)). Then, x(i) is sampled conditioned on ȳ(i) from a transduction model, p(x(i)|ȳ(i)). Similarly, each observed sentence y(j) in domain D2 is generated conditioned on a latent sentence, x̄(j), in domain D1 via the opposite transduction model, p(y(j)|x̄(j)), and prior, pD1(x̄(j)). We let θx|ȳ and θy|x̄ represent the parameters of the two transduction distributions respectively. We assume the prior distributions are pretrained on the observed data in their respective domains and therefore omit their parameters for simplicity of notation. Together, this gives the following joint likelihood: p(X, X̄, Y, Ȳ ; θx|ȳ, θy|x̄) = ( m∏ i=1 p ( x(i)|ȳ(i); θx|ȳ ) pD2 ( ȳ(i) ))( n∏ j=m+1 p ( y(j)|x̄(j); θy|x̄ ) pD1 ( x̄(j) )) (1) The log marginal likelihood of the data, which we will approximate during training, is: log p(X,Y ; θx|ȳ, θy|x̄) = log ∑ X̄ ∑ Ȳ p(X, X̄, Y, Ȳ ; θx|ȳ, θy|x̄) (2) Note that if the two transduction models share no parameters, the training problems for each observed domain are independent. Critically, we introduce parameter sharing through our variational inference procedure, which we describe in more detail in Section 3.2. Architecture: Since we would like to be able to model a variety of transfer tasks, we choose a parameterization for our transduction distributions that makes no independence assumptions. Specifically, we employ an encoder-decoder architecture based on the standard attentional Seq2Seq model which has been shown to be successful across various tasks (Bahdanau et al., 2015; Rush et al., 2015). Similarly, our prior distributions for each domain are parameterized as recurrent language models which, again, make no independence assumptions. In contrast, traditional unsupervised generative sequence models typically make strong independence assumptions to enable exact inference (e.g. the HMM makes a Markov assumption on the latent sequence and emissions are one-to-one). Our model is more flexible, but exact inference via dynamic programming will be intractable. We address this problem in the next section. 3.2 LEARNING Ideally, learning should directly optimize the log data likelihood, which is the marginal of our model shown in Eq. 2. However, due to our model’s neural parameterization which does not factorize, computing the data likelihood cannot be accomplished using dynamic programming as can be done with simpler models like the HMM. 
To overcome the intractability of computing the true data likelihood, we adopt amortized variational inference (Kingma & Welling, 2013) in order to derive a surrogate objective for learning, the evidence lower bound (ELBO) on log marginal likelihood3 : log p(X,Y ; θx|ȳ, θy|x̄) ≥ LELBO(X,Y ; θx|ȳ, θy|x̄, φx̄|y, φȳ|x) = ∑ i [ Eq(ȳ|x(i);φȳ|x)[log p(x (i)|ȳ; θx|ȳ)]−DKL ( q(ȳ|x(i);φȳ|x)||pD2(ȳ) )] + ∑ j [ Eq(x̄|y(j);φx̄|y)[log p(y (j)|x̄; θy|x̄)]︸ ︷︷ ︸ Reconstruction likelihood −DKL ( q(x̄|y(j);φx̄|y)||pD1(x̄) )] ︸ ︷︷ ︸ KL regularizer (3) The surrogate objective introduces q(ȳ|x(i);φȳ|x) and q(x̄|y(j);φx̄|y), which represent two separate inference network distributions that approximate the model’s true posteriors, p(ȳ|x(i); θx|ȳ) and p(x̄|y(j); θy|x̄), respectively. Learning operates by jointly optimizing the lower bound over both variational and model parameters. Once trained, the variational posterior distributions can be used directly for style transfer. The KL terms in Eq. 3, that appear naturally in the ELBO objective, can be intuitively viewed as regularizers that use the language model priors to bias the induced sentences towards the desired domains. Amortized variational techniques have been most commonly applied to continuous latent variables, as in the case of the variational autoencoder (VAE) (Kingma & Welling, 2013). Here, we use this approach for inference over discrete sequences, which has been shown to be effective in related work on a semi-supervised task (Miao & Blunsom, 2016). Inference Network and Parameter Sharing: Note that the approximate posterior on one domain aims to learn the reverse style transfer distribution, which is exactly the goal of the generative distribution in the opposite domain. For example, the inference network q(ȳ|x(i);φȳ|x) and the generative distribution p(y|x̄(i); θy|x̄) both aim to transform D1 to D2. Therefore, we use the same architecture for each inference network as used in the transduction models, and tie their parameters: φx̄|y = θx|ȳ, φȳ|x = θy|x̄. This means we learn only two encoder-decoders overall – which are parameterized by θx|ȳ and θy|x̄ respectively – to represent two directions of transfer. In addition to reducing the number of learnable parameters, this parameter tying couples the learning problems for both domains and allows us to jointly learn from the full data. Moreover, inspired by recent work that 3Note that in practice, we add a weight λ (the same to both domains) to the KL term in ELBO since the regularization strength from the pretrained language model varies depending on the datasets, training data size, or language model structures. Such reweighting has proven necessary in previous work that is trained with ELBO (Bowman et al., 2016; Miao & Blunsom, 2016; Yin et al., 2018). builds a universal Seq2Seq model to translate between different language pairs (Johnson et al., 2017), we introduce further parameter tying between the two directions of transduction: the same encoder is employed for both x and y, and a domain embedding c is provided to the same decoder to specify the transfer direction, as shown in Figure 2. Ablation analysis in Section 5.3 suggests that parameter sharing is important to achieve good performance. Approximating Gradients of ELBO: The reconstruction and KL terms in Eq. 3 still involve intractable expectations due to the marginalization over the latent sequence, thus we need to approximate their gradients. 
Gumbel-softmax (Jang et al., 2017) and REINFORCE (Sutton et al., 2000) are often used as stochastic gradient estimators in the discrete case. Since the latent text variables have an extremely large domain, we find that REINFORCE-based gradient estimates result in high variance. Thus, we use the Gumbel-softmax straight-through estimator to backpropagate gradients from the KL terms.4 However, we find that approximating gradients of the reconstruction loss is much more challenging – both the Gumbel-softmax estimator and REINFORCE are unable to outperform a simple stop-gradient method that does not back-propagate the gradient of the latent sequence to the inference network. This confirms a similar observation in previous work on unsupervised machine translation (Lample et al., 2018). Therefore, we use greedy decoding without recording gradients to approximate the reconstruction term.5 Note that the inference networks still receive gradients from the prior through the KL term, and their parameters are shared with the decoders which do receive gradients from reconstruction. We consider this to be the best empirical compromise at the moment. Initialization. Good initialization is often necessary for successful optimization of unsupervised learning objectives. In preliminary experiments, we find that the encoder-decoder structure has difficulty generating realistic sentences during the initial stages of training, which usually results in a disastrous local optimum. This is mainly because the encoder-decoder is initialized randomly and there is no direct training signal to specify the desired latent sequence in the unsupervised setting. Therefore, we apply a self-reconstruction loss Lrec at the initial epochs of training. We denote the output the encoder as e(·) and the decoder distribution as pdec, then Lrec = −α · ∑ i [pdec(e(x (i), cx)]− α · ∑ j [pdec(e(y (j), cy)], (4) α decays from 1.0 to 0.0 linearly in the first k epochs. k is a tunable parameter and usually less than 3 in all our experiments. 4 CONNECTION TO RELATED WORK Our probabilistic formulation can be connected with recent advances in unsupervised text transduction methods. For example, back translation loss (Sennrich et al., 2016) plays an important role in recent unsupervised machine translation (Artetxe et al., 2018; Lample et al., 2018; Artetxe et al., 2019) and unsupervised style transfer systems (Lample et al., 2019). In order to incorporate back translation loss the source language x is translated to the target language y to form a pseudo-parallel corpus, then a translation model from y to x can be learned on this pseudo bitext just as in supervised setting. While back translation was often explained as a data augmentation technique, in our probabilistic formulation it appears naturally with the ELBO objective as the reconstruction loss term. Some previous work has incorporated a pretrained language models into neural semi-supervised or unsupervised objectives. He et al. (2016) uses the log likelihood of a pretrained language model as the reward to update a supervised machine translation system with policy gradient. Artetxe et al. (2019) utilize a similar idea for unsupervised machine translation. Yang et al. (2018) employed a similar approach, but interpret the LM as an adversary, training the generator to fool the LM. 
We show how our ELBO objective is connected with these more heuristic LM regularizers by expanding the KL loss term (assume x is observed): DKL(q(ȳ|x)||pD2(ȳ)) = −Hq − Eq[log pD2(ȳ)], (5) Note that the loss used in previous work does not include the negative entropy term, −Hq. Our objective results in this additional “regularizer”, the negative entropy of the transduction distribution, −Hq. Intuitively, −Hq helps avoid a peaked transduction distribution, preventing the transduction 4We use one sample to approximate the expectations. 5We compare greedy and sampling decoding in Section 5.3. from constantly generating similar sentences to satisfy the language model. In experiments we will show that this additional regularization is important and helps bypass bad local optima and improve performance. These important differences with past work suggest that a probabilistic view of the unsupervised sequence transduction may provide helpful guidance in determining effective training objectives. 5 EXPERIMENTS We test our model on five style transfer tasks: sentiment transfer, word substitution decipherment, formality transfer, author imitation, and related language translation. For completeness, we also evaluate on the task of general unsupervised machine translation using standard benchmarks. We compare with the unsupervised machine translation model (UNMT) which recently demonstrated state-of-the-art performance on transfer tasks such as sentiment and gender transfer (Lample et al., 2019).6 To validate the effect of the negative entropy term in the KL loss term Eq. 5, we remove it and train the model with a back-translation loss plus a language model negative log likelihood loss (which we denote as BT+NLL) as an ablation baseline. For each task, we also include strong baseline numbers from related work if available. For our method we select the model with the best validation ELBO, and for UNMT or BT+NLL we select the model with the best back-translation loss. Complete model configurations and hyperparameters can be found in Appendix A.1. 5.1 DATASETS AND EXPERIMENT SETUP Word Substitution Decipherment. Word decipherment aims to uncover the plain text behind a corpus that was enciphered via word substitution where word in the vocabulary is mapped to a unique type in a cipher dictionary (Dou & Knight, 2012; Shen et al., 2017; Yang et al., 2018). In our formulation, the model is presented with a non-parallel corpus of English plaintext and the ciphertext. We use the data in (Yang et al., 2018) which provides 200K sentences from each domain. While previous work (Shen et al., 2017; Yang et al., 2018) controls the difficulty of this task by varying the percentage of words that are ciphered, we directly evaluate on the most difficult version of this task – 100% of the words are enciphered (i.e. no vocabulary sharing in the two domains). We select the model with the best unsupervised reconstruction loss, and evaluate with BLEU score on the test set which contains 100K parallel sentences. Results are shown in Table 2. Sentiment Transfer. Sentiment transfer is a task of paraphrasing a sentence with a different sentiment while preserving the original content. Evaluation of sentiment transfer is difficult and is still an open research problem (Mir et al., 2019). Evaluation focuses on three aspects: attribute control, content preservation, and fluency. A successful system needs to perform well with respect to all three aspects. 
We follow prior work by using three automatic metrics (Yang et al., 2018; Lample et al., 2019): classification accuracy, self-BLEU (BLEU of the output with the original sentence as the reference), and the perplexity (PPL) of each system’s output under an external language model. We pretrain a convolutional classifier (Kim, 2014) to assess classification accuracy, and use an LSTM language model pretrained on each domain to compute the PPL of system outputs. We use the Yelp reviews dataset collected by Shen et al. (2017) which contains 250K negative sentences and 380K positive sentences. We also use a small test set that has 1000 human-annotated parallel sentences introduced in Li et al. (2018). We denote the positive sentiment as domain D1 and the negative sentiment as domain D2. We use Self-BLEU and BLEU to represent the BLEU score of the output against the original sentence and the reference respectively. Results are shown in Table 1. Formality Transfer. Next, we consider a harder task of modifying the formality of a sequence. We use the GYAFC dataset (Rao & Tetreault, 2018), which contains formal and informal sentences from two different domains. In this paper, we use the Entertainment and Music domain, which has about 52K training sentences, 5K development sentences, and 2.5K test sentences. This dataset actually contains parallel data between formal and informal sentences, which we use only for evaluation. We follow the evaluation of sentiment transfer task and test models on three axes. Since the test set is 6The model they used is slightly different from the original model of Lample et al. (2018) in certain details – e.g. the addition of a pooling layer after attention. We re-implement their model in our codebase for fair comparison and verify that our re-implementation achieves performance competitive with the original paper. a parallel corpus, we only compute reference BLEU and ignore self-BLEU. We use D1 to denote formal text, and D2 to denote informal text. Results are shown in Table 1. Author Imitation. Author imitation is the task of paraphrasing a sentence to match another author’s style. The dataset we use is a collection of Shakespeare’s plays translated line by line into modern English. It was collected by Xu et al. (2012)7 and used in prior work on supervised style transfer (Jhamtani et al., 2017). This is a parallel corpus and thus we follow the setting in the formality transfer task. We use D1 to denote modern English, and D2 to denote Shakespeare-style English. Results are shown in Table 1. Related Language Translation. Next, we test our method on a challenging related language translation task (Pourdamghani & Knight, 2017; Yang et al., 2018). This task is a natural test bed for unsupervised sequence transduction since the goal is to preserve the meaning of the source sentence while rewriting it into the target language. For our experiments, we choose Bosnian (bs) and Serbian (sr) as the related language pairs. We follow Yang et al. (2018) to report BLEU-1 score on this task since BLEU-4 score is close to zero. Results are shown in Table 2. Unsupervised MT. In order to draw connections with a related work on general unsupervised machine translation, we also evaluate on the WMT’16 German English translation task. This task is substantially more difficult than the style transfer tasks considered so far. 
We compare with the state-of-the-art UNMT system using the existing implementation from the XLM codebase (https://github.com/facebookresearch/XLM), and implement our approach in the same framework with XLM initialization for fair comparison. We train both systems on 5M non-parallel sentences from each language. Results are shown in Table 2. In Table 1 we also list the PPL of the test set under the external LM for both the source and target domain. PPL of system outputs should be compared to the PPL of the test set itself, because extremely low PPL often indicates that the generated sentences are short or trivial. 5.2 RESULTS Tables 1 and 2 demonstrate some general trends. First, UNMT is able to outperform other prior methods in unsupervised text style transfer, such as (Yang et al., 2018; Hu et al., 2017; Shen et al., 2017). The performance improvements of UNMT indicate that flexible and powerful architectures are crucial (prior methods generally do not have an attention mechanism). Second, our model achieves comparable classification accuracy to UNMT but outperforms it in all style transfer tasks in terms of the reference-BLEU, which is the most important metric since it directly measures the quality of the final generations against gold parallel data. This indicates that our method is both effective and consistent across many different tasks. Finally, the BT+NLL baseline is sometimes quite competitive, which indicates that the addition of a language model alone can be beneficial. However, our method consistently outperforms the simple BT+NLL method, which indicates the effectiveness of the additional entropy regularizer in Eq. 5 that is a byproduct of our probabilistic formulation. Next, we examine the PPL of the system outputs under pretrained domain LMs, which should be evaluated in comparison with the PPL of the test set itself. For both the sentiment transfer and the formality transfer tasks in Table 1, BT+NLL achieves extremely low PPL, lower than the PPL of the test corpus in the target domain. After a close examination of the output, we find that it contains many repeated and overly simple outputs. For example, the system generates many examples of “I love this place” when transferring negative to positive sentiment (see Appendix A.3 for examples). It is not surprising that such trivial output has low perplexity, high accuracy, and low BLEU score. On the other hand, our system obtains reasonably competitive PPL, and our approach achieves the highest accuracy and a higher BLEU score than the UNMT baseline. 5.3 FURTHER ABLATIONS AND ANALYSIS Parameter Sharing. We also conducted an experiment on the word substitution decipherment task, where we remove parameter sharing (as explained in Section 3.2) between the two directions of the transduction distributions, and optimize two separate encoder-decoders instead. We found that the model only obtained an extremely low BLEU score and failed to generate any meaningful outputs. Performance vs. Domain Divergence. Figure 3 plots the relative improvement of our method over UNMT with respect to the accuracy of a naive Bayes classifier trained to predict the domain of test sentences. Tasks with high classification accuracy likely have more divergent domains. We can see that for decipherment and en-de translation, where the domains have different vocabularies and thus are easily distinguished, our method yields a smaller gain over UNMT.
This likely indicates that the (discrimination) regularization effect of the LM priors is less important or necessary when the two domains are very different. Why does the proposed model outperform UNMT? Finally, we examine in detail the output of our model and UNMT for the author imitation task. We pick this task because the reference outputs for the test set are provided, aiding analysis. Examples shown in Table 3 demonstrate that UNMT tends to make overly large changes to the source so that the original meaning is lost, while our method is better at preserving the content of the source sentence. Next, we quantitatively examine the outputs from UNMT and our method by comparing the F1 measure of words bucketed by their syntactic tags. We use the open-sourced compare-mt tool (Neubig et al., 2019), and the results are shown in Figure 4. Our system outperforms UNMT in all word categories. In particular, our system is much better at generating nouns, which likely leads to better content preservation.
[Figure 4: Word F1 score by POS tag (CC, DT, IN, JJ, NN, NNP, NNS, PRP, RB, TO, VB, VBP, VBZ, other) for UNMT and our method.]
Table 3: Examples for the author imitation task (Shakespeare to Modern).
(1) Source: Not to his father's . | Reference: Not to his father's house . | UNMT: Not to his brother . | Ours: Not to his father's house .
(2) Source: Send thy man away . | Reference: Send your man away . | UNMT: Send an excellent word . | Ours: Send your man away .
(3) Source: Why should you fall into so deep an O ? | Reference: Why should you fall into so deep a moan ? | UNMT: Why should you carry so nicely , but have your legs ? | Ours: Why should you fall into so deep a sin ?
Greedy vs. Sample-based Gradient Approximation. In our experiments, we use greedy decoding from the inference network to approximate the expectation required by the ELBO, which is a biased estimator. The main purpose of this approach is to reduce the variance of the gradient estimator during training, especially in the early stages when the variance of sample-based approaches is quite high. As an ablation experiment on the sentiment transfer task we compare greedy and sample-based gradient approximations in terms of both train and test ELBO, as well as the task performance corresponding to the best test ELBO. After the model is fully trained, we find that the sample-based approximation has low variance. With a single sample, the standard deviation of the ELBO is less than 0.3 across 10 different test repetitions. All final reported ELBO values are computed with this approach, regardless of whether the greedy approximation was used during training. The reported ELBO values are the evidence lower bound per word. Results are shown in Table 4, where the sampling-based training underperforms on both ELBO and task evaluations. 5.4 COMPARISON OF GRADIENT PROPAGATION METHODS As noted above, to stabilize the training process, we stop gradients from propagating to the inference network from the reconstruction loss. Does this approach indeed better optimize the actual probabilistic objective (i.e. ELBO) or only indirectly lead to improved task evaluations? In this section we use sentiment transfer as an example task to compare different methods for propagating gradients, evaluating both ELBO and task performance. Specifically, we compare three different methods: • Stop Gradient: Gradients from the reconstruction loss are not propagated to the inference network. This is the method we use in all previous experiments.
• Gumbel Softmax (Jang et al., 2017): Gradients from the reconstruction loss are propagated to the inference network with the straight-through Gumbel estimator. • REINFORCE (Sutton et al., 2000): Gradients from the reconstruction loss are propagated to the inference network with the ELBO as a reward function. This method has been used in previous work for semi-supervised sequence generation (Miao & Blunsom, 2016; Yin et al., 2018), but often suffers from instability issues. We report the train and test ELBO along with task evaluations in Table 5, and plot the learning curves on the validation set in Figure 5. (Footnote: We remove REINFORCE from this figure since it is very difficult to stabilize its training and obtain reasonable results; e.g. its ELBO value is much worse than the others in Table 5.) While being much simpler, the stop-gradient trick produces a superior ELBO to both Gumbel Softmax and REINFORCE. This result is counter-intuitive: it suggests that stopping gradients helps better optimize the likelihood objective under our probabilistic formulation, in comparison with other optimization techniques that do propagate gradients. A likely explanation is that, as a gradient estimator, stop-gradient is clearly biased but has substantially reduced variance. In comparison with other techniques that offer reduced bias but extremely high variance when applied to our model class (which involves discrete sequences as latent variables), stop-gradient actually leads to better optimization of our objective because it achieves a better balance of bias and variance overall. 6 CONCLUSION We propose a probabilistic generative formulation that unites past work on unsupervised text style transfer. We show that this probabilistic formulation provides a different way to reason about unsupervised objectives in this domain. Our model leads to substantial improvements on five text style transfer tasks, yielding bigger gains when the styles considered are more difficult to distinguish. ACKNOWLEDGEMENT The work of Junxian He and Xinyi Wang is supported by the DARPA GAILA project (award HR00111990063) and the Tang Family Foundation respectively. The authors would like to thank Zichao Yang for helpful feedback about the project. A APPENDIX A.1 MODEL CONFIGURATIONS. We adopt the following attentional encoder-decoder architecture for UNMT, BT+NLL, and our method across all the experiments: • We use word embeddings of size 128. • We use a 1-layer LSTM with a hidden size of 512 as both the encoder and the decoder. • We apply dropout to the readout states before the softmax with a rate of 0.3. • Following Lample et al. (2019), we add a max pooling operation over the encoder hidden states before feeding them to the decoder. Intuitively, the pooling window size controls how much information is preserved during transduction. A window size of 1 is equivalent to a standard attention mechanism, and a large window size corresponds to no attention. See Appendix A.2 for how to select the window size. • There is a noise function for the UNMT baseline in its denoising autoencoder loss (Lample et al., 2017; 2019), which is critical for its success. We use the default noise function and noise hyperparameters of Lample et al. (2017) when running the UNMT model. For BT+NLL and our method we found that adding the extra noise to the self-reconstruction loss (Eq. 4) is only helpful when the two domains are relatively divergent (the decipherment and related language translation tasks), where the language models play a less important role.
Therefore, we add the default noise from UNMT to Eq. 4 for the decipherment and related language translation tasks only, and do not use any noise for the sentiment, author imitation, and formality tasks. A.2 HYPERPARAMETER TUNING. We vary the pooling window size over {1, 5}, and the decaying patience hyperparameter k for the self-reconstruction loss (Eq. 4) over {1, 2, 3}. For the baselines UNMT and BT+NLL, we also try the option of not annealing the self-reconstruction loss at all, as in the unsupervised machine translation task (Lample et al., 2018). We vary the weight λ for the NLL term (BT+NLL) or the KL term (our method) over {0.001, 0.01, 0.03, 0.05, 0.1}. A.3 SENTIMENT TRANSFER EXAMPLE OUTPUTS We list some examples from the sentiment transfer task in Table 6. Notably, the BT+NLL method tends to produce extremely short and simple sentences. A.4 REPETITIVE EXAMPLES OF BT+NLL In Section 5 we mentioned that the baseline BT+NLL has a low perplexity for some tasks because it tends to generate overly simple and repetitive sentences. From Table 1 we see that two representative tasks are sentiment transfer and formality transfer. In Appendix A.3 we showed some examples for sentiment transfer; next we show some repetitive samples from BT+NLL in Table 7.
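To make the gradient-propagation comparison of Section 5.4 concrete, the following minimal sketch contrasts the stop-gradient scheme with the straight-through Gumbel-softmax alternative at the point where transduced tokens are handed to the reconstruction decoder. It is an illustration under our own naming and interface assumptions, not the implementation used for the reported results.

```python
import torch
import torch.nn.functional as F

def transduce_tokens(logits: torch.Tensor, mode: str = "stop_gradient", tau: float = 1.0):
    """Turn inference-network logits of shape (batch, length, vocab) into
    (soft) one-hot token vectors fed to the reconstruction decoder."""
    if mode == "stop_gradient":
        # Greedy tokens; argmax (plus an explicit detach) means the reconstruction
        # loss sends no gradient back into the inference network.
        tokens = logits.argmax(dim=-1)
        return F.one_hot(tokens, num_classes=logits.size(-1)).float().detach()
    elif mode == "gumbel":
        # Straight-through Gumbel-softmax: discrete forward pass,
        # biased low-variance gradients on the backward pass.
        return F.gumbel_softmax(logits, tau=tau, hard=True, dim=-1)
    else:
        raise ValueError("REINFORCE requires a score-function estimator instead")
```

In the stop-gradient branch the reconstruction loss cannot reach the inference network at all, which is exactly the biased but low-variance behavior discussed in Section 5.4.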
1. What is the focus of the paper regarding text style transfer? 2. What are the strengths of the proposed approach, particularly in comparison to prior works? 3. Do you have any questions or concerns about the encoder-decoder model used in the paper? 4. Can you provide more information or explanations regarding the baseline of BT+NLL and its effectiveness? 5. Are there any specific examples or cases where the BT-NLL model performed poorly, and what might be the reasons behind it?
Review
Review Summary: This paper introduces a probabilistic generative model for unsupervised style transfer of text. The approach introduced in the paper does not require paired training data. An encoder-decoder model is trained to transfer text from one style to another and back. Review: This work is very well-written and easy to follow. The contribution is clearly articulated: while there are probabilistic generative models for transfer in the literature (Shen et al. does include one), they don't perform as well. Ablation studies further confirm the need for the particular kind of parameter sharing used in the model in the paper. Great results are shown on 5 text transfer problems. Clarifications and improvements: Just for clarity, in the last paragraph on page 4: it says two encoder-decoder models are learnt, but isn't the idea that there is effectively only one encoder and one decoder learned, which are just put together in different ways during training? I'm also curious why the BT+NLL baseline was so strong. Does having the loss of a language model work that much better than the regular entropy term? I would also like, if possible, for you to share some of the repetitive examples created by BT+NLL which explain its low PPL.
ICLR
Title Mid-Vision Feedback Abstract Feedback plays a prominent role in biological vision, where perception is modulated based on agents’ evolving expectations and world model. We introduce a novel mechanism which modulates perception based on high level categorical expectations: Mid-Vision Feedback (MVF). MVF associates high level contexts with linear transformations. When a context is ”expected” its associated linear transformation is applied over feature vectors in a mid level of a network. The result is that mid-level network representations are biased towards conformance with high level expectations, improving overall accuracy and contextual consistency. Additionally, during training mid-level feature vectors are biased through introduction of a loss term which increases the distance between feature vectors associated with different contexts. MVF is agnostic as to the source of contextual expectations, and can serve as a mechanism for top down integration of symbolic systems with deep vision architectures. We show the superior performance of MVF to post-hoc filtering for incorporation of contextual knowledge, and show superior performance of configurations using predicted context (when no context is known a priori) over configurations with no context awareness. 1 INTRODUCTION In most contemporary computer vision architectures information flows in a single direction: from low-level of pixels up to high level abstract concepts (e.g., object categories) - such architectures are termed feed-forward architectures. In general, each successive layer of the network contains more abstract representations than the previous, and the representational hierarchy mirrors the architectural hierarchy. It is also possible to introduce top-down connections into the network architecture, introducing high level information into processes involving lower levels of abstraction in a process of feedback. Feedback plays a primary role in biological vision; in fact, the majority of neural connections in the visual cortex are top-down, rather than bottom-up, connections (Markov et al., 2014).
These topdown connections are thought to convey information of higher level expectation, and neurons of the visual cortex use both higher level expectation as well as lower level visual information in producing their representations. Expectations in biological systems arise from continuous engagement with the environment. In Computer Vision, this is reflected in the paradigm of Active Vision (Bajcsy, 1988; Fermüller & Aloimonos, 1995), where perception is framed as an active problem involving evolving world models. The task of producing mid-level visual representations Teo et al. (2015a;b); Xu et al. (2012); Nishigaki et al. (2012) from low level input is under-constrained - many plausible mid-level interpretations may be consistent with input. To give an intuition for how understanding of context can impact perception of mid-level features consider Figure 1 - characteristics of shrews and kiwi differ, but may be similar enough to be confused without context. Top-down feedback - from high level context to mid-level visual features - provides a “map” for mid-level processes, constraining it towards high level consistency. 1Code will be available at: https://github.com/maynord/Mid-Vision-Feedback Introduction of contextual knowledge through feedback is superior to post-hoc application of contextual knowledge, e.g. through discarding interpretations (classifications, here) which are not context consistent. We demonstrate this point empirically. Interpretations selected after post-hoc filtering for context consistency will still be built upon underconstrained mid-level features. Furthermore, in contrast to post-hoc filtering, feedback naturally allows for detection of out-of-context objects, as feedback functions through biasing of visual representations rather than filtering. It is valuable for methods to allow for out-of-context detections, even when biasing against them, as out-of-context objects on occasion appear (e.g., a tree in an office setting). CNNs have a natural tendency towards decoupled representations - representations with a tendency for feature vector angle to correspond to characteristic type (e.g., ”fuzzy”), and for feature vector magnitude to correspond to characteristic variation or degree (Liu et al., 2018) (e.g., ”very fuzzy” / ”not fuzzy”) (See Figure 2 for an illustration). This opens up a couple of possibilities in terms of directly manipulating feature representations: 1) We can differentiate between axes with different associations to high level contexts, 2) we can control magnitudes of characteristics through amplifying and dampening axes associated with those characteristics. That is, w.r.t. point #1, as CNNs produce representations which are, to a degree, separated by angle, certain axes will be more associated with some higher level contexts over others. Also, w.r.t. point #2, amplifying characteristics associated with a higher level context increases the likelihood of interpreting input as conforming to that context; dampening characteristics associated with that context reduces the likelihood of interpreting input as conforming to that context. We present a principled method to feedback - Mid-Vision Feedback (MVF), illustrated in Figure 3 - allowing the biasing of mid-level feature representations in networks such as CNNs towards conformance with high level categorical expectations. This approach is comprised of two components: 1) linear transforms (affine transformations), and 2) orthogonalization bias. 
Affine transformations enable direct control over the feature vectors at the injection level - the level into which feedback is being inserted. If these vectors have been trained with a bias towards orthogonality w.r.t. contexts, then this allows for affine transformations to manipulate features associated with context presence or absence with less impact on other features. The orthogonalization bias is introduced to increase the independence between contexts, so they can be manipulated with less interference. This bias is introduced at the injection level. This does not negatively impact the representational power or performance of the base network. This orthogonality bias is introduced across contexts - e.g., mid-level feature vectors associated with animate contexts can be biased towards orthogonality w.r.t. mid-level features associated with inanimate contexts. Due to the resulting greater angular separation between features associated with different contexts, this biasing allows greater control over facets of mid-level representation which are meaningful to higher level contexts. MVF then functions as follows. During runtime a high level context expectation is associated with input. This expectation is used in biasing mid-level visual features through use of an affine transformation associated with the context of that expectation. This selects for characteristics associated with this context. These affine transformations are better enabled as a consequence of the disentanglement of such characteristics at the injection level, effected through introduction of the orthogonalization bias during training. Feedback then enables a synergy between high level categorical interpretations and mid-level visual feature representations, bridging the signal-symbol gap in both directions. This approach to incorporation of context expectations is controlled. This differs from an approach of connecting the upper level fully connected layers of a network directly to lower level convolutional levels in a scheme which includes neither categorical representations nor biasing w.r.t. said categorical representations. MVF both employs feedback from categorical knowledge and is agnostic w.r.t. the source of that categorical knowledge - i.e., it is not a requirement that context expectations be produced from the same network. As a consequence of this, MVF allows for interfacing with larger symbolic systems - e.g., models of scenes employing graphical models over scene elements and categories. This topdown synergy across the signal-symbol gap opens up a wide range of applications. The rest of this paper is structured as follows: In Section 2 we cover related work; in Section 3 we detail methods; in Section 4 we cover experiments; and, in Section 5 we conclude. 2 RELATED WORK 2.1 BIOLOGY, FEEDBACK, AND PARALLELS TO COMPUTER VISION Previous works ( (Markov et al., 2014), (Gilbert & Sigman, 2007), Kreiman & Serre (2020), (Gilbert & Li, 2013), and (Paneri & Gregoriou, 2017)) have explored the importance of feedback connections in biological sensory perception. Further work ((Liao & Poggio, 2016) and (van Bergen & Kriegeskorte, 2020)) draw connections between feedback in computer vision architectures and the primate visual cortex. (Tang et al., 2018) show that feedforward CNNs are not robust to occlusion, unlike in human perception, but that adding recurrence improves occlusion robustness. 
(Lotter et al., 2016) introduce PredNet, a network based on predictive coding, and demonstrate benefits on the task of self-supervised frame prediction. There is good reason to believe that modeling characteristics of biological vision in computer vision architectures will benefit computer vision (e.g., (Medathati et al., 2016; Teo et al., 2015c)). For example, (Linsley et al., 2020b) demonstrate a network with top-down connections which aligns with human perception of visual illusions, where feedback aids in prioritizing object boundary contours over simple edge contours. (Linsley et al., 2020a) shows how recurrent hierarchical feedback model can improve segmentation. (Konkle & Alvarez, 2020) introduce instance-prototype contrastive learning, and show that self-supervised models can learn representations which are more brain-like than supervised models. (Li et al., 2021) introduce Contrastive Clustering, showing a benefit to instance- as well as cluster-level contrastive loss in clustering. (Long et al., 2018) demonstrate large scale organization of the cortex based on mid-level visual features (below the level of object recognition), including those associated with animacy vs. inanimacy. (Jagadeesh & Gardner, 2022) argue that representations in category selective regions of the visual cortex encode a basis representation for texture, rather than objecthood representations. (Harrington & Deza, 2021) demonstrate that constraining networks to be robust to adversarial input produces network representations more in-line with human visual perception, and argue for the use of texture summary statistic representations. 2.2 FEEDBACK IN EXISTING COMPUTER VISION METHODS The conventional approach to feedback in computer vision is the use of recurrent connections. (Caswell et al., 2016) introduce recurrent connections into shallow CNN architecture for image classification. (Pinheiro & Collobert, 2014) employ recurrency over convolutions for the purpose of enabling lateral information flow in the task of segmentation. (Zamir et al., 2016) instantiate feedback through an RNN architecture which iteratively refines prediction categories from coarse to specific. Alternatives to conventional recurrent connections for feedback include (Hu & Ramanan, 2016), which explore convolutions with hierarchical rectified Gaussians to enable top-down as well as bottom-up information flow, and apply them to the task of keypoint localization under occlusion. Additionally, (Yao et al., 2012) apply a graphical model over scene representations, allowing higher and lower level decisions to influence each other. 3 METHODS With MVF we seek a feedback mechanism which allows us to directly bias lower level feature representations based on categorical higher level context expectations. This involves top-down interaction across two levels of abstraction: 1) high level contexts ci ∈ C, 2) mid-level features fi ∈ F . The structure of MVF is illustrated in Figure 3, and the loss and training formulation given in Section 3.1. Context expectations sit at a level of abstraction above the classes of network output, and are used in selecting affine transformations placed above the output of injection levels. When applied to injection level output, the affine transformations bias injection level feature representations towards conformance with the associated context expectation, as illustrated in Figure 4. 
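To make the injection mechanism concrete, the following is a minimal sketch of a context-indexed affine transformation applied at the injection level. The module name, the (batch, positions, channels) feature layout, and the inclusion of a bias term are our own assumptions for illustration and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class FeedbackInjection(nn.Module):
    """Applies a per-context affine transform to mid-level feature vectors.

    Features are assumed reshaped to (batch, positions, channels) at the
    injection level; each context owns one affine map over the channel axis.
    """
    def __init__(self, num_contexts: int, channels: int, noise_std: float = 0.01):
        super().__init__()
        # Initialize each affine map to identity plus small random noise,
        # following the second-stage initialization described in Section 3.1.
        weight = torch.eye(channels).repeat(num_contexts, 1, 1)
        weight = weight + noise_std * torch.randn_like(weight)
        self.weight = nn.Parameter(weight)                      # (contexts, C, C)
        self.bias = nn.Parameter(torch.zeros(num_contexts, channels))

    def forward(self, feats: torch.Tensor, context_id: torch.Tensor) -> torch.Tensor:
        # feats: (batch, positions, channels); context_id: (batch,) long tensor
        w = self.weight[context_id]          # (batch, C, C), selected by expected context
        b = self.bias[context_id]            # (batch, C)
        return torch.bmm(feats, w) + b.unsqueeze(1)
```

At test time the context expectation (ground truth, predicted, or supplied by an external symbolic system) simply selects which row of weights is applied.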
Injection level output is made more amenable to manipulation according to context through introduction of a contrastive loss LO, introducing a bias towards orthogonalization across context. Affine transformations are applied over the features of the injection level for the purpose of amplifying or dampening certain characteristics. This process aligns mid-level representations towards conformance with higher level context expectation. The affine transformations are made more effective through the disentanglement of characteristics at the injection level produced by the orthogonalizing bias. During test time the CNN runs as a single stream (without the connection across the streams of multiple samples which the contrastive loss introduces), and a context expectation selects the affine transformation to apply over the feature vectors of the injection level. This expectation can come from any source - in Section 4 we compare performance across network produced context expectations and ground truth context expectations. 3.1 LOSS AND TRAINING Training is broken into two stages, as detailed in Table 1. In the first, the base network is trained on its own, and features are biased towards orthogonality at the injection levels. In the second stage, the learning rate for the network parameters is reduced, and affine transformations are added to the injection levels according to the context categories of input samples, initialized to identity matrices with added random noise, and given their own optimizers and learning rate. In the first stage, gradients backpropogate through the base network only, bypassing the affine transformations; in the second stage gradients pass through both the base network and the affine transformations. In each stage we employ batches containing equal proportions of samples belonging to each context. Training is broken into two stages for a few reasons: 1) this division allows the possibility of using pretrained networks and training affine transformations in injection levels with no modification to the base network (λ = 0 and η N2 = 0), 2) allows fine-tuning of pretrained networks (λ > 0 and/or η N2 > 0), 3) we find that starting training of affine transformations after the feature representations have had a chance to converge helps the affine transformations train - intuitively, the affine transformations have to adapt to less of a moving target. We allow the network parameters to continue to train after affine transformations have been introduced - at a reduced learning rate - as we find that this benefits performance. To illustrate this process, consider a batch containing a horse and a desk image, horse belongs to animate while desk belongs to inanimate context. Both the images are fed in the same batch to the network during training. When training is in Stage 1, each image passes through the base network, sans affine transformations. The feature representations of the injection levels for horse and desk are then connected to each other via LO, and biased towards orthogonality with respect to each other. When training progresses to Stage 2, the LO contrastive connection is removed, and affine transformations are inserted according to context expectation. Both network and affine transformation parameters are then updated according to gradients that pass through both affine transformations and network parameters. 
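The two-stage schedule just described can be summarized in a condensed training sketch. It assumes a model that returns (class logits, injection-level features) and accepts an optional injection callable, reuses the FeedbackInjection sketch above, and takes an orthogonalization loss like the one sketched after Eq. 1 below; all names and default values are placeholders (loosely following the shallow-CNN settings in Appendix H), not the authors' code.

```python
import torch
import torch.nn.functional as F

def train_mvf_two_stage(model, injection, ortho_loss, batches,
                        stage1_steps, stage2_steps,
                        lr1=1e-3, lr2=5e-5, lr_affine=1e-3, lam=1.0):
    """Condensed two-stage MVF schedule. `batches` yields (x, y, ctx) tuples
    indefinitely; `model(x)` returns (class_logits, mid_feats)."""
    # Stage 1: base network only, orthogonalization bias on mid-level features.
    opt = torch.optim.Adam(model.parameters(), lr=lr1)
    for _, (x, y, ctx) in zip(range(stage1_steps), batches):
        logits, mid_feats = model(x)                      # no affine transforms yet
        loss = F.cross_entropy(logits, y) + lam * ortho_loss(mid_feats, ctx)
        opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: reduced base learning rate; context-indexed affines get their own optimizer,
    # and the contrastive connection across samples is dropped.
    opt = torch.optim.Adam(model.parameters(), lr=lr2)
    opt_aff = torch.optim.Adam(injection.parameters(), lr=lr_affine)
    for _, (x, y, ctx) in zip(range(stage2_steps), batches):
        logits, _ = model(x, inject=lambda f: injection(f, ctx))
        loss = F.cross_entropy(logits, y)
        opt.zero_grad(); opt_aff.zero_grad()
        loss.backward()
        opt.step(); opt_aff.step()
```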
We find that the angles between mid-level features associated with higher level contexts can be increased to a much greater degree than they would be without this orthogonalizing loss, as measured by cosine similarity, without appreciable negative impacts on performance. See Figure 6 in the Appendix for an illustration of the extent to which this cosine loss is reduced when introducing this orthogonalizing loss. $L_O(F, Y) = \frac{1}{|S_{F,Y}|} \sum_{(f_1, f_2) \in S_{F,Y}} \max\!\left(0, \frac{f_1 \cdot f_2}{\|f_1\| \|f_2\|}\right)$ (1), where $S_{F,Y} = \{(f_1, f_2) \mid f_i \sim U(F_{c_i}),\, Y_C(f_1) \neq Y_C(f_2),\, I(f_1) = I(f_2)\}$ (2) and $F_{c_i} = \{f \mid Y_C(f) = c_i\}$ (3). Here $|S_{F,Y}|$ is a method hyper-parameter, $Y_C(f)$ is the context of the sample from which feature vector $f$ was produced, $I(f)$ is the injection level from which $f$ was taken, and $U(A)$ is the uniform probability distribution over elements of $A$. With $L_O$ we wish to separate the angles of features associated with different contexts, in order to better enable manipulation through affine transformations. This can be seen as an exacerbation of CNNs' natural tendency towards decoupled representations - where feature type has a tendency to group according to feature vector angle - through a structuring of characteristics' feature vector angles according to higher level context. We do this through a cosine loss ($\frac{A \cdot B}{\|A\| \|B\|}$) applied to the features of the injection level. As we wish to control the network representation in terms of context expectation, we apply this loss across context. Figure 7 in the Appendix illustrates the behavior produced by the orthogonalizing bias. 4 EXPERIMENTS Here we cover experiments demonstrating the utility of our feedback method. We perform evaluations over CIFAR100 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and the Caltech-UCSD Birds dataset (Birds) (Wah et al., 2011), all with multiple context splits - these datasets are described in Section B of the Appendix, and the splits are derived from the CIFAR-100 superclasses and the attribute labels provided with the CUB dataset. We base splits on this information in order to evaluate over standard divisions in the data. We evaluate our method using both ground truth context expectations (GT Feedback), as well as context expectations derived from a network of the same structure as the base network (Pred Feedback). All experiments are conducted using a 6-layer CNN base architecture, a VGG-16 network (Simonyan & Zisserman, 2014), and a Transformer model (Tu et al., 2022), with variants including added affine transformations for feedback runs. The hyperparameters of the modifications made to each of the base architectures for feedback incorporation are described briefly in Section H of the Appendix, and in detail in Section I of the Appendix. We show confusion matrices for a 10-class context split over CIFAR100 for both ground truth context expectations and network predicted context expectations, in Figures 5a and 5b, respectively. In Sections 4.1 and 4.2 evaluation involves comparisons to base architectures tuned to maximize base architecture performance, and Section 4.3 presents an evaluation where both base architecture and feedback implementation are tuned to maximize the benefit of feedback, giving insight into the extent of the potential benefit of feedback.
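Before turning to the individual evaluations, here is a minimal sketch of the orthogonalization bias of Eq. 1, using simple rejection sampling to approximate the cross-context pair set $S_{F,Y}$. The function name, the per-vector feature layout, and the default number of sampled pairs are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def ortho_loss(mid_feats: torch.Tensor, ctx: torch.Tensor, num_pairs: int = 25) -> torch.Tensor:
    """Clamped cosine similarity between injection-level feature vectors drawn
    from samples of *different* contexts (Eq. 1).

    mid_feats: (batch, channels) feature vectors taken at one injection level.
    ctx:       (batch,) long tensor of context ids for each sample.
    """
    losses = []
    for _ in range(num_pairs):
        i, j = torch.randint(0, mid_feats.size(0), (2,))
        if ctx[i] == ctx[j]:
            continue  # only pairs that cross contexts contribute (Eq. 2)
        cos = F.cosine_similarity(mid_feats[i], mid_feats[j], dim=0)
        losses.append(torch.clamp(cos, min=0.0))  # max(0, cos): push towards orthogonality
    if not losses:
        return mid_feats.new_zeros(())
    return torch.stack(losses).mean()
```

Minimizing this term only penalizes positive cosine similarity across contexts, so within-context structure is left free to organize itself.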
4.1 ORACLE EVALUATION The ground truth context model (GT Feedback) assumes access to the ground truth context belonging to each input sample during training and test time, using contextual knowledge to index into the affine transformations for application over mid-level features as described in Section 3. Here we evaluate the extent to which our complete framework outperforms 1) a base architecture mirroring that of our framework, and 2) the same base architecture where a hard masking operation using the ground truth contextual knowledge is applied over its class-level outputs. The GT Masking baseline corresponds to the same base architecture where a hard mask $c_g \in \{0, 1\}^k$ is applied over the output of the network, where $k$ corresponds to the number of classes and $c_g$ takes on a value of 1 for within-context classes and 0 for out-of-context classes. Results for ground truth context evaluation are shown in Table 3.
[Figure 5: Confusion matrices for our simple architecture (see the Appendix) with and without feedback (orthogonalization bias and application of affine transformations), subfigures (a) and (b) respectively, over the Vehicles 1 vs. Vehicles 2 split of CIFAR100. (a) With orthogonalization bias and affine transformations. (b) With neither orthogonalization bias nor affine transformations. The first 5 rows correspond to the first context, and the next 5 rows correspond to the second context. Note that cross-context confusion (quadrants 1 and 3 of the confusion matrices) is significantly reduced with feedback.]
4.2 PREDICTED CONTEXT EVALUATION Table 2 shows the performance of base architectures with and without feedback, when context is predicted rather than provided. Context prediction is performed through the addition of a second logits classification head to the base networks. This head is trained in conjunction with the object classification head using ground truth context labels. Performance for context prediction is shown in Table 6 in the Appendix. 4.3 MAXIMIZING FEEDBACK GAIN We here present an evaluation where we tune to maximize the margins of feedback performance over base model performance, tuning both base model and feedback parameters. This evaluation provides insight into the degree to which feedback is capable of improving performance. We tune over the following hyperparameters to maximize the margins between feedback models relying on ground truth and the base models without feedback: first stage learning rate, weight decay, second stage learning rate, and affine learning rate. See the Appendix for precise values. 4.4 DISCUSSION We evaluate the utility of feedback under two scenarios: 1) where context is known (Table 3), and 2) where context is not known (Table 2). This provides insight both into the performance of the feedback mechanism in ideal cases (providing an upper bound on the utility of feedback), and into realistic cases with imperfect information derived from the same network. We observe consistently positive results in each case. We also compare feedback to a strong alternative of masking out (excluding) predictions not associated with the ground truth context. Consistent with Figure 5, the superior performance of feedback over masking demonstrates that it improves modeling beyond simply removing context-inconsistent predictions. Results were presented both in comparisons where base architectures were tuned to maximize accuracy, and in comparisons where base and feedback models were tuned to maximize the benefit of feedback.
Margins are much higher when tuning to maximize the benefit of feedback, and give insight into the extent to which feedback is capable of providing benefits in the best case. 5 CONCLUSION We have presented an argument for the utility of feedback in vision. Feedback is 1) prominent in biological vision, with the majority of neural connections in the cortex consisting of feedback connections, 2) allows better constraining of under-constrained processes of abstraction, 3) allows for the online adaptation of vision systems towards alignment with high level understanding of the world. We leverage the fact that CNNs have a tendency towards decoupled representations, exacerbating the separation of mid-level features associated with different higher level contexts. This allows better direct manipulation of the level at which feedback is introduced, minimizing collateral effects on characteristics not being selected for. In contrast to post-hoc filtering of interpretations for consistency with context expectations, MVF allows for cross-context detections and produces higher accuracies. MVF involves a top-down bridging of the signal-symbol gap, making it applicable to a range of applications. In the future this work will be extended to localization, e.g. object detection or semantic segmentation, as well as used in embodied contexts Fermüller & Maynord (2022) with an active agent. B DATASETS B.1 CIFAR100 We adopt CIFAR100 for the 1) high-levels of visual ambiguity due to low resolution, 2) existence of several distinct ”superclasses” consisting of a roughly equal number of classes, and 3) the crosscontext confusion across classes highly similar in appearance (e.g., sharks and dolphins). We adopt the official training and test split for the CIFAR100 dataset. Each class contains exactly 500 training images and 100 testing images, with each superclass consisting of 5000 training images and 1000 testing images. We use the CIFAR100 superclasses in constructing context splits. Split 1: Vehicles 1 vs. Vehicles 2, Split 2: Household Devices vs. Furniture, Split 3: Aquatic Mammals vs. Fish. For full class breakdown see Appendix Section K. B.2 IMAGENET We adopt ImageNet for several of the aforementioned reasons above, as well as its generality in that it spans 1000 unique classes. Each class contains variable number of images - we designate 80% of the images in each class for training, but only 2% for testing due to the computational cost incurred by the high number of images and the need for frequent testing. We employ the following context split over ImageNet, designed to be similar to CIFAR100 splits, the full class breakdown of which is given in Appendix: Split 1: Household Devices vs. Furniture, Split 2: Aquatic Mammals vs. Fish, Split 3: Vehicles 1 vs. Vehicles 2. C BASE CNN Figure 8 illustrates the base CNN model (apart from VGG and ViT), for which performance is reported in Tables in the main paper. D CONTEXT PREDICTION See Table 6 for accuracy on context prediction used in Section 4.2 and Table 2; see Table 7 for accuracy on context prediction used in Section 4.2 and Table 2. E IMAGENET-C EVALUATION Table 8 provides accuracy of testing on ImageNet-C, with models trained for Maximizing Feedback Gain over standard ImageNet. Accuracy trends are consistent with trends presented in Section 4.3. ImageNet-C consists of 75 common corruptions applied over ImageNet images with the intent of degrading classifier performance. 
We observe that drops in accuracy with respect to the original ImageNet dataset range between values of 14% and 18%. However, performance margins between feedback and base models are overall maintained when testing over ImageNet-C. F DECOUPLED REPRESENTATIONS CNNs have a natural tendency towards decoupled representations. These are representations where characteristics have a tendency to be represented in such a way that feature vector angle corresponds to characteristic type, while feature vector magnitude corresponds to characteristic variation or degree (Liu et al., 2018). G ORTHOGONALIZING LOSS Figure 7 illustrates feature vector projections of the injection level under different degrees of orthogonalizing loss. H MODELS Here we describe the architectures in which we incorporate feedback. Each model consumes 1 GPU during train and test time. For all feedback experiments we choose a λ (intermediate loss scaling) of 1.0 (otherwise set to 0.0). Shallow CNN: This model comprises a 6-layer CNN architecture, shown in Appendix, consisting of 3 by 3 shaped kernels, max-pooling applied over every other layer, and dropout (p = 0.375, p = 0.1) applied over the penultimate fully connected layer and after each convolution operation, respectively. The affine transformation is applied after the second to last convolution operation, though we observe high performance inserting the affine transformations anywhere throughout the second half of the architecture. We train the first stage for roughly 5 million iterations for all splits. A learning rate of 0.001 is chosen for the training of the base network during the first stage, and a learning rate of 5 ∗ 10−5 is chosen for the learning rate of the base network during the second stage, whereas the affine transformation learning rate is set to 1 ∗ 10−3. For the Maximizing Feedback Gain hyperparameters, we adopt a learning rate of 2e− 4, a weight decay of 0.0, a second stage learning rate of 1e− 6, and an affine learning rate of 0.005. VGG: Here we adopt a VGG-16 network with pre-trained weights over ImageNet. The VGG network consists of 16 layers consisting of convolution and max-pooling operations. The affine transformation is applied after the eleventh convolution operation, though we observe high performance inserting the affine transformations anywhere throughout the last six layers. We train the first stage for roughly 1.5 million iterations for all splits, until smooth convergence. A learning rate of 5∗10−6 is chosen for the training of the base network during the first stage, and a learning rate of 2.5 ∗ 10−6 is chosen for the learning rate of the base network during the second stage, whereas the affine transformation learning rate is set to 2.5 ∗ 10−3. For the Maximizing Feedback Gain hyperparameters, we adopt a learning rate of 5e− 5, a weight decay of 0.00075, a second stage learning rate of 5e− 6, and an affine learning rate of 0.0005. Visual Transformer: Here we adopt a variant of the Visual Transformer models (Tu et al., 2022), a general-purpose vision transformer that outperforms many related visual transformer architectures while being easy to train. The affine transformation is applied immediately after the third to last attention block. We train the first stage for roughly 2.0 million iterations for all splits, until smooth convergence. 
A learning rate of 1 ∗ 10−3 is chosen for the training of the base network during the first stage, and a learning rate of 1.0∗10−6 is chosen for the learning rate of the base network during the second stage, whereas the affine transformation learning rate is set to 1 ∗ 10−3. For the Maximizing Feedback Gain hyperparameters, we adopt a learning rate of 2e− 4, a weight decay of 0.0, a second stage learning rate of 1e− 5 and an affine learning rate of 0.0001. I PARAMETERS We here list parameters’ tuned values not introduced in the main paper: 1. Image size: 32× 32 for CIFAR100, 224× 224 for ImageNet. 2. Model input image size: 32× 32 for 6-layer CNN, 224× 224 for VGG16. Images resized using bilinear interpolation. 3. Size of feature set selected for orthogonalization: 25. 4. Batch size: 256 (CIFAR100 splits), 64 (ImageNet splits). 5. Data augmentations: Random rotations (15 degrees), random resized crops, Random hori- zontal flips. 6. Feedback Base Model: ADAM’s optimizer, weight decay of 7.5×10−4 for both stages and both models. 7. Affine Transformation Optimizer: ADAM’s optimizer, affine transformation learning rate of 0.001 for second stage training of both models. 8. Context Model: ResNet18 model with pretrained weights over ImageNet and learning rate of 0.001 using SGD optimizer. J CONTEXT LABEL, AFFINE LEARNING RATE, ORTHOGONALIZING LOSS ABLATION In Table 9, we evaluate the effect on performance due simply to the introduction of the affine transformation (and the random noise introduced by its introduction), but not due to the context training labels. We report numbers from experiments where the affine operations are included in the network but: affine transformations are not trained, the context prediction head is not trained, and orthogonalizing loss is not employed. These runs are compared against identical runs where the affine transformation is not included. We observe that runs with affine transformations outperform the results of the base models (where no affine transformations are included), for two main possible reasons: 1) The drop in learning rate during the second stage of training allows accuracy to continue converging after possible plateauing, and 2) The introduction of a randomly initialized affine during the second stage introduces stochasticity potentially useful during training. This increase in performance is small in comparison to the increase due to incorporation of feedback. K DATA SPLITS We derive context splits based on the superclass structure provided with CIFAR-100 (over both CIFAR-100 and ImageNet), and the attribute ontology provided with the CUB dataset. We base splits on this information in order to evaluate over standard divisions in the data. K.1 CUB-200-2011 We adopt the Caltech-UCSD-Birds dataset for several of the aformentioned reasons above, in particular for the high cross-context confusion across different species of birds highly similar in appearance. It consists of 11,788 images with 200 classes corresponding to bird species. Like the Imagenet dataset, we designate 80% of the dataset for training and 20% for testing. We employ the following 3 splits over the CUB dataset, grouping images into contexts based on the listed attributes provided with the CUB dataset: 1. Migration behavior (1, 2, 3) 2. Trophic level (Carnivore, Herbivore, Omnivore) 3. 
Primary lifestyle (Aerial, Aquatic, Generalist, Insessorial, Terrestrial)
K.2 SPLITCIFAR
CIFAR100 Dataset Sub-Splits (Split / Group / Classes):
Split 1 | Vehicles 1: Bicycle, Bus, Motorcycle, Pickup truck | Vehicles 2: Lawn mower, Rocket, Streetcar, Tank, Tractor
Split 2 | Household Devices: Clock, Keyboard, Lamp, Telephone, Television | Furniture: Bed, Chair, Couch, Table, Wardrobe
Split 3 | Aquatic mammals: Beaver, Dolphin, Otter, Seal, Whale | Fish: Aquarium fish, Flatfish, Ray, Shark, Trout
Split 4 | Small animals: Fox, Porcupine, Possum, Raccoon, Skunk | Large animals: Bear, Leopard, Lion, Tiger, Wolf
Full CIFAR100 Split:
animate = beaver, dolphin, otter, seal, whale, aquarium fish, flatfish, ray, shark, trout, bear, leopard, lion, tiger, wolf, camel, cattle, chimpanzee, elephant, kangaroo, fox, porcupine, possum, raccoon, skunk, baby, boy, girl, man, woman, crocodile, dinosaur, lizard, snake, turtle, hamster, mouse, rabbit, shrew, squirrel, bee, beetle, butterfly, caterpillar, cockroach, crab, lobster, snail, spider, worm
inanimate = orchid, poppy, rose, sunflower, tulip, bottle, bowl, can, cup, plate, apple, mushroom, orange, pear, sweet pepper, clock, keyboard, lamp, telephone, television, bed, chair, couch, table, wardrobe, bridge, castle, house, road, skyscraper, cloud, forest, mountain, plain, sea, maple tree, oak tree, palm tree, pine tree, willow tree, bicycle, bus, motorcycle, pickup truck, train, lawn mower, rocket, streetcar, tank, tractor
K.3 SPLITIMAGENET
ImageNet Dataset Splits (Split / Group / Classes):
Split 1 | Household Devices: analog clock, digital clock, wall clock, computer keyboard, dial telephone, table lamp, television, cellular telephone | Furniture: studio couch, dining table, wardrobe, folding chair
Split 2 | Aquatic mammals: Beaver, Dolphin, Otter, Seal, Whale | Fish: barracouta, eel, coho, rock beauty, anemone fish, sturgeon, gar, puffer, lionfish
Split 3 | Devices 1: mountain bike, bicycle-built-for-two, school bus, moped, tricycle, bullet train, passenger car, pickup | Devices 2: lawn mower, tractor, streetcar, tank
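Finally, as a concrete reference point for the GT Masking baseline of Section 4.1, the sketch below applies a hard context mask over class logits before prediction. The function and argument names are ours for illustration; they are not taken from the authors' code.

```python
import torch

def gt_mask_logits(class_logits: torch.Tensor,
                   context_ids: torch.Tensor,
                   context_to_classes: torch.Tensor) -> torch.Tensor:
    """Hard post-hoc masking baseline: suppress out-of-context classes, then argmax.

    class_logits:       (batch, num_classes) raw network outputs.
    context_ids:        (batch,) long tensor of ground-truth contexts.
    context_to_classes: (num_contexts, num_classes) binary mask c_g, 1 for
                        within-context classes and 0 for out-of-context classes.
    """
    mask = context_to_classes[context_ids].bool()               # (batch, num_classes)
    masked = class_logits.masked_fill(~mask, float("-inf"))     # exclude out-of-context classes
    return masked.argmax(dim=-1)
```

Unlike feedback, this baseline can never produce an out-of-context prediction, which is part of why the comparison in Section 4.4 favors feedback.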
1. What is the main contribution of the paper regarding mid-level vision feedback modules for CNNs? 2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its impact on performance? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or questions regarding the experimental design and control, as well as the focus on performance rather than other tasks?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper provides a mid-level vision feedback module as an add-on for CNNs. The authors argue that such mid-level feedback properties improve CNN performance (though perhaps for the wrong reasons), and also argue that this idea is worth pursuing given ideas motivated by neuroscience/perceptual psychology. Strengths And Weaknesses See below for the Main Paper Summary. TLDR: this paper studies mid-level vision as an improvement gateway for modern CNNs; however, I do not think the baselines or experiments are well controlled enough to validate the statement. In addition, it is not obvious what the contributions to performance are that stem from noise injections, recurrence, and SSL-like learning when isolated rather than stacked together. In addition, the authors focus only on performance and not on other (perhaps more interesting) tasks that extend object recognition, such as common corruption robustness or adversarial robustness. Clarity, Quality, Novelty And Reproducibility The work is a bit difficult to understand with regard to flow. I understand where the authors are going, but am not entirely convinced I got there in a clear way, so I perhaps could have missed something in my review that could change my mind. I'm also not sure why the authors did not select neuroscience as the primary category for review and instead opted for "Applications".
ICLR
Title Mid-Vision Feedback Abstract Feedback plays a prominent role in biological vision, where perception is modulated based on agents’ evolving expectations and world model. We introduce a novel mechanism which modulates perception based on high level categorical expectations: Mid-Vision Feedback (MVF). MVF associates high level contexts with linear transformations. When a context is ”expected” its associated linear transformation is applied over feature vectors in a mid level of a network. The result is that mid-level network representations are biased towards conformance with high level expectations, improving overall accuracy and contextual consistency. Additionally, during training mid-level feature vectors are biased through introduction of a loss term which increases the distance between feature vectors associated with different contexts. MVF is agnostic as to the source of contextual expectations, and can serve as a mechanism for top down integration of symbolic systems with deep vision architectures. We show the superior performance of MVF to post-hoc filtering for incorporation of contextual knowledge, and show superior performance of configurations using predicted context (when no context is known a priori) over configurations with no context awareness. 1 N/A Feedback plays a prominent role in biological vision, where perception is modulated based on agents’ evolving expectations and world model. We introduce a novel mechanism which modulates perception based on high level categorical expectations: Mid-Vision Feedback (MVF). MVF associates high level contexts with linear transformations. When a context is ”expected” its associated linear transformation is applied over feature vectors in a mid level of a network. The result is that mid-level network representations are biased towards conformance with high level expectations, improving overall accuracy and contextual consistency. Additionally, during training mid-level feature vectors are biased through introduction of a loss term which increases the distance between feature vectors associated with different contexts. MVF is agnostic as to the source of contextual expectations, and can serve as a mechanism for top down integration of symbolic systems with deep vision architectures. We show the superior performance of MVF to post-hoc filtering for incorporation of contextual knowledge, and show superior performance of configurations using predicted context (when no context is known a priori) over configurations with no context awareness. 1 1 INTRODUCTION In most contemporary computer vision architectures information flows in a single direction: from low-level of pixels up to high level abstract concepts (e.g., object categories) - such architectures are termed feed-forward architectures. In general, each successive layer of the network contains more abstract representations than the previous, and the representational hierarchy mirrors the architectural hierarchy. It is also possible to introduce top-down connections into the network architecture, introducing high level information into processes involving lower levels of abstraction in a process of feedback. Feedback plays a primary role in biological vision; in fact, the majority of neural connections in the visual cortex are top-down, rather than bottom-up, connections (Markov et al., 2014). 
These topdown connections are thought to convey information of higher level expectation, and neurons of the visual cortex use both higher level expectation as well as lower level visual information in producing their representations. Expectations in biological systems arise from continuous engagement with the environment. In Computer Vision, this is reflected in the paradigm of Active Vision (Bajcsy, 1988; Fermüller & Aloimonos, 1995), where perception is framed as an active problem involving evolving world models. The task of producing mid-level visual representations Teo et al. (2015a;b); Xu et al. (2012); Nishigaki et al. (2012) from low level input is under-constrained - many plausible mid-level interpretations may be consistent with input. To give an intuition for how understanding of context can impact perception of mid-level features consider Figure 1 - characteristics of shrews and kiwi differ, but may be similar enough to be confused without context. Top-down feedback - from high level context to mid-level visual features - provides a “map” for mid-level processes, constraining it towards high level consistency. 1Code will be available at: https://github.com/maynord/Mid-Vision-Feedback Introduction of contextual knowledge through feedback is superior to post-hoc application of contextual knowledge, e.g. through discarding interpretations (classifications, here) which are not context consistent. We demonstrate this point empirically. Interpretations selected after post-hoc filtering for context consistency will still be built upon underconstrained mid-level features. Furthermore, in contrast to post-hoc filtering, feedback naturally allows for detection of out-of-context objects, as feedback functions through biasing of visual representations rather than filtering. It is valuable for methods to allow for out-of-context detections, even when biasing against them, as out-of-context objects on occasion appear (e.g., a tree in an office setting). CNNs have a natural tendency towards decoupled representations - representations with a tendency for feature vector angle to correspond to characteristic type (e.g., ”fuzzy”), and for feature vector magnitude to correspond to characteristic variation or degree (Liu et al., 2018) (e.g., ”very fuzzy” / ”not fuzzy”) (See Figure 2 for an illustration). This opens up a couple of possibilities in terms of directly manipulating feature representations: 1) We can differentiate between axes with different associations to high level contexts, 2) we can control magnitudes of characteristics through amplifying and dampening axes associated with those characteristics. That is, w.r.t. point #1, as CNNs produce representations which are, to a degree, separated by angle, certain axes will be more associated with some higher level contexts over others. Also, w.r.t. point #2, amplifying characteristics associated with a higher level context increases the likelihood of interpreting input as conforming to that context; dampening characteristics associated with that context reduces the likelihood of interpreting input as conforming to that context. We present a principled method to feedback - Mid-Vision Feedback (MVF), illustrated in Figure 3 - allowing the biasing of mid-level feature representations in networks such as CNNs towards conformance with high level categorical expectations. This approach is comprised of two components: 1) linear transforms (affine transformations), and 2) orthogonalization bias. 
Affine transformations enable direct control over the feature vectors at the injection level - the level into which feedback is being inserted. If these vectors have been trained with a bias towards orthogonality w.r.t. contexts, then this allows for affine transformations to manipulate features associated with context presence or absence with less impact on other features. The orthogonalization bias is introduced to increase the independence between contexts, so they can be manipulated with less interference. This bias is introduced at the injection level. This does not negatively impact the representational power or performance of the base network. This orthogonality bias is introduced across contexts - e.g., mid-level feature vectors associated with animate contexts can be biased towards orthogonality w.r.t. mid-level features associated with inanimate contexts. Due to the resulting greater angular separation between features associated with different contexts, this biasing allows greater control over facets of mid-level representation which are meaningful to higher level contexts. MVF then functions as follows. During runtime a high level context expectation is associated with input. This expectation is used in biasing mid-level visual features through use of an affine transformation associated with the context of that expectation. This selects for characteristics associated with this context. These affine transformations are better enabled as a consequence of the disentanglement of such characteristics at the injection level, effected through introduction of the orthogonalization bias during training. Feedback then enables a synergy between high level categorical interpretations and mid-level visual feature representations, bridging the signal-symbol gap in both directions. This approach to incorporation of context expectations is controlled. This differs from an approach of connecting the upper level fully connected layers of a network directly to lower level convolutional levels in a scheme which includes neither categorical representations nor biasing w.r.t. said categorical representations. MVF both employs feedback from categorical knowledge and is agnostic w.r.t. the source of that categorical knowledge - i.e., it is not a requirement that context expectations be produced from the same network. As a consequence of this, MVF allows for interfacing with larger symbolic systems - e.g., models of scenes employing graphical models over scene elements and categories. This topdown synergy across the signal-symbol gap opens up a wide range of applications. The rest of this paper is structured as follows: In Section 2 we cover related work; in Section 3 we detail methods; in Section 4 we cover experiments; and, in Section 5 we conclude. 2 RELATED WORK 2.1 BIOLOGY, FEEDBACK, AND PARALLELS TO COMPUTER VISION Previous works ( (Markov et al., 2014), (Gilbert & Sigman, 2007), Kreiman & Serre (2020), (Gilbert & Li, 2013), and (Paneri & Gregoriou, 2017)) have explored the importance of feedback connections in biological sensory perception. Further work ((Liao & Poggio, 2016) and (van Bergen & Kriegeskorte, 2020)) draw connections between feedback in computer vision architectures and the primate visual cortex. (Tang et al., 2018) show that feedforward CNNs are not robust to occlusion, unlike in human perception, but that adding recurrence improves occlusion robustness. 
(Lotter et al., 2016) introduce PredNet, a network based on predictive coding, and demonstrate benefits on the task of self-supervised frame prediction. There is good reason to believe that modeling characteristics of biological vision in computer vision architectures will benefit computer vision (e.g., (Medathati et al., 2016; Teo et al., 2015c)). For example, (Linsley et al., 2020b) demonstrate a network with top-down connections which aligns with human perception of visual illusions, where feedback aids in prioritizing object boundary contours over simple edge contours. (Linsley et al., 2020a) shows how recurrent hierarchical feedback model can improve segmentation. (Konkle & Alvarez, 2020) introduce instance-prototype contrastive learning, and show that self-supervised models can learn representations which are more brain-like than supervised models. (Li et al., 2021) introduce Contrastive Clustering, showing a benefit to instance- as well as cluster-level contrastive loss in clustering. (Long et al., 2018) demonstrate large scale organization of the cortex based on mid-level visual features (below the level of object recognition), including those associated with animacy vs. inanimacy. (Jagadeesh & Gardner, 2022) argue that representations in category selective regions of the visual cortex encode a basis representation for texture, rather than objecthood representations. (Harrington & Deza, 2021) demonstrate that constraining networks to be robust to adversarial input produces network representations more in-line with human visual perception, and argue for the use of texture summary statistic representations. 2.2 FEEDBACK IN EXISTING COMPUTER VISION METHODS The conventional approach to feedback in computer vision is the use of recurrent connections. (Caswell et al., 2016) introduce recurrent connections into shallow CNN architecture for image classification. (Pinheiro & Collobert, 2014) employ recurrency over convolutions for the purpose of enabling lateral information flow in the task of segmentation. (Zamir et al., 2016) instantiate feedback through an RNN architecture which iteratively refines prediction categories from coarse to specific. Alternatives to conventional recurrent connections for feedback include (Hu & Ramanan, 2016), which explore convolutions with hierarchical rectified Gaussians to enable top-down as well as bottom-up information flow, and apply them to the task of keypoint localization under occlusion. Additionally, (Yao et al., 2012) apply a graphical model over scene representations, allowing higher and lower level decisions to influence each other. 3 METHODS With MVF we seek a feedback mechanism which allows us to directly bias lower level feature representations based on categorical higher level context expectations. This involves top-down interaction across two levels of abstraction: 1) high level contexts ci ∈ C, 2) mid-level features fi ∈ F . The structure of MVF is illustrated in Figure 3, and the loss and training formulation given in Section 3.1. Context expectations sit at a level of abstraction above the classes of network output, and are used in selecting affine transformations placed above the output of injection levels. When applied to injection level output, the affine transformations bias injection level feature representations towards conformance with the associated context expectation, as illustrated in Figure 4. 
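To make this mechanism concrete before the formal description that follows, here is a minimal sketch (not the authors' released implementation) of a context-indexed affine transformation applied over injection-level feature vectors. The module name, tensor shapes, and the exact noise scale of the identity-plus-noise initialization are our own illustrative choices; only the overall scheme of one affine transform per context, selected by the context expectation, follows the text.

```python
import torch
import torch.nn as nn

class ContextAffineFeedback(nn.Module):
    """One affine transformation per high-level context, applied over the feature
    vector at every spatial position of a chosen injection level. Transforms are
    initialized to the identity plus small random noise, as described in the text."""

    def __init__(self, num_contexts: int, num_channels: int, noise_std: float = 0.01):
        super().__init__()
        eye = torch.eye(num_channels).unsqueeze(0).repeat(num_contexts, 1, 1)
        self.weight = nn.Parameter(eye + noise_std * torch.randn_like(eye))  # (K, C, C)
        self.bias = nn.Parameter(torch.zeros(num_contexts, num_channels))    # (K, C)

    def forward(self, feats: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) injection-level activations; context: (B,) integer context ids.
        B, C, H, W = feats.shape
        flat = feats.permute(0, 2, 3, 1).reshape(B, H * W, C)   # (B, HW, C)
        W_c = self.weight[context]                               # (B, C, C), one matrix per sample
        b_c = self.bias[context].unsqueeze(1)                    # (B, 1, C)
        out = torch.bmm(flat, W_c.transpose(1, 2)) + b_c         # per-context affine transform
        return out.reshape(B, H, W, C).permute(0, 3, 1, 2)

# Usage on dummy activations: context 0 might be "animate", context 1 "inanimate".
feedback = ContextAffineFeedback(num_contexts=2, num_channels=32)
feats = torch.randn(4, 32, 8, 8)
context = torch.tensor([0, 1, 1, 0])
biased = feedback(feats, context)   # same shape as feats, biased towards each sample's context
```

Because the transform acts on each spatial position's feature vector, amplifying or dampening axes associated with a context changes how downstream layers interpret the input without removing any class from consideration.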
Injection level output is made more amenable to manipulation according to context through introduction of a contrastive loss $L_O$, introducing a bias towards orthogonalization across contexts. Affine transformations are applied over the features of the injection level for the purpose of amplifying or dampening certain characteristics. This process aligns mid-level representations towards conformance with higher level context expectation. The affine transformations are made more effective through the disentanglement of characteristics at the injection level produced by the orthogonalizing bias. During test time the CNN runs as a single stream (without the connection across the streams of multiple samples which the contrastive loss introduces), and a context expectation selects the affine transformation to apply over the feature vectors of the injection level. This expectation can come from any source - in Section 4 we compare performance across network-produced context expectations and ground truth context expectations. 3.1 LOSS AND TRAINING Training is broken into two stages, as detailed in Table 1. In the first, the base network is trained on its own, and features are biased towards orthogonality at the injection levels. In the second stage, the learning rate for the network parameters is reduced, and affine transformations are added to the injection levels according to the context categories of input samples, initialized to identity matrices with added random noise, and given their own optimizers and learning rate. In the first stage, gradients backpropagate through the base network only, bypassing the affine transformations; in the second stage gradients pass through both the base network and the affine transformations. In each stage we employ batches containing equal proportions of samples belonging to each context. Training is broken into two stages for a few reasons: 1) this division allows the possibility of using pretrained networks and training affine transformations in injection levels with no modification to the base network ($\lambda = 0$ and $\eta_{N_2} = 0$), 2) it allows fine-tuning of pretrained networks ($\lambda > 0$ and/or $\eta_{N_2} > 0$), and 3) we find that starting training of affine transformations after the feature representations have had a chance to converge helps the affine transformations train - intuitively, the affine transformations have to adapt to less of a moving target. We allow the network parameters to continue to train after affine transformations have been introduced - at a reduced learning rate - as we find that this benefits performance. To illustrate this process, consider a batch containing a horse image and a desk image; the horse belongs to the animate context while the desk belongs to the inanimate context. Both images are fed in the same batch to the network during training. When training is in Stage 1, each image passes through the base network, sans affine transformations. The feature representations of the injection levels for horse and desk are then connected to each other via $L_O$, and biased towards orthogonality with respect to each other. When training progresses to Stage 2, the $L_O$ contrastive connection is removed, and affine transformations are inserted according to context expectation. Both network and affine transformation parameters are then updated according to gradients that pass through both affine transformations and network parameters.
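The two-stage procedure can be summarized with a schematic training sketch. This is a toy illustration, not the paper's code: the tiny base network, the random data, and the per-sample pooling used inside the orthogonalizing term are our assumptions, and the affine module reuses the ContextAffineFeedback sketch above. Only the overall staging (orthogonalizing bias first; context-indexed affines with their own optimizer and a reduced base learning rate second) follows the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class TinyBase(nn.Module):
    """Toy stand-in for the base CNN; its last conv output serves as the injection level."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Linear(32, num_classes)

    def forward(self, x, affine=None, context=None):
        feats = self.body(x)                          # injection-level features (B, 32, H, W)
        if affine is not None:                        # Stage 2: apply context-indexed affine
            feats = affine(feats, context)
        return self.head(feats.mean(dim=(2, 3))), feats

def ortho_loss(feats, contexts):
    """Hinged cosine term pushing injection-level features of different contexts towards
    orthogonality (the orthogonalizing bias of Stage 1); features are pooled per sample."""
    vecs = F.normalize(feats.mean(dim=(2, 3)), dim=1)
    cos = vecs @ vecs.t()
    cross = contexts.unsqueeze(0) != contexts.unsqueeze(1)   # only cross-context pairs
    return torch.clamp(cos[cross], min=0.0).mean() if cross.any() else feats.new_zeros(())

net = TinyBase()
images = torch.randn(8, 3, 32, 32)
labels, contexts = torch.randint(0, 10, (8,)), torch.randint(0, 2, (8,))

# Stage 1: base network only; classification loss plus the orthogonalizing bias.
opt1 = torch.optim.Adam(net.parameters(), lr=1e-3)
logits, feats = net(images)
loss = F.cross_entropy(logits, labels) + 1.0 * ortho_loss(feats, contexts)
opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: insert the context-indexed affine transforms (ContextAffineFeedback sketched
# above), lower the base learning rate, and give the affines their own optimizer.
affine = ContextAffineFeedback(num_contexts=2, num_channels=32)
opt2 = torch.optim.Adam(net.parameters(), lr=5e-5)
opt_aff = torch.optim.Adam(affine.parameters(), lr=1e-3)
logits, _ = net(images, affine=affine, context=contexts)
loss = F.cross_entropy(logits, labels)
opt2.zero_grad(); opt_aff.zero_grad(); loss.backward(); opt2.step(); opt_aff.step()
```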
We find that the angles between mid-level features associated with higher level contexts can be increased to a much greater degree than they would be without this orthogonalizing loss, as measured by cosine similarity, without appreciable negative impacts on performance. See Figure 6 in the Appendix for an illustration of the extent to which this cosine loss is reduced when introducing this orthogonalizing loss.

$$L_O(F, Y) = \frac{1}{|S_{F,Y}|} \sum_{(f_1, f_2) \in S_{F,Y}} \max\left(0, \frac{f_1 \cdot f_2}{\|f_1\|\,\|f_2\|}\right) \qquad (1)$$

where

$$S_{F,Y} = \{(f_1, f_2) \mid f_i \sim U(F_{c_i}),\; Y_C(f_1) \neq Y_C(f_2),\; I(f_1) = I(f_2)\} \qquad (2)$$

and

$$F_{c_i} = \{f \mid Y_C(f) = c_i\} \qquad (3)$$

where $|S_{F,Y}|$ is a method hyper-parameter, $Y_C(f)$ is the context of the sample from which feature vector $f$ was produced, $I(f)$ is the injection level from which $f$ was taken, and $U(A)$ is the uniform probability distribution over elements of $A$. With $L_O$ we wish to separate the angles of features associated with different contexts, in order to better enable manipulation through affine transformations. This can be seen as an exacerbation of CNN's natural tendency towards decoupled representations - where feature type has a tendency to group according to feature vector angle - through a structuring of characteristics' feature vector angles according to higher level context. We do this through a cosine loss ($\frac{A \cdot B}{\|A\|\,\|B\|}$) applied to the features of the injection level. As we wish to control the network representation in terms of context expectation, we apply this loss across context. Figure 7 in the Appendix illustrates the behavior produced by the orthogonalizing bias. 4 EXPERIMENTS Here we cover experiments demonstrating the utility of our feedback method. We perform evaluations over CIFAR100 (Krizhevsky et al., 2009), ImageNet (Deng et al., 2009), and the Caltech UCSD Birds Data set (Birds) (Wah et al., 2011), all with multiple context splits - these datasets are described in Section B of the Appendix and derived based on the CIFAR-100 superclasses, and the attribute labels provided in the CUB dataset. We base splits on this information in order to evaluate over standard divisions in the data. We evaluate our method using both ground truth context expectations (GT Feedback), as well as context expectations derived from a network of the same structure as the base network (Pred Feedback). All experiments are conducted using a 6-layer CNN base architecture, a VGG-16 network (Simonyan & Zisserman, 2014), and a Transformer model (Tu et al., 2022), with variants including added affine transformations for feedback runs. The hyperparameters to the modifications made to each of the base architectures for feedback incorporation are described briefly in Section H in the Appendix, and described in detail in Section I in the Appendix. We show confusion matrices for a 10-class context split over CIFAR100 for both ground truth context expectations and network predicted context expectations, in Figures 5a and 5b, respectively. In Sections 4.1 and 4.2 evaluation involves comparisons to base architectures tuned to maximize base architecture performance, and Section 4.3 presents an evaluation where both base architecture and feedback implementation are tuned to maximize the benefit of feedback, giving insight into the extent of potential benefit to feedback.
4.1 ORACLE EVALUATION The ground truth context model (GT Feedback) assumes access to the ground truth context belonging to each input sample during training and test time, using contextual knowledge to index into the affine transformations for application over mid-level features as described in Section 3. Here we evaluate the extent to which our complete framework outperforms 1) a base architecture mirroring that of our framework, and 2) the same base architecture where a hard masking operation using the ground truth contextual knowledge is applied over its class-level outputs. The GT Masking baseline corresponds to the same base architecture where a hard mask of $c_g \in \{0, 1\}^k$ is applied over the output of the network, where $k$ corresponds to the number of classes and $c_g$ takes on a value of 1 for within-context classes and 0 for out-of-context classes. Results for ground truth context evaluation are shown in Table 3.

Figure 5: Confusion matrices for our simple architecture (see Appendix) with and without feedback (orthogonalization bias and application of affine transformations) - subfigures (a) and (b) respectively - over the vehicles 1 vs vehicles 2 split of CIFAR100. (a) With orthogonalization bias and affine transformations. (b) With neither orthogonalization bias nor affine transformations. The first 5 rows correspond to the first context, and the next 5 rows correspond to the second context. Note that cross context confusion (quadrants 1 and 3 of the confusion matrices) is significantly reduced with feedback.

4.2 PREDICTED CONTEXT EVALUATION Table 2 shows performance of base architectures with and without feedback, when context is predicted rather than provided. Context prediction is performed through the addition of a second logits classification head to the base networks. This head is trained in conjunction with the object classification head using ground truth context labels. Performance for context prediction is shown in Table 6 in the Appendix. 4.3 MAXIMIZING FEEDBACK GAIN We here present an evaluation where we tune for maximizing margins of feedback performance over base model performance, tuning both base model and feedback parameters. This evaluation provides insight into the degree to which feedback is capable of improving performance. We tune over the following hyperparameters to maximize the margins between feedback models relying on ground truth and the base models without feedback: first stage learning rate, weight decay, second stage learning rate, and affine learning rate. See the Appendix for precise values. 4.4 DISCUSSION We evaluate the utility of feedback under two scenarios: 1) where context is known (Table 3), 2) where context is not known (Table 2). This provides insight both on the performance of the feedback mechanism in ideal cases (providing an upper bound on the utility of feedback), as well as realistic cases with imperfect information derived from the same network. We observe consistently positive results in each case. We also compare feedback to a strong alternative of masking out (excluding) predictions not associated with the ground truth context. Consistent with Figure 5, the superior performance of feedback over masking demonstrates that it improves modeling beyond simply removing context-inconsistent predictions. Results were presented both in comparisons where base architectures were tuned to maximize accuracy, and comparisons where base and feedback models were tuned to maximize the benefit of feedback.
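For reference, a minimal sketch of the post-hoc GT Masking baseline described above (the helper name and the example context-to-class mapping are hypothetical): class logits outside the ground-truth context are suppressed after the fact, which is the behavior that feedback is compared against.

```python
import torch

def gt_masking(logits: torch.Tensor, context: torch.Tensor, context_to_classes) -> torch.Tensor:
    """Post-hoc filtering: build the hard mask c_g over the k class logits, keeping only
    classes inside the ground-truth context. Unlike feedback, out-of-context classes can
    never be predicted once masked."""
    mask = torch.zeros_like(logits)
    for b, ctx in enumerate(context.tolist()):
        mask[b, context_to_classes[ctx]] = 1.0
    return logits.masked_fill(mask == 0, float("-inf"))

# Hypothetical 10-class, 2-context split with 5 classes per context.
context_to_classes = {0: list(range(0, 5)), 1: list(range(5, 10))}
logits = torch.randn(4, 10)
context = torch.tensor([0, 1, 1, 0])
masked_preds = gt_masking(logits, context, context_to_classes).argmax(dim=1)
```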
Margins are much higher when tuning to maximize the benefit of feedback, and give insight into the extent to which feedback is capable of providing benefits in the best case. 5 CONCLUSION We have presented an argument for the utility of feedback in vision. Feedback is 1) prominent in biological vision, with the majority of neural connections in the cortex consisting of feedback connections, 2) allows better constraining of under-constrained processes of abstraction, 3) allows for the online adaptation of vision systems towards alignment with high level understanding of the world. We leverage the fact that CNNs have a tendency towards decoupled representations, exacerbating the separation of mid-level features associated with different higher level contexts. This allows better direct manipulation of the level at which feedback is introduced, minimizing collateral effects on characteristics not being selected for. In contrast to post-hoc filtering of interpretations for consistency with context expectations, MVF allows for cross-context detections and produces higher accuracies. MVF involves a top-down bridging of the signal-symbol gap, making it applicable to a range of applications. In the future this work will be extended to localization, e.g. object detection or semantic segmentation, as well as used in embodied contexts Fermüller & Maynord (2022) with an active agent. B DATASETS B.1 CIFAR100 We adopt CIFAR100 for the 1) high-levels of visual ambiguity due to low resolution, 2) existence of several distinct ”superclasses” consisting of a roughly equal number of classes, and 3) the crosscontext confusion across classes highly similar in appearance (e.g., sharks and dolphins). We adopt the official training and test split for the CIFAR100 dataset. Each class contains exactly 500 training images and 100 testing images, with each superclass consisting of 5000 training images and 1000 testing images. We use the CIFAR100 superclasses in constructing context splits. Split 1: Vehicles 1 vs. Vehicles 2, Split 2: Household Devices vs. Furniture, Split 3: Aquatic Mammals vs. Fish. For full class breakdown see Appendix Section K. B.2 IMAGENET We adopt ImageNet for several of the aforementioned reasons above, as well as its generality in that it spans 1000 unique classes. Each class contains variable number of images - we designate 80% of the images in each class for training, but only 2% for testing due to the computational cost incurred by the high number of images and the need for frequent testing. We employ the following context split over ImageNet, designed to be similar to CIFAR100 splits, the full class breakdown of which is given in Appendix: Split 1: Household Devices vs. Furniture, Split 2: Aquatic Mammals vs. Fish, Split 3: Vehicles 1 vs. Vehicles 2. C BASE CNN Figure 8 illustrates the base CNN model (apart from VGG and ViT), for which performance is reported in Tables in the main paper. D CONTEXT PREDICTION See Table 6 for accuracy on context prediction used in Section 4.2 and Table 2; see Table 7 for accuracy on context prediction used in Section 4.2 and Table 2. E IMAGENET-C EVALUATION Table 8 provides accuracy of testing on ImageNet-C, with models trained for Maximizing Feedback Gain over standard ImageNet. Accuracy trends are consistent with trends presented in Section 4.3. ImageNet-C consists of 75 common corruptions applied over ImageNet images with the intent of degrading classifier performance. 
We observe that drops in accuracy with respect to the original ImageNet dataset range between values of 14% and 18%. However, performance margins between feedback and base models are overall maintained when testing over ImageNet-C. F DECOUPLED REPRESENTATIONS CNNs have a natural tendency towards decoupled representations. These are representations where characteristics have a tendency to be represented in such a way that feature vector angle corresponds to characteristic type, while feature vector magnitude corresponds to characteristic variation or degree (Liu et al., 2018). G ORTHOGONALIZING LOSS Figure 7 illustrates feature vector projections of the injection level under different degrees of orthogonalizing loss. H MODELS Here we describe the architectures in which we incorporate feedback. Each model consumes 1 GPU during train and test time. For all feedback experiments we choose a λ (intermediate loss scaling) of 1.0 (otherwise set to 0.0). Shallow CNN: This model comprises a 6-layer CNN architecture, shown in Appendix, consisting of 3 by 3 shaped kernels, max-pooling applied over every other layer, and dropout (p = 0.375, p = 0.1) applied over the penultimate fully connected layer and after each convolution operation, respectively. The affine transformation is applied after the second to last convolution operation, though we observe high performance inserting the affine transformations anywhere throughout the second half of the architecture. We train the first stage for roughly 5 million iterations for all splits. A learning rate of 0.001 is chosen for the training of the base network during the first stage, and a learning rate of 5 ∗ 10−5 is chosen for the learning rate of the base network during the second stage, whereas the affine transformation learning rate is set to 1 ∗ 10−3. For the Maximizing Feedback Gain hyperparameters, we adopt a learning rate of 2e− 4, a weight decay of 0.0, a second stage learning rate of 1e− 6, and an affine learning rate of 0.005. VGG: Here we adopt a VGG-16 network with pre-trained weights over ImageNet. The VGG network consists of 16 layers consisting of convolution and max-pooling operations. The affine transformation is applied after the eleventh convolution operation, though we observe high performance inserting the affine transformations anywhere throughout the last six layers. We train the first stage for roughly 1.5 million iterations for all splits, until smooth convergence. A learning rate of 5∗10−6 is chosen for the training of the base network during the first stage, and a learning rate of 2.5 ∗ 10−6 is chosen for the learning rate of the base network during the second stage, whereas the affine transformation learning rate is set to 2.5 ∗ 10−3. For the Maximizing Feedback Gain hyperparameters, we adopt a learning rate of 5e− 5, a weight decay of 0.00075, a second stage learning rate of 5e− 6, and an affine learning rate of 0.0005. Visual Transformer: Here we adopt a variant of the Visual Transformer models (Tu et al., 2022), a general-purpose vision transformer that outperforms many related visual transformer architectures while being easy to train. The affine transformation is applied immediately after the third to last attention block. We train the first stage for roughly 2.0 million iterations for all splits, until smooth convergence. 
A learning rate of 1 ∗ 10−3 is chosen for the training of the base network during the first stage, and a learning rate of 1.0∗10−6 is chosen for the learning rate of the base network during the second stage, whereas the affine transformation learning rate is set to 1 ∗ 10−3. For the Maximizing Feedback Gain hyperparameters, we adopt a learning rate of 2e− 4, a weight decay of 0.0, a second stage learning rate of 1e− 5 and an affine learning rate of 0.0001. I PARAMETERS We here list parameters’ tuned values not introduced in the main paper: 1. Image size: 32× 32 for CIFAR100, 224× 224 for ImageNet. 2. Model input image size: 32× 32 for 6-layer CNN, 224× 224 for VGG16. Images resized using bilinear interpolation. 3. Size of feature set selected for orthogonalization: 25. 4. Batch size: 256 (CIFAR100 splits), 64 (ImageNet splits). 5. Data augmentations: Random rotations (15 degrees), random resized crops, Random hori- zontal flips. 6. Feedback Base Model: ADAM’s optimizer, weight decay of 7.5×10−4 for both stages and both models. 7. Affine Transformation Optimizer: ADAM’s optimizer, affine transformation learning rate of 0.001 for second stage training of both models. 8. Context Model: ResNet18 model with pretrained weights over ImageNet and learning rate of 0.001 using SGD optimizer. J CONTEXT LABEL, AFFINE LEARNING RATE, ORTHOGONALIZING LOSS ABLATION In Table 9, we evaluate the effect on performance due simply to the introduction of the affine transformation (and the random noise introduced by its introduction), but not due to the context training labels. We report numbers from experiments where the affine operations are included in the network but: affine transformations are not trained, the context prediction head is not trained, and orthogonalizing loss is not employed. These runs are compared against identical runs where the affine transformation is not included. We observe that runs with affine transformations outperform the results of the base models (where no affine transformations are included), for two main possible reasons: 1) The drop in learning rate during the second stage of training allows accuracy to continue converging after possible plateauing, and 2) The introduction of a randomly initialized affine during the second stage introduces stochasticity potentially useful during training. This increase in performance is small in comparison to the increase due to incorporation of feedback. K DATA SPLITS We derive context splits based on the superclass structure provided with CIFAR-100 (over both CIFAR-100 and ImageNet), and the attribute ontology provided with the CUB dataset. We base splits on this information in order to evaluate over standard divisions in the data. K.1 CUB-200-2011 We adopt the Caltech-UCSD-Birds dataset for several of the aformentioned reasons above, in particular for the high cross-context confusion across different species of birds highly similar in appearance. It consists of 11,788 images with 200 classes corresponding to bird species. Like the Imagenet dataset, we designate 80% of the dataset for training and 20% for testing. We employ the following 3 splits over the CUB dataset, grouping images into contexts based on the listed attributes provided with the CUB dataset: 1. Migration behavior (1, 2, 3) 2. Trophic level (Carnivore, Herbivore, Omnivore) 3. 
Primary lifestyle (Aerial, Aquatic, Generalist, Insessorial, Terrestrial)

K.2 SPLITCIFAR

CIFAR100 Dataset Sub-Splits (Split / Group / Classes):
Split 1 - Vehicles 1: Bicycle, Bus, Motorcycle, Pickup truck, Train; Vehicles 2: Lawn mower, Rocket, Streetcar, Tank, Tractor
Split 2 - Household Devices: Clock, Keyboard, Lamp, Telephone, Television; Furniture: Bed, Chair, Couch, Table, Wardrobe
Split 3 - Aquatic mammals: Beaver, Dolphin, Otter, Seal, Whale; Fish: aquarium fish, flatfish, ray, shark, trout
Split 4 - Small animals: fox, porcupine, possum, raccoon, skunk; Large animals: bear, leopard, lion, tiger, wolf

Full CIFAR100 Split:
animate = beaver, dolphin, otter, seal, whale, aquarium fish, flatfish, ray, shark, trout, bear, leopard, lion, tiger, wolf, camel, cattle, chimpanzee, elephant, kangaroo, fox, porcupine, possum, raccoon, skunk, baby, boy, girl, man, woman, crocodile, dinosaur, lizard, snake, turtle, hamster, mouse, rabbit, shrew, squirrel, bee, beetle, butterfly, caterpillar, cockroach, crab, lobster, snail, spider, worm
inanimate = orchid, poppy, rose, sunflower, tulip, bottle, bowl, can, cup, plate, apple, mushroom, orange, pear, sweet pepper, clock, keyboard, lamp, telephone, television, bed, chair, couch, table, wardrobe, bridge, castle, house, road, skyscraper, cloud, forest, mountain, plain, sea, maple tree, oak tree, palm tree, pine tree, willow tree, bicycle, bus, motorcycle, pickup truck, train, lawn mower, rocket, streetcar, tank, tractor

K.3 SPLITIMAGENET

ImageNet Dataset Splits (Split / Group / Classes):
Split 1 - Household Devices: analog clock, digital clock, wall clock, computer keyboard, dial telephone, table lamp, television, cellular telephone; Furniture: studio couch, dining table, wardrobe, folding chair
Split 2 - Aquatic mammals: Beaver, Dolphin, Otter, Seal, Whale; Fish: barracouta, eel, coho, rock beauty, anemone fish, sturgeon, gar, puffer, lionfish
Split 3 - Devices 1: mountain bike, bicycle-built-for-two, school bus, moped, tricycle, bullet train, passenger car, pickup; Devices 2: lawn mower, tractor, streetcar, tank
1. What is the focus and contribution of the paper on visual recognition algorithms? 2. What are the strengths of the proposed architecture and ideas? 3. What are the weaknesses and questions regarding the paper's content, particularly in terms of clarity, quality, novelty, and reproducibility? 4. How does the reviewer assess the significance of the proposed algorithm (mid-vision feedback, MVF) in object recognition tasks? 5. Do you have any concerns about the paper's experimental design, methodology, or results?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This is an interesting study that proposes novel ideas on how to incorporate contextual cues in the form of category-specific feedback signals into visual recognition algorithms. The proposed algorithm (mid-vision feedback, MVF) outperforms several baseline models in several relevant benchmark object recognition tasks. More generally, this paper introduces plausible mechanisms by which feedback signals (here in the form of expectations) could be incorporated into the traditional architectures for processing visual information. Strengths And Weaknesses Strengths The proposed architecture is quite simple and therefore elegant. The proposed ideas about how to incorporate feedback are rather generic and can be directly applied not just for any one specific dataset or task, but rather across multiple different problems including cases of no contextual cues, cases where contextual cues are incongruent or unusual, cases where contextual cues are provided by task demands or by statistical correlations previously learned from images. Weaknesses Several of the points below are questions, rather than weaknesses. It is true that the majority of connections are feedback rather than feedforward, but not because of the work of Kveraga et al. In humans, we do not know any details about anatomical connectivity. Probably the best work making this point is Markov et al Cerebral Cortex 2014. I find Figure 1 to be somewhat confusing. First, the authors state that they cropped the images to exclude context. But then they end up arguing that understanding the difference of context can help appreciate the low-level feature differences. Which one is it, is there context in these images or not? And does it help or not? Figure 2. What are the injection sites? What are the big blue and red “A” symbols, are those the affine transformations? Are L_O and L_0 the same thing? Where do the high-level contextual expectations come from? What are the x-axis and y-axis in Figure 5? While this is not the main point of this study, why is it that ViT generally performs below VGG in Tables 3 and 4? The rationale for having two stages in training is not clearly explained. Why two stages as opposed to a single stage of training? Related to the question above regarding Figure 2, what decides which representations should be orthogonalized? In the example of desk and horse, who in the model says that there should correspond to two different contexts and should be orthogonalized? Clarity, Quality, Novelty And Reproducibility The presentation format is rather strange, with a dump of figures and tables without any clear text accompanying them. Most of the tables and figures are not even cited, except for single lines in a 4-sentence discussion. Most of the paper is devoted to explaining MVF and basically a single page is devoted to the results, which is mostly a dump of tables and figures without any description or with minimal description.
ICLR
Title On the Margin Theory of Feedforward Neural Networks Abstract Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks. In the case of two-layer networks, an infinite-width neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time. 1 INTRODUCTION In deep learning, over-parametrization refers to the widely-adopted technique of using more parameters than necessary (Krizhevsky et al., 2012; Livni et al., 2014). Both computationally and statistically, over-parametrization is crucial for learning neural nets. Controlled experiments demonstrate that over-parametrization eases optimization by smoothing the non-convex loss surface (Livni et al., 2014; Sagun et al., 2017). Statistically, increasing model size without any regularization still improves generalization even after the model interpolates the data perfectly (Neyshabur et al., 2017b). This is surprising given the conventional wisdom on the trade-off between model capacity and generalization. In the absence of an explicit regularizer, algorithmic regularization is likely the key contributor to good generalization. Recent works have shown that gradient descent finds the minimum norm solution fitting the data for problems including logistic regression, linearized neural networks, and matrix factorization (Soudry et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a; Ji & Telgarsky, 2018). Many of these proofs require a delicate analysis of the algorithm’s dynamics, and some are not fully rigorous due to assumptions on the iterates. To the best of our knowledge, it is an open question to prove analogous results for even two-layer relu networks. (For example, the technique of Li et al. (2018) on two-layer neural nets with quadratic activations still falls within the realm of linear algebraic tools, which apparently do not suffice for other activations.) We propose a different route towards understanding generalization: making the regularization explicit. The motivations are: 1) with an explicit regularizer, we can analyze generalization without fully understanding optimization; 2) it is unknown whether gradient descent provides additional implicit regularization beyond what `2 regularization already offers; 3) on the other hand, with a sufficiently weak `2 regularizer, we can prove stronger results that apply to multi-layer relu networks. Additionally, explicit regularization is perhaps more relevant because `2 regularization is typically used in practice. Concretely, we add a norm-based regularizer to the cross entropy loss of a multi-layer feedforward neural network with relu activations. 
We show that the global minimizer of the regularized objective achieves the maximum normalized margin among all the models with the same architecture, if the regularizer is sufficiently weak (Theorem 2.1). Informally, for models with norm 1 that perfectly classify the data, the margin is the smallest difference across all datapoints between the classifier score for the true label and the next best score. We are interested in normalized margin because its inverse bounds the generalization error (see recent work (Bartlett et al., 2017; Neyshabur et al., 2017a; 2018; Golowich et al., 2017) or Proposition 3.1). Our work explains why optimizing the training loss can lead to parameters with a large margin and thus, better generalization error (see Corollary 3.2). We further note that the maximum possible margin is non-decreasing in the width of the architecture, and therefore the generalization bound of Corollary 3.2 can only improve as the size of the network grows (see Theorem 3.3). Thus, even if the dataset is already separable, it could still be useful to increase the width to achieve larger margin and better generalization. At a first glance, it might seem counterintuitive that decreasing the regularizer is the right approach. At a high level, we show that the regularizer only serves as a tiebreaker to steer the model towards choosing the largest normalized margin. Our proofs are simple, oblivious to the optimization procedure, and apply to any norm-based regularizer. We also show that an exact global minimum is unnecessary: if we approximate the minimum loss within a constant factor, we obtain the max-margin within a constant factor (Theorem 2.2). To better understand the neural network max-margin, in Section 4 we compare the max-margin two-layer network obtained by optimizing both layers jointly to kernel methods corresponding to fixing random weights for the hidden layer and solving a 2-norm max-margin on the top layer. We design a simple data distribution (Figure 1) where neural net margin is large but the kernel margin is small. This translates to an Ω( √ d) factor gap between the generalization error bounds for the two approaches and demonstrates the power of neural nets compared to kernel methods. We experimentally confirm that a gap does indeed exist. In the setting of two-layer networks, we also study how over-parametrization helps optimization. Prior works (Mei et al., 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2018; Rotskoff & Vanden-Eijnden, 2018) show that gradient descent on two-layer networks becomes Wasserstein gradient flow over parameter distributions in the limit of infinite neurons. For this setting, we prove that perturbed Wasserstein gradient flow finds a global optimizer in polynomial time. Finally, we empirically validate several claims made in this paper. First, we confirm that neural networks do generalize better than kernel methods. Second, we show that for two-layer networks, the test error decreases and margin increases as the hidden layer grows, as predicted by our theory. 1.1 ADDITIONAL RELATED WORK Zhang et al. (2016) and Neyshabur et al. (2017b) show that neural network generalization defies conventional explanations and requires new ones. Neyshabur et al. (2014) initiate the search for the “inductive bias” of neural networks towards solutions with good generalization. Recent papers (Hardt et al., 2015; Brutzkus et al., 2017; Chaudhari et al., 2016) study inductive bias through training time and sharpness of local minima. Neyshabur et al. 
(2015a) propose a new steepest descent algorithm in a geometry invariant to weight rescaling and show that this improves generalization. Morcos et al. (2018) relate generalization in deep nets to the number of "directions" in the neurons. Other papers (Gunasekar et al., 2017; Soudry et al., 2018; Nacson et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a) study implicit regularization towards a specific solution. Ma et al. (2017) show that implicit regularization can help gradient descent avoid overshooting optima. Rosset et al. (2004a;b) study logistic regression with a weak regularization and show convergence to the max margin solution. We adopt their techniques and extend their results. A line of work initiated by Neyshabur et al. (2015b) has focused on deriving tighter norm-based Rademacher complexity bounds for deep neural networks (Bartlett et al., 2017; Neyshabur et al., 2017a; Golowich et al., 2017) and new compression based generalization properties (Arora et al., 2018b). Dziugaite & Roy (2017) manage to compute non-vacuous generalization bounds from PAC-Bayes bounds. Neyshabur et al. (2018) investigate the Rademacher complexity of two-layer networks and propose a bound that is decreasing with the distance to initialization. Liang & Rakhlin (2018) and Belkin et al. (2018) study the generalization of kernel methods. On the optimization side, Soudry & Carmon (2016) explain why over-parametrization can remove bad local minima. Safran & Shamir (2016) show that over-parametrization can improve the quality of the random initialization. Haeffele & Vidal (2015), Nguyen & Hein (2017), and Venturi et al. (2018) show that for sufficiently overparametrized networks, all local minima are global, but do not show how to find these minima via gradient descent. Du & Lee (2018) show that for two-layer networks with quadratic activations, all second-order stationary points are global minimizers. Arora et al. (2018a) interpret over-parametrization as a means of implicit acceleration during optimization. Mei et al. (2018), Chizat & Bach (2018), and Sirignano & Spiliopoulos (2018) take a distributional view of over-parametrized networks. Chizat & Bach (2018) show that Wasserstein gradient flow converges to global optimizers under structural assumptions. We extend this to a polynomial-time result.

1.2 NOTATION Let $\mathbb{R}$ denote the set of real numbers. We will use $\|\cdot\|$ to indicate a general norm, with $\|\cdot\|_1, \|\cdot\|_2, \|\cdot\|_\infty$ denoting the $\ell_1, \ell_2, \ell_\infty$ norms on finite dimensional vectors, respectively, and $\|\cdot\|_F$ denoting the Frobenius norm on a matrix. In general, we use a bar on top of a symbol to denote a unit vector: when applicable, $\bar{u} \triangleq u/\|u\|$, where the norm $\|\cdot\|$ will be clear from context. Let $\mathcal{S}^{d-1} \triangleq \{\bar{u} \in \mathbb{R}^d : \|\bar{u}\|_2 = 1\}$ be the unit sphere in $d$ dimensions. Let $L^p(\mathcal{S}^{d-1})$ be the space of functions on $\mathcal{S}^{d-1}$ for which the $p$-th power of the absolute value is Lebesgue integrable. For $\alpha \in L^p(\mathcal{S}^{d-1})$, we overload notation and write $\|\alpha\|_p \triangleq \left(\int_{\mathcal{S}^{d-1}} |\alpha(\bar{u})|^p \, d\bar{u}\right)^{1/p}$. Additionally, for $\alpha_1 \in L^1(\mathcal{S}^{d-1})$ and $\alpha_2 \in L^\infty(\mathcal{S}^{d-1})$, or $\alpha_1, \alpha_2 \in L^2(\mathcal{S}^{d-1})$, we can define $\langle \alpha_1, \alpha_2 \rangle \triangleq \int_{\mathcal{S}^{d-1}} \alpha_1(\bar{u}) \alpha_2(\bar{u}) \, d\bar{u} < \infty$. Furthermore, we will use $\mathrm{Vol}(\mathcal{S}^{d-1}) \triangleq \int_{\mathcal{S}^{d-1}} 1 \, d\bar{u}$. Throughout this paper, we reserve the symbol $X = [x_1, \ldots, x_n]$ to denote the collection of datapoints (as a matrix), and $Y = [y_1, \ldots, y_n]$ to denote labels. We use $d$ to denote the dimension of our data. We often use $\Theta$ to denote the parameters of a prediction function $f$, and $f(\Theta; x)$ to denote the prediction of $f$ on datapoint $x$.
We will use the notation $\lesssim, \gtrsim$ to mean less than or greater than up to a universal constant, respectively. Unless stated otherwise, $O(\cdot), \Omega(\cdot)$ denote some universal constant in upper and lower bounds, respectively. The notation poly denotes a universal constant-degree polynomial in the arguments.

2 WEAK REGULARIZER GUARANTEES MAX MARGIN SOLUTIONS In this section, we will show that when we add a weak regularizer to cross-entropy loss with a positive-homogeneous prediction function, the normalized margin of the optimum converges to some max-margin solution. As a concrete example, feedforward relu networks are positive-homogeneous. Let $l$ be the number of labels, so the $i$-th example has label $y_i \in [l]$. We work with a family $\mathcal{F}$ of prediction functions $f(\Theta; \cdot): \mathbb{R}^d \to \mathbb{R}^l$ that are $a$-positive-homogeneous in their parameters for some $a > 0$: $f(c\Theta; x) = c^a f(\Theta; x)$ for all $c > 0$. We additionally require that $f$ is continuous in $\Theta$. For some general norm $\|\cdot\|$, we study the $\lambda$-regularized cross-entropy loss $L_\lambda$, defined as

$$L_\lambda(\Theta) \triangleq \sum_{i=1}^{n} -\log \frac{\exp(f_{y_i}(\Theta; x_i))}{\sum_{j=1}^{l} \exp(f_j(\Theta; x_i))} + \lambda \|\Theta\|^r \qquad (2.1)$$

for fixed $r > 0$. Let $\Theta_\lambda \in \arg\min L_\lambda(\Theta)$ (we formally show that $L_\lambda$ has a minimizer in Claim A.1 of Section A). We define the normalized margin of $\Theta_\lambda$ as:

$$\gamma_\lambda \triangleq \min_i \left( f_{y_i}(\bar{\Theta}_\lambda; x_i) - \max_{j \neq y_i} f_j(\bar{\Theta}_\lambda; x_i) \right) \qquad (2.2)$$

Define the $\|\cdot\|$-max normalized margin as

$$\gamma^\star \triangleq \max_{\|\Theta\| \leq 1} \left[ \min_i \left( f_{y_i}(\Theta; x_i) - \max_{j \neq y_i} f_j(\Theta; x_i) \right) \right]$$

and let $\Theta^\star$ be a parameter achieving this maximum. We show that with sufficiently small regularization level $\lambda$, the normalized margin $\gamma_\lambda$ approaches the maximum margin $\gamma^\star$. Our theorem and proof are inspired by the result of Rosset et al. (2004a;b), who analyze the special case when $f$ is a linear predictor. In contrast, our result can be applied to non-linear $f$ as long as $f$ is homogeneous. Theorem 2.1. Assume the training data is separable by a network $f(\Theta^\star; \cdot) \in \mathcal{F}$ with an optimal normalized margin $\gamma^\star > 0$. Then, the normalized margin of the global optimum of the weakly-regularized objective (equation 2.1) converges to $\gamma^\star$ as the strength of the regularizer goes to zero. Mathematically, let $\gamma_\lambda$ be defined in equation 2.2. Then $\gamma_\lambda \to \gamma^\star$ as $\lambda \to 0$. An intuitive explanation for our result is as follows: because of the homogeneity, the loss $L_\lambda(\Theta_\lambda)$ roughly satisfies the following (for small $\lambda$, and ignoring problem parameters such as $n$): $L_\lambda(\Theta_\lambda) \approx \exp(-\|\Theta_\lambda\|^a \gamma_\lambda) + \lambda \|\Theta_\lambda\|^r$. Thus, the loss selects parameters with larger margin, while the regularization favors parameters with a smaller norm. The full proof of the theorem is deferred to Section A.1. Theorem 2.1 applies to feedforward relu networks and states that global minimizers of the weakly-regularized loss will obtain a maximum margin among all networks of the given architecture. By considering global minimizers, Theorem 2.1 provides a framework for directly analyzing generalization properties of the solution without considering details of the optimization algorithm. In Section 3 we leverage this framework and existing generalization bounds (Golowich et al., 2017) to provide a clean argument that over-parameterization can improve generalization. We can also provide an analogue of Theorem 2.1 for the binary classification setting. For this setting, our prediction is now a single real output and we train using logistic loss. We provide formal definitions and results in Section A.2. Our study of the generalization properties of the max-margin (see Section 3 and Section 4) is based in this setting.
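To make the quantities in equations (2.1) and (2.2) concrete, the following sketch computes the normalized margin of a toy two-layer relu network (a 2-positive-homogeneous predictor) by rescaling its parameters to unit Frobenius norm, and forms the weakly-regularized objective with the Frobenius norm and $r = 2$. The architecture, data, and regularization level are illustrative stand-ins, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy two-layer relu predictor f(Theta; x) = W2 relu(W1 x), 2-positive-homogeneous in Theta.
W1 = nn.Parameter(torch.randn(64, 5))
W2 = nn.Parameter(torch.randn(3, 64))
X = torch.randn(32, 5)                       # n = 32 datapoints in d = 5 dimensions
y = torch.randint(0, 3, (32,))               # l = 3 labels

def scores(W1, W2, X):
    return F.relu(X @ W1.t()) @ W2.t()       # (n, l) class scores

def normalized_margin(W1, W2, X, y):
    """gamma = min_i [ f_{y_i}(Theta_bar; x_i) - max_{j != y_i} f_j(Theta_bar; x_i) ],
    where Theta_bar = Theta / ||Theta||_F; by 2-homogeneity this equals the raw
    margin divided by ||Theta||_F^2."""
    norm = torch.sqrt(W1.pow(2).sum() + W2.pow(2).sum())
    s = scores(W1 / norm, W2 / norm, X)
    true_score = s.gather(1, y.view(-1, 1)).squeeze(1)
    other_best = s.masked_fill(F.one_hot(y, s.size(1)).bool(), float("-inf")).max(dim=1).values
    return (true_score - other_best).min()

# The weakly-regularized cross-entropy objective of equation (2.1), Frobenius norm, r = 2.
lam = 1e-6
objective = F.cross_entropy(scores(W1, W2, X), y, reduction="sum") \
            + lam * (W1.pow(2).sum() + W2.pow(2).sum())
print(float(normalized_margin(W1, W2, X, y)), float(objective))
```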
2.1 OPTIMIZATION ACCURACY Since $L_\lambda$ is typically hard to optimize exactly for neural nets, we study how accurately we need to optimize $L_\lambda$ to obtain a margin that approximates $\gamma^\star$ up to a constant. The following theorem shows that it suffices to find $\Theta'$ achieving a constant factor multiplicative approximation of $L_\lambda(\Theta_\lambda)$, where $\lambda$ is some sufficiently small polynomial in $n, l, \gamma^\star$. Though our theorem is stated for the general multi-class setting, it also applies for binary classification. We provide the proof in Section A.3. Theorem 2.2. In the setting of Theorem 2.1, suppose that we choose

$$\lambda = \frac{\exp\left(-(2^{r/a} - 1)^{-a/r}\right) (\gamma^\star)^{r/a}}{n^c (l-1)^c}$$

for sufficiently large $c$ (that only depends on $r/a$). For $\beta \leq 2$, let $\Theta'$ denote a $\beta$-approximate minimizer of $L_\lambda$, so $L_\lambda(\Theta') \leq \beta L_\lambda(\Theta_\lambda)$. Denote the normalized margin of $\Theta'$ by $\gamma'$. Then $\gamma' \geq \frac{\gamma^\star}{10 \cdot \beta^{a/r}}$.

3 GENERALIZATION PROPERTIES OF A MAXIMUM MARGIN NEURAL NETWORK In Section 2 we showed that optimizing a weakly-regularized logistic loss leads to the maximum normalized margin. We now study the direct implications of this result on the generalization properties of the solution. Specifically, we use existing Rademacher complexity bounds of Golowich et al. (2017) to present a generalization bound that depends on the network architecture only through the inverse $\ell_2$-normalized margin and depth of the network (see Proposition 3.1). Next, we combine this bound with Theorem 2.1 to conclude that parameters obtained by optimizing logistic loss with weak $\ell_2$-regularization will have a generalization bound that scales with the inverse of the maximum possible margin and depth. Finally, we note that the maximum possible margin can only increase as the size of the network grows, which suggests that increasing the size of the network improves the generalization of the solution (see Theorem 3.3). We consider depth-$K$ neural networks with 1-Lipschitz, 1-positive-homogeneous activation $\phi$ for $K \geq 2$. Suppose that the collection of parameters $\Theta$ is given by matrices $W_1, \ldots, W_K$. The $K$-layer network will compute a real-valued score

$$f(\Theta; x) \triangleq W_K \phi(W_{K-1} \phi(\cdots \phi(W_1 x) \cdots)) \qquad (3.1)$$

where we overload notation to let $\phi(\cdot)$ denote the element-wise application of the activation $\phi$. Let $m_i$ denote the size of the $i$-th hidden layer, so $W_1 \in \mathbb{R}^{m_1 \times d}, W_2 \in \mathbb{R}^{m_2 \times m_1}, \ldots, W_K \in \mathbb{R}^{1 \times m_{K-1}}$. We will let $\mathcal{M} \triangleq (m_1, \ldots, m_{K-1})$ denote the sequence of hidden layer sizes. We will focus on $\ell_2$-regularized loss. The weakly-regularized logistic loss of the depth-$K$ architecture with hidden layer sizes $\mathcal{M}$ is therefore

$$L_{\lambda,\mathcal{M}}(\Theta) \triangleq \frac{1}{n} \sum_{i=1}^{n} \log(1 + \exp(-y_i f(\Theta; x_i))) + \lambda \|\Theta\|_F^2 \qquad (3.2)$$

We note that $f$ is $K$-homogeneous in $\Theta$, so the results of Section 2 apply to $L_{\lambda,\mathcal{M}}$.² Following our conventions from Section 2, we denote the optimizer of $L_{\lambda,\mathcal{M}}$ by $\Theta_{\lambda,\mathcal{M}}$, the normalized margin of $\Theta_{\lambda,\mathcal{M}}$ by $\gamma_{\lambda,\mathcal{M}}$, the max-margin solution by $\Theta^{\star,\mathcal{M}}$, and the max-margin by $\gamma^{\star,\mathcal{M}}$. Our notation emphasizes the architecture of the network. Since the classifier $f$ now predicts a single real value, we need to redefine

$$\gamma_{\lambda,\mathcal{M}} \triangleq \min_i y_i f(\bar{\Theta}_{\lambda,\mathcal{M}}; x_i), \qquad \gamma^{\star,\mathcal{M}} \triangleq \max_{\|\Theta\|_2 \leq 1} \min_i y_i f(\Theta; x_i)$$

When the data is not separable by a neural network with architecture $\mathcal{M}$, we define $\gamma^{\star,\mathcal{M}}$ to be zero. Recall that $X = [x_1, \ldots, x_n]$ denotes the matrix with all the data points as columns, and $Y = [y_1, \ldots, y_n]$ denotes the labels. We sample $X$ and $Y$ i.i.d. from the data generating distribution $p_{\text{data}}$, which is supported on $\mathcal{X} \times \{-1, +1\}$.
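As a compact reference for equations (3.1) and (3.2), the following sketch builds the depth-$K$ relu score function and its weakly $\ell_2$-regularized logistic loss; the hidden sizes, data, and $\lambda$ are placeholder values of ours rather than choices from the paper.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Depth-K relu network with hidden sizes M = (m_1, ..., m_{K-1}) and a single real-valued output.
d, M = 10, (32, 32)
K = len(M) + 1
dims = (d,) + M + (1,)
Ws = [torch.randn(dims[i + 1], dims[i], requires_grad=True) for i in range(K)]

def score(Ws, X):
    """f(Theta; x) = W_K phi(W_{K-1} phi(... phi(W_1 x) ...)), equation (3.1), with phi = relu."""
    h = X.t()
    for W in Ws[:-1]:
        h = F.relu(W @ h)
    return (Ws[-1] @ h).squeeze(0)

X = torch.randn(64, d)
y = torch.randint(0, 2, (64,)).float() * 2.0 - 1.0        # labels in {-1, +1}

# Weakly l2-regularized logistic loss of equation (3.2): log(1 + exp(-y f)) = softplus(-y f).
lam = 1e-6
frob_sq = sum(W.pow(2).sum() for W in Ws)
loss = F.softplus(-y * score(Ws, X)).mean() + lam * frob_sq
loss.backward()
```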
We can define the population 0-1 loss and training 0-1 loss of the network parametrized by $\Theta$ by

$$L(\Theta) = \Pr_{(x,y) \sim p_{\text{data}}}[y f(\Theta; x) \leq 0]$$

Let $C \triangleq \sup_{x \in \mathcal{X}} \|x\|_2$ be an upper bound on the norm of a single datapoint. Proposition 3.1 shows that the generalization error only depends on the parameters through the inverse of the margin on the training data. We obtain Proposition 3.1 by applying Theorem 1 of Golowich et al. (2017) with the standard technique of using margin loss to bound classification error. There exist other generalization bounds which depend on the margin and some normalization (Neyshabur et al., 2015b; 2017a; Bartlett et al., 2017; Neyshabur et al., 2018); we choose the bounds of Golowich et al. (2017) because they fit well with $\ell_2$ normalization. In the two-layer case $K = 2$, the bound below also follows from Neyshabur et al. (2015b). Proposition 3.1. [Straightforward consequence of Golowich et al. (2017, Theorem 1)] Suppose $\phi$ is 1-Lipschitz and 1-positive-homogeneous. For any depth-$K$ network $f(\Theta; \cdot)$ separating the data with normalized margin $\gamma \triangleq \min_i y_i f(\bar{\Theta}; x_i) > 0$, with probability at least $1 - \delta$ over the draw of $X, Y$,

$$L(\Theta) \lesssim \frac{C}{\gamma K^{(K-1)/2} \sqrt{n}} + \epsilon(\gamma) \qquad (3.3)$$

where $\epsilon(\gamma) \triangleq \sqrt{\frac{\log \log_2 \frac{4C}{\gamma}}{n}} + \sqrt{\frac{\log(1/\delta)}{n}}$. Note that $\epsilon(\gamma)$ is typically small, and thus the above bound mainly scales with $\frac{C}{\gamma K^{(K-1)/2} \sqrt{n}}$.³ For completeness, we state the proof in Section C.1. By combining this bound with our Theorem 2.1 we can conclude that optimizing weakly-regularized logistic loss gives us generalization error bounds that depend on the maximum possible margin of a network with the given architecture. Corollary 3.2. In the setting of Proposition 3.1, with probability $1 - \delta$,

$$\limsup_{\lambda \to 0} L(\Theta_{\lambda,\mathcal{M}}) \lesssim \frac{C}{\gamma^{\star,\mathcal{M}} K^{(K-1)/2} \sqrt{n}} + \epsilon(\gamma^{\star,\mathcal{M}}) \qquad (3.4)$$

where $\epsilon(\gamma)$ is defined as in Proposition 3.1. Above we implicitly assume $\gamma^{\star,\mathcal{M}} > 0$, since otherwise the right hand side of the bound is vacuous. (Footnote 2: Although Theorem 2.1 is written in the language of multi-class prediction where the classifier outputs $l \geq 2$ scores, the results translate to single-output binary classification. See Section A.2. Footnote 3: Although the $\frac{1}{K^{(K-1)/2}}$ factor of equation 3.3 decreases with depth $K$, the margin $\gamma$ will also tend to decrease as the constraint $\|\bar{\Theta}\|_F \leq 1$ becomes more stringent.) By applying Theorem 2.2 with Proposition 3.1, we can also conclude that optimizing $L_{\lambda,\mathcal{M}}$ within a constant factor gives a margin, and therefore generalization bound, approximating the best possible. One consequence of Corollary 3.2 is that optimizing weakly-regularized logistic loss results in the best possible generalization bound out of all models with the given architecture. This indicates that the widely used algorithm of optimizing deep networks with $\ell_2$-regularized logistic loss has an implicit bias towards solutions with good generalization. Next, we observe that the maximum normalized margin is non-decreasing with the size of the architecture. Formally, for two depth-$K$ architectures $\mathcal{M} = (m_1, \ldots, m_{K-1})$ and $\mathcal{M}' = (m'_1, \ldots, m'_{K-1})$, we say $\mathcal{M} \leq \mathcal{M}'$ if $m_i \leq m'_i$ for all $i = 1, \ldots, K-1$. Theorem 3.3 states that if $\mathcal{M} \leq \mathcal{M}'$, then the max-margin over networks with architecture $\mathcal{M}'$ is at least the max-margin over networks with architecture $\mathcal{M}$. Theorem 3.3. Recall that $\gamma^{\star,\mathcal{M}}$ denotes the maximum normalized margin of a network with architecture $\mathcal{M}$. If $\mathcal{M} \leq \mathcal{M}'$, we have $\gamma^{\star,\mathcal{M}} \leq \gamma^{\star,\mathcal{M}'}$. As an important consequence, the generalization error bound of Corollary 3.2 for $\mathcal{M}'$ is at least as good as that for $\mathcal{M}$.
This theorem is simple to prove and follows because we can directly implement any network of architecture $\mathcal{M}$ using one of architecture $\mathcal{M}'$, if $\mathcal{M} \leq \mathcal{M}'$. This can explain why additional overparameterization has been empirically observed to improve generalization in two-layer networks (Neyshabur et al., 2017b): the margin does not decrease with a larger network size, and therefore Corollary 3.2 gives a better generalization bound. In Section 6, we provide empirical evidence that the test error decreases with larger network size while the margin is non-decreasing. The phenomenon in Theorem 3.3 contrasts with standard $\ell_2$-normalized linear prediction. In this setting, adding more features increases the norm of the data, and therefore the generalization error bounds could also increase. On the other hand, Theorem 3.3 shows that adding more neurons (which can be viewed as learned features) can only improve the generalization of the max-margin solution.

4 NEURAL NET MAX-MARGIN VS. KERNEL METHODS We will continue our study of the max-margin neural network via comparison against kernel methods, a context in which margins have already been extensively studied. We show that two-layer networks can obtain a larger margin, and therefore better generalization guarantees, than kernel methods. Our comparison between the two methods is motivated by an equivalence between the $\ell_2$ max-margin of an infinite-width two-layer network and the $\ell_1$-SVM (Zhu et al., 2004) over the lifted feature space defined by the activation function applied to all possible hidden units (Neyshabur et al., 2014; Rosset et al., 2007; Bengio et al., 2006). The kernel method corresponds to the $\ell_2$-SVM in this same feature space, and is equivalent to fixing random hidden layer weights and solving an $\ell_2$-SVM over the top layer. In Theorem 4.3, we construct a distribution for which the generalization upper bounds for the $\ell_1$-SVM on this feature space are smaller than those for the $\ell_2$-SVM by a $\Omega(\sqrt{d})$ factor. Our work provides evidence that optimizing all layers of a network can be beneficial for generalization. There have been works that compare $\ell_1$ and $\ell_2$-regularized solutions in the context of feature selection and construct a feature space for which a generalization gap exists (e.g., see Ng (2004)). In contrast, we work in the fixed feature space of relu activations, which makes our construction particularly challenging. We will use $m$ to denote the width of the single hidden layer of the network. Following the convention from Section 3, we will use $\gamma^{\star,m}$ to denote the maximum possible normalized margin of a two-layer network with hidden layer size $m$ (note the emphasis on the size of the single hidden layer). The depth $K = 2$ case of Corollary 3.2 immediately implies that optimizing weakly-regularized $\ell_2$ loss over width-$m$ two-layer networks gives parameters whose generalization upper bounds depend on the hidden layer size only through $1/\gamma^{\star,m}$. Furthermore, from Theorem 3.3 it immediately follows that

$$\gamma^{\star,1} \leq \gamma^{\star,2} \leq \cdots \leq \gamma^{\star,\infty}$$

The work of Neyshabur et al. (2014) links $\gamma^{\star,m}$ to the $\ell_1$ SVM over a lifted space. Formally, we define a lifting function $\varphi: \mathbb{R}^d \to L^\infty(\mathcal{S}^{d-1})$ mapping data to an infinite feature vector: $x \in \mathbb{R}^d \mapsto \varphi(x) \in L^\infty(\mathcal{S}^{d-1})$ satisfying

$$\varphi(x)[\bar{u}] = \phi(\bar{u}^\top x) \qquad (4.1)$$

where $\phi$ is the activation of Section 3. We look at the margin of linear functionals corresponding to $\alpha \in L^1(\mathcal{S}^{d-1})$.
The 1-norm SVM (Zhu et al., 2004) over the lifted feature ϕ(x) solves for the maximum margin: γ`1 ,max α min i∈[n] yi〈α,ϕ(xi)〉 subject to ‖α‖1 ≤ 1 (4.2) where we rely on the inner product and 1-norm defined in Section 1.2. This formulation is equivalent to a hard-margin optimization on “convex neural networks” (Bengio et al., 2006). Bach (2017) also study optimization and generalization of convex neural networks. Using results from Rosset et al. (2007); Neyshabur et al. (2014); Bengio et al. (2006), our Theorem 2.1 implies that optimizing weaklyregularized logistic loss over two-layer networks is equivalent to solving equation 4.2 when the size of the hidden layer is at least n + 1, where n is the number of training examples. Proposition 4.1 essentially restates this with the minor improvement that this equivalence4 also holds when the size of the hidden layer is n. Proposition 4.1. Let γ`1 be defined in equation 4.2. Then γ`1 2 = γ ?,n = · · · = γ?,∞. For completeness, we prove Proposition 4.1 in Section B, relying on the work of Tibshirani (2013) and Rosset et al. (2004a). Importantly, the `1-max margin on the lifted feature space is obtainable by optimizing a finite neural network. We compare this to the `2 margin attainable via kernel methods. Following the setup of equation 4.2, we define the kernel problem over α ∈ L2(Sd−1): γ`2 ,max α min i∈[n] yi〈α,ϕ(xi)〉 subject to √ κ‖α‖2 ≤ 1 (4.3) where κ , Vol(Sd−1). (We scale ‖α‖2 by √ κ to make the lemma statement below cleaner.) First, γ`2 can be used to obtain a standard upper bound on the generalization error of the kernel SVM. Following the notation of Section 3, we will let L`2-svm denote the 0-1 population classification error for the optimizer of equation 4.3. Lemma 4.2. In the setting of Proposition 3.1, with probability at least 1−δ, the generalization error of the standard kernel SVM with relu feature (defined in equation 4.3) is bounded by L`2-svm . C γ`2 √ dn + `2 (4.4) where `2 , √ log max { log2 C√ dγ`2 ,2 } n + √ log(1/δ) n is typically a lower-order term. The bound above follows from standard techniques (Bartlett & Mendelson, 2002), and we provide a full proof in Section C.2. We construct a data distribution for which this lemma does not give a good bound for kernel methods, but Corollary 3.2 does imply good generalization for two-layer networks. Theorem 4.3. There exists a data distribution pdata such that the `1 SVM with relu features has a good margin: γ`1 & 1 and with probability 1− δ over the choice of i.i.d. samples from pdata, obtains generalization error L`1-svm . √ d log n n + `1 where `1 , √ log(1/δ) n is typically a lower order term. Meanwhile, with high probability the `2 SVM has a small margin: γ`2 . max {√ logn n , 1/d } and therefore the generalization upper bound from 4The factor of 1 2 is due the the relation that every unit-norm parameter Θ corresponds to an α in the lifted space with ‖α‖ = 2. Lemma 4.2 is at least Ω ( min { 1, d √ log n n }) In particular, the `2 bound is larger than the `1 bound by a Ω( √ d) factor. Although Theorem 4.3 compares upper bounds, our construction highlights properties of distributions which result in better neural network generalization than kernel method generalization. Furthermore, in Section 6 we empirically validate the gap in generalization between the two methods. We briefly overview the construction of pdata here. The full proof is in Section D.1. Proof sketch for Theorem 4.3. We base pdata on the distribution D of examples (x, y) described below. 
Here e_i is the i-th standard basis vector and we use x^T e_i to represent the i-th coordinate of x (since the subscript is reserved to index training examples). The last d − 2 coordinates are standard Gaussian, (x^T e_3, . . . , x^T e_d) ∼ N(0, I_{d−2}), and the label together with the first two coordinates is drawn as
(y = +1, x^T e_1 = +1, x^T e_2 = +1) with prob. 1/4
(y = +1, x^T e_1 = −1, x^T e_2 = −1) with prob. 1/4
(y = −1, x^T e_1 = +1, x^T e_2 = −1) with prob. 1/4
(y = −1, x^T e_1 = −1, x^T e_2 = +1) with prob. 1/4
Figure 1 shows samples from D when there are 3 dimensions. From the visualization, it is clear that there is no linear separator for D. As Lemma D.1 shows, a relu network with four neurons can fit this relatively complicated decision boundary. On the other hand, for kernel methods, we prove that the symmetries in D induce cancellation in feature space. As a result, the features are less predictive of the true label and the margin will therefore be small. We formalize this argument in Section D.1.
Gap in regression setting: We are able to prove an even larger Ω(√(n/d)) gap between neural networks and kernel methods in the regression setting where we wish to interpolate continuous labels. Analogously to the classification setting, optimizing a regularized squared error loss on neural networks is equivalent to solving a minimum 1-norm regression problem (see Theorem D.5). Furthermore, kernel methods correspond to a minimum 2-norm problem. We construct distributions pdata where the 1-norm solution will have a generalization error bound of O(√(d/n)), whereas the 2-norm solution will have a generalization error bound that is Ω(1) and thus vacuous. In Section D.2, we define the 1-norm and 2-norm regression problems. In Theorem D.10 we formalize our construction.
5 PERTURBED WASSERSTEIN GRADIENT FLOW FINDS GLOBAL OPTIMIZERS IN POLYNOMIAL TIME
In the prior section, we studied the limiting behavior of the generalization of a two-layer network as its width goes to infinity. In this section, we will now study the limiting behavior of the optimization algorithm, gradient descent. Prior work (Mei et al., 2018; Chizat & Bach, 2018) has shown that as the hidden layer size grows to infinity, gradient descent for a finite neural network approaches the Wasserstein gradient flow over distributions of hidden units (defined in equation 5.1). Chizat & Bach (2018) assume the gradient flow converges, a non-trivial assumption since the space of distributions is infinite-dimensional, and given the assumption prove that Wasserstein gradient flow converges to a global optimizer in this setting, but do not specify a convergence rate. Mei et al. (2018) show global convergence for the infinite-neuron limit of stochastic Langevin dynamics, but also do not provide a convergence rate. We show that a perturbed version of Wasserstein gradient flow converges in polynomial time. The informal take-away of this section is that a perturbed version of gradient descent converges in polynomial time on infinite-size neural networks (for the right notion of infinite size). Formally, we optimize the following functional over distributions ρ on R^{d+1}:
L[ρ] := R(∫ Φ dρ) + ∫ V dρ
where Φ : R^{d+1} → R^k, R : R^k → R, and V : R^{d+1} → R. In this work, we consider 2-homogeneous Φ and V. We will additionally require that R is convex and nonnegative and V is positive on the unit sphere. Finally, we need standard regularity assumptions on R, Φ, and V:
Assumption 5.1 (Regularity conditions on Φ, R, V). Φ and V are differentiable as well as upper bounded and Lipschitz on the unit sphere. R is Lipschitz and its Hessian has bounded operator norm.
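Returning briefly to the classification construction sketched above for Theorem 4.3, the snippet below samples from D and evaluates the four-relu classifier used in Lemma D.1 (Section D.1), which attains margin √2/4 on every example; the sampler and toy sizes are our own, but the network follows the construction stated in that lemma.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_D(n, d):
    # Draw n samples from the distribution D used in Theorem 4.3.
    X = rng.standard_normal((n, d))          # coordinates 3..d are N(0, 1)
    pattern = rng.integers(0, 4, size=n)     # four equally likely sign patterns
    x1 = np.where(np.isin(pattern, [0, 2]), 1.0, -1.0)
    x2 = np.where(np.isin(pattern, [0, 3]), 1.0, -1.0)
    y = np.where(pattern < 2, 1.0, -1.0)     # y = +1 exactly when x1 * x2 = +1
    X[:, 0], X[:, 1] = x1, x2
    return X, y

def four_relu_net(X):
    # The classifier from Lemma D.1: an XOR-like decision rule on (x1, x2).
    r = lambda z: np.maximum(z, 0.0)
    s = np.sqrt(2.0)
    x1, x2 = X[:, 0], X[:, 1]
    return 0.25 * (r((x1 + x2) / s) + r((-x1 - x2) / s)
                   - r((-x1 + x2) / s) - r((x1 - x2) / s))

X, y = sample_D(n=2000, d=20)
margins = y * four_relu_net(X)
print(margins.min(), np.sqrt(2) / 4)   # every example attains margin sqrt(2)/4
```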
We provide more details on the specific parameters (for boundedness, Lipschitzness, etc.) in Section E.1. We note that relu networks satisfy every condition but differentiability of Φ.5 We can fit a neural network under our framework as follows:
Example 5.2 (Logistic loss for neural networks). We interpret ρ as a distribution over the parameters of the network. Let k := n and Φ_i(θ) := w φ(u^T x_i) for θ = (w, u). In this case, ∫ Φ dρ is a distributional neural network that computes an output for each of the n training examples (like a standard neural network, it also computes a weighted sum over hidden units). We can compute the distributional version of the regularized logistic loss in equation 3.2 by setting V(θ) := λ‖θ‖_2^2 and R(a_1, . . . , a_n) := Σ_{i=1}^n log(1 + exp(−y_i a_i)).
We will define L′[ρ] : R^{d+1} → R with L′[ρ](θ) := ⟨R′(∫ Φ dρ), Φ(θ)⟩ + V(θ) and v[ρ](θ) := −∇_θ L′[ρ](θ). Informally, L′[ρ] is the gradient of L with respect to ρ, and v is the induced velocity field. For the standard Wasserstein gradient flow dynamics, ρ_t evolves according to
(d/dt) ρ_t = −∇ · (v[ρ_t] ρ_t)   (5.1)
where ∇· denotes the divergence of a vector field. For neural networks, these dynamics formally define continuous-time gradient descent when the hidden layer has infinite size (see Theorem 2.6 of Chizat & Bach (2018), for instance). We propose the following modification of the Wasserstein gradient flow dynamics:
(d/dt) ρ_t = −σρ_t + σU_d − ∇ · (v[ρ_t] ρ_t)   (5.2)
where U_d is the uniform distribution on S^d. In our perturbed dynamics, we add very small uniform noise over U_d, which ensures that at all time-steps, there is sufficient mass in a descent direction for the algorithm to decrease the objective. For infinite-size neural networks, one can informally interpret this as re-initializing a very small fraction of the neurons at every step of gradient descent. We prove convergence to a global optimizer in time polynomial in 1/ε, d, and the regularity parameters.
Theorem 5.3 (Theorem E.4 with regularity parameters omitted). Suppose that Φ and V are 2-homogeneous and the regularity conditions of Assumption 5.1 are satisfied. Also assume that from starting distribution ρ_0, a solution to the dynamics in equation 5.2 exists. Define L⋆ := inf_ρ L[ρ]. Let ε > 0 be a desired error threshold and choose σ := exp(−d log(1/ε) poly(k, L[ρ_0] − L⋆)) and t_ε := (d^2/ε^4) poly(log(1/ε), k, L[ρ_0] − L⋆), where the regularity parameters for Φ, V, and R are hidden in the poly(·). Then, perturbed Wasserstein gradient flow converges to an ε-approximate global minimum in t_ε time:
min_{0≤t≤t_ε} L[ρ_t] − L⋆ ≤ ε.
We provide a theorem statement that includes regularity parameters in Section E.1. We prove the theorem in Section E.2. As a technical detail, Theorem 5.3 requires that a solution to the dynamics exists. We can remove this assumption by analyzing a discrete-time version of equation 5.2:
ρ_{t+1} := ρ_t + η(−σρ_t + σU_d − ∇ · (v[ρ_t] ρ_t))
and additionally assuming Φ and V have Lipschitz gradients. In this setting, a polynomial time convergence result also holds. We state the result in Section E.3. An implication of our Theorem 5.3 is that for infinite networks, we can optimize the weakly-regularized logistic loss in time polynomial in the problem parameters and λ^{−1}. By Theorem 2.2, we only require λ^{−1} = poly(n) to approximate the maximum margin within a constant factor. Thus, for infinite networks, we can approximate the max margin within a constant factor in polynomial time.
5 The relu activation is non-differentiable at 0 and hence the gradient flow is not well-defined.
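The discrete-time dynamics above have a direct finite-particle analogue: represent ρ_t by m equally weighted particles, move each particle along the velocity field v[ρ_t], and replace a small fraction of the mass with fresh uniform samples from the sphere at every step (the "re-initialize a few neurons" interpretation). The sketch below instantiates this for the logistic-loss setting of Example 5.2 with relu units; the data, hyperparameters, and the 1/n-averaged loss are our own illustrative choices, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(3)
relu = lambda z: np.maximum(z, 0.0)

# Toy data and hyperparameters (illustrative choices only).
d, n, m = 4, 30, 300                       # input dim, sample size, number of particles
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])                       # a simple separable toy labeling
lam, eta, sigma, steps = 1e-3, 0.2, 1e-3, 3000

# rho_t is represented by m equally weighted particles theta_j = (w_j, u_j) in R^{d+1}.
theta = rng.standard_normal((m, d + 1))
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

def objective_and_velocity(theta):
    w, U = theta[:, 0], theta[:, 1:]
    act = relu(X @ U.T)                            # act[i, j] = relu(u_j^T x_i)
    a = act @ w / m                                # a_i = integral of Phi_i d rho
    obj = np.mean(np.logaddexp(0.0, -y * a)) + lam * np.mean(np.sum(theta**2, axis=1))
    r_prime = -y / (n * (1.0 + np.exp(np.clip(y * a, -50, 50))))   # dR/da_i for the averaged loss
    grad_w = act.T @ r_prime + 2 * lam * w         # gradient of L'[rho](theta_j) in w_j
    S = (X @ U.T > 0).astype(float)                # relu'(u_j^T x_i)
    grad_U = (X.T @ (S * r_prime[:, None])).T * w[:, None] + 2 * lam * U
    return obj, np.column_stack([grad_w, grad_U])

for t in range(steps):
    obj, grad = objective_and_velocity(theta)
    if t % 1000 == 0:
        print(t, obj)                              # the objective should trend downward
    theta = theta - eta * grad                     # move mass along the velocity field v[rho_t]
    # Perturbation term: swap an (eta * sigma)-fraction of particles for uniform
    # samples from the sphere -- "re-initialize a small fraction of the neurons".
    mask = rng.random(m) < eta * sigma
    if mask.any():
        fresh = rng.standard_normal((mask.sum(), d + 1))
        theta[mask] = fresh / np.linalg.norm(fresh, axis=1, keepdims=True)
```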
Chizat & Bach (2018) acknowledge this same difficulty with relu. 6 SIMULATIONS We first compare the generalization of neural networks and kernel methods for classification and regression. In Figure 2 we plot the generalization error and predicted generalization upper bounds6 of a trained neural network against a `2 kernel method with relu features as we vary n. Our data comes from a synthetic distribution generated by a neural network with 6 hidden units; we provide a detailed setup in Section F.1. For classification we plot 0-1 error, whereas for regression we plot squared error. The variance in the neural network generalization bound for classification likely occured because we did not tune learning rate and training time, so the optimization failed to find the best margin. The plots show that two-layer networks clearly outperform kernel methods in test error as n grows. However, there seems to be looseness in the bounds: the kernel generalization bound appears to stay constant with n (as predicted by our theory for regression), but the test error decreases. We also plot the dependence of the test error and margin on the hidden layer size in Figure 3 for synthetic data generated from a ground truth network with 10 hidden units and also MNIST. The plots indicate that test error is decreasing in hidden layer size while margin is increasing, as Theorem 3.3 predicts. We provide more details on the experimental setup in Section F.2. In Section F.3, we verify the convergence of a simple neural network to the max-margin solution as regularization decreases. In Section F.4, we train modified WideResNet architectures on CIFAR10 and CIFAR100. Although ResNet is not homogeneous, we still report improvements in generalization from annealing the weight decay during training, versus staying at a fixed decay rate. 7 CONCLUSION We have made the case that maximizing margin is one of the inductive biases of relu networks obtained from optimizing weakly-regularized cross-entropy loss. Our framework allows us to directly analyze generalization properties of the network without considering the optimization algorithm used to obtain it. Using this perspective, we provide a simple explanation for why over-parametrization can improve generalization. It is a fascinating question for future work to characterize other generalization properties of the max-margin solution. On the optimization side, we make progress towards understanding over-parametrized gradient descent by analyzing infinite-size neural networks. A natural direction for future work is to apply our theory to optimize the margin of finite-sized neural networks. 6We compute the leading term that is linear in the norm or inverse margin from the bounds in Proposition 3.1 and Lemmas 4.2, D.8, and D.9. A MISSING PROOFS IN SECTION 2 We first show that Lλ does indeed have a global minimizer. Claim A.1. In the setting of Theorems 2.1 and A.3, arg minΘ Lλ(Θ) exists. Proof. We will argue in the setting of Theorem 2.1 where Lλ is the multi-class cross entropy loss, because the logistic loss case is analogous. We first note that Lλ is continuous in Θ because f is continuous in Θ and the term inside the logarithm is always positive. Next, define b , infΘ Lλ(Θ) > 0. Then we note that for ‖Θ‖ > (b/λ)1/r , M , we must have Lλ(Θ) > b. It follows that inf‖Θ‖≤M Lλ(Θ) = infΘ Lλ(Θ). However, there must be a value Θλ which attains inf‖Θ‖≤M Lλ(Θ), because {Θ : ‖Θ‖ ≤ M} is a compact set and Lλ is continuous. Thus, infΘ Lλ(Θ) is attained by some Θλ. 
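As a compact stand-in for the Figure 2-style comparison described in Section 6 above, the following sketch generates data from a small ground-truth relu network, trains a two-layer network on all layers, and trains the kernel baseline (a linear SVM on fixed random relu features). The scikit-learn models, sizes, and data generator are our own simplifications of the setup in Section F.1, not the paper's exact protocol; the script simply prints the two test errors so they can be compared.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import LinearSVC

rng = np.random.default_rng(8)
relu = lambda z: np.maximum(z, 0.0)

# Labels come from a small relu "teacher" network, loosely following Section F.1.
d, n_train, n_test = 20, 400, 4000
W_true = rng.standard_normal((6, d))
v_true = rng.standard_normal(6)

def make_data(n):
    X = rng.standard_normal((n, d))
    y = np.where(relu(X @ W_true.T) @ v_true > 0, 1, -1)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

# Two-layer network with both layers trained and weak ell_2 regularization.
net = MLPClassifier(hidden_layer_sizes=(512,), activation="relu",
                    alpha=1e-5, max_iter=3000, random_state=0)
net.fit(X_tr, y_tr)

# Kernel baseline: fix random relu features, train only a linear top layer (ell_2 SVM).
U = rng.standard_normal((512, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)
svm = LinearSVC(C=10.0, max_iter=20000)
svm.fit(relu(X_tr @ U.T), y_tr)

print("two-layer net test error:", 1 - net.score(X_te, y_te))
print("kernel baseline test error:", 1 - svm.score(relu(X_te @ U.T), y_te))
```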
A.1 MISSING PROOFS FOR MULTI-CLASS SETTING Towards proving Theorem 2.1, we first show as we decrease λ, the norm of the solution ‖Θλ‖ grows. Lemma A.2. In the setting of Theorem 2.1, as λ→ 0, we have ‖Θλ‖ → ∞. To prove Theorem 2.1, we rely on the exponential scaling of the cross entropy: Lλ can be lower bounded roughly by exp(−‖Θλ‖γλ), but also has an upper bound that scales with exp(−‖Θλ‖γ?). By Lemma A.2, we can take large ‖Θλ‖ so the gap γ?−γλ vanishes. This proof technique is inspired by that of Rosset et al. (2004a). Proof of Theorem 2.1. For any M > 0 and Θ with γΘ , mini ( f(Θ̄;xi)−maxj 6=yi f(Θ̄;xi) ) , Lλ(MΘ) = 1 n n∑ i=1 − log exp(M afyi(Θ;xi))∑l j=1 exp(M afj(Θ;xi)) + λMr‖Θ‖r (by the homogeneity of f ) = 1 n n∑ i=1 − log 1 1 + ∑ j 6=yi exp(M a(fj(Θ;xi)− fyi(Θ;xi))) + λMr‖Θ‖r (A.1) ≤ log(1 + (l − 1) exp(−MaγΘ)) + λMr‖Θ‖r (A.2) We can also apply ∑ j 6=yi exp(M a(fj(Θ;xi) − fyi(Θ;xi))) ≥ max exp(Ma(fj(Θ;xi) − fyi(Θ;xi))) = exp γΘ in order to lower bound equation A.1 and obtain Lλ(MΘ) ≥ 1 n log(1 + exp(−MaγΘ)) + λMr‖Θ‖r (A.3) Applying equation A.2 with M = ‖Θλ‖ and Θ = Θ?, noting that ‖Θ?‖ ≤ 1, we have: Lλ(Θ ?‖Θλ‖) ≤ log(1 + (l − 1) exp(−‖Θλ‖aγ?)) + λ‖Θλ‖r (A.4) Next we lower bound Lλ(Θλ) by applying equation A.3, Lλ(Θλ) ≥ 1 n log(1 + exp(−‖Θλ‖aγλ)) + λ‖Θλ‖r (A.5) Combining equation A.4 and equation A.5 with the fact that Lλ(Θλ) ≤ Lλ(Θ?‖Θλ‖) (by the global optimality of Θλ), we have ∀λ > 0, n log(1 + (l − 1) exp(−‖Θλ‖aγ?)) ≥ log(1 + exp(−‖Θλ‖aγλ)) Recall that by Lemma A.2, as λ → 0, we have ‖Θλ‖ → ∞. Therefore, exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ) → 0. Thus, we can apply Taylor expansion to the equation above with respect to exp(−‖Θλ‖aγ?) and exp(−‖Θλ‖aγλ). If max{exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ)} < 1, then we obtain n(l − 1) exp(−‖Θλ‖aγ?) ≥ exp(−‖Θλ‖aγλ)−O(max{exp(−‖Θλ‖aγ?)2, exp(−‖Θλ‖aγλ)2}) We claim this implies that γ? ≤ lim infλ→0 γλ. If not, we have lim infλ→0 γλ < γ? , which implies that the equation above is violated with sufficiently large ‖Θλ‖ (‖Θλ‖ log(2(`− 1)n)1/a would suffice). By Lemma A.2, ‖Θλ‖ → ∞ as λ→ 0 and therefore we get a contradiction. Finally, we have γλ ≤ γ? by definition of γ?. Hence, limλ→0 γλ exists and equals γ?. Now we fill in the proof of Lemma A.2. Proof of Lemma A.2. For the sake of contradiction, we assume that ∃C > 0 such that for any λ0 > 0, there exists 0 < λ < λ0 with ‖Θλ‖ ≤ C. We will determine the choice of λ0 later and pick λ such that ‖Θλ‖ ≤ C. Then the logits (the prediction fj(Θ, xi) before softmax) are bounded in absolute value by some constant (that depends on C), and therefore the loss function − log exp(fyi (Θ;xi))∑l j=1 exp(fj(Θ;xi)) for every example is bounded from below by some constant D > 0 (depending on C but not λ.) Let M = λ−1/(r+1), we have that 0 < D ≤ Lλ(Θλ) ≤ Lλ(MΘ?) (by the optimality of Θλ) ≤ − log 1 1 + (l − 1) exp(−Maγ?) + λMr (by equation A.2) = log(1 + (l − 1) exp(−λ−a/(r+1)γ?)) + λ1/(r+1) ≤ log(1 + (l − 1) exp(−λ−a/(r+1)0 γ?)) + λ 1/(r+1) 0 Taking a sufficiently small λ0, we obtain a contradiction and complete the proof. A.2 FULL BINARY CLASSIFICATION SETTING For completeness, we state and prove our max-margin results for the setting where we fit binary labels yi ∈ {−1,+1} (as opposed to indices in [l]) and redefining f(Θ; ·) to assign a single real-valued score (as opposed to a score for each label). This lets us work with the simpler λ-regularized logistic loss: Lλ(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖r As before, let Θλ ∈ arg minLλ(Θ), and define the normalized margin γλ by γλ , mini yif(Θ̄λ;xi). 
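The two quantities just defined, the λ-regularized logistic loss and the normalized margin, are easy to compute explicitly for a two-layer relu network (so r = 2 and homogeneity degree a = 2); a minimal sketch with toy parameters of our own:

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)

def f(theta, X):
    # two-layer relu network without biases: f(Theta; x) = w^T relu(U x)
    U, w = theta
    return relu(X @ U.T) @ w

def frob_norm(theta):
    U, w = theta
    return np.sqrt(np.sum(U**2) + np.sum(w**2))

def regularized_logistic_loss(theta, X, y, lam, r=2):
    # L_lambda(Theta) = (1/n) sum_i log(1 + exp(-y_i f(Theta; x_i))) + lambda ||Theta||^r
    return np.mean(np.logaddexp(0.0, -y * f(theta, X))) + lam * frob_norm(theta) ** r

def normalized_margin(theta, X, y):
    # margin of the rescaled parameters Theta / ||Theta||; since f is 2-homogeneous
    # in Theta, this equals min_i y_i f(Theta; x_i) / ||Theta||^2.
    c = frob_norm(theta)
    return np.min(y * f((theta[0] / c, theta[1] / c), X))

rng = np.random.default_rng(4)
X = rng.standard_normal((8, 3))
y = np.sign(X[:, 0])
theta = (rng.standard_normal((5, 3)), rng.standard_normal(5))
print(regularized_logistic_loss(theta, X, y, lam=1e-4))
print(normalized_margin(theta, X, y))   # may be negative: a random Theta need not separate the data
```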
Define the maximum possible normalized margin γ? , max ‖Θ‖≤1 min i yif(Θ;xi) (A.6) Theorem A.3. Assume γ? > 0 in the binary classification setting with logistic loss. Then as λ→ 0, γλ → γ?. The proof follows via simple reduction to the multi-class case. Proof of Theorem A.3. We prove this theorem via reduction to the multi-class case with l = 2. Construct f̃ : Rd → R2 with f̃1(Θ;xi) = − 12f(Θ;xi) and f̃2(Θ;xi) = 1 2f(Θ;xi). Define new labels ỹi = 1 if yi = −1 and ỹi = 2 if yi = 1. Now note that f̃ỹi(Θ;xi)−f̃j 6=ỹi(Θ;xi) = yif(Θ;xi), so the multi-class margin for Θ under f̃ is the same as binary margin for Θ under f . Furthermore, defining L̃λ(Θ) , 1 n n∑ i=1 − log exp(f̃ỹi(Θ;xi))∑2 j=1 exp(f̃j(Θ;xi)) + λ‖Θ‖r we get that L̃λ(Θ) = Lλ(Θ), and in particular, L̃λ and Lλ have the same set of minimizers. Therefore we can apply Theorem 2.1 for the multi-class setting and conclude γλ → γ? in the binary classification setting. A.3 MISSING PROOF FOR OPTIMIZATION ACCURACY Proof of Theorem 2.2. Choose B , ( 1 γ? log (l−1)(γ?)r/a λ )1/a . We can upper bound Lλ(Θ′) by computing Lλ(Θ ′) ≤ βLλ(Θλ) ≤ βLλ(BΘ?) ≤ β log(1 + (l − 1) exp(−Baγ?)) + βλBr (by equation A.2) ≤ β(l − 1) exp(−Baγ?) + βλBr (using log(1 + x) ≤ x) ≤ β λ (γ?)r/a + βλ ( 1 γ? log (l − 1)(γ?)r/a λ )r/a ≤ β λ (γ?)r/a ( 1 + ( log (l − 1)(γ?)r/a λ )r/a) , L(UB) Furthermore, it holds that ‖Θ′‖r ≤ L (UB) λ . Now we note that Lλ(Θ ′) ≤ L(UB) ≤ 2β λ (γ?)r/a ( log (l − 1)(γ?)r/a λ )r/a ≤ 1 2n for sufficiently large c depending only on a/r. Now using the fact that log(x) ≥ x1+x ∀x ≥ −1, we additionally have the lower bound Lλ(Θ′) ≥ 1n log(1 + exp(−γ ′‖Θ′‖a)) ≥ 1n exp(−γ′‖Θ′‖a) 1+exp(−γ′‖Θ′‖a) . Since L(UB) ≤ 1, we can rearrange to get γ′ ≥ − log nLλ(Θ ′) 1−nLλ(Θ′) ‖Θ′‖a ≥ − log nL (UB) 1−nL(UB) ‖Θ′‖a ≥ − log(2nL (UB)) ‖Θ′‖a The middle inequality followed because x1−x is increasing in x for 0 ≤ x < 1, and the last because L(UB) ≤ 12n . Since − log 2nL (UB) > 0 we can also apply the bound ‖Θ′‖r ≤ L (UB) λ to get γ′ ≥ −λ a/r log 2nL(UB) (L(UB))a/r = − log ( 2nβ λ (γ?)r/a ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)) βa/r γ? ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r (by definition of L(UB)) ≥ γ ? βa/r log( (γ ?)r/a 2βnλ )( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♣ − log ( 1 + ( log (l−1)(γ ?)r/a λ )r/a) ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♥ We will first bound ♣. First note that log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ = log (γ ?)r/a λ − log 2βn log (γ ?)r/a λ + log(l − 1) ≥ log (γ ?)r/a λ − log 2βn(l − 1) log (γ ?)r/a λ ≥ c− 3 c (A.7) where the last inequality follows from the fact that (γ ?)r/a λ ≥ n c(l − 1)c and β ≤ 2. Next, using the fact that log (γ ?)r/a λ ≥ 1 (2r/a−1)a/r , we note that( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)a/r ≤ ( 1 + ( 1 (2r/a − 1)a/r )−r/a)a/r ≤ 2 (A.8) Combining equation A.7 and equation A.8, we can conclude that ♣ = log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ ( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)−a/r ≥ c− 3 2c Finally, we note that if 1 + ( log (l−1)(γ ?)r/a λ )r/a is a sufficiently large constant that depends only on a/r (which can be achieved by choosing c sufficiently large), it will follow that ♥ ≤ 110 . Thus, if c ≥ 5, we can combine our bounds on ♣ and ♥ to get that γ′ ≥ γ ? 10βa/r B MISSING PROOF OF PROPOSITION 4.1 Proposition 4.1 follows simply from applying Corollary 1 of Neyshabur et al. (2014) to a hard-margin SVM problem. For completeness, we provide another proof here. 
The proof of Proposition 4.1 will consist of two steps: first, show that equation 4.2 has an optimal solution with sparsity n, and second, show that sparse solutions to equation 4.2 can be mapped to a neural network with the same margin, and vice versa. The following lemma and proof are based on Lemma 14 of Tibshirani (2013). Lemma B.1. Let supp(α) , {ū : |α(ū)| > 0}. There exists an optimal solution α? to equation 4.2 with |supp(α?)| ≤ n. For the proof of this lemma, we find it convenient to work with a minimum norm formulation which we show is equivalent to equation 4.2: min α ‖α‖1 subject to yi〈α,ϕ(xi)〉 ≥ 1 ∀i (B.1) Claim B.2. Let S ⊂ L1(Sd−1) be the set of optimizers for equation 4.2, and let S′ ⊂ L1(Sd−1) be the set of optimizers for equation B.1. If equation B.1 is feasible, for any α ∈ S, αγ`1 ∈ S ′, and for any α′ ∈ S′, α ′ ‖α′‖1 ∈ S. Proof. Let opt′ denote the optimal objective for equation B.1. We note that α ′ ‖α′‖1 is feasible for equation 4.2 with objective 1opt′ , and therefore γ`1 ≥ 1 opt′ . Furthermore, 1 2γ`1 yi ∫ ū∈Sd−1 α(ū)φ(ū >xi)dū ≥ 1 ∀i, and so αγ`1 is feasible for equation B.1 with objective 1 γ`1 . Therefore, opt′ ≤ 1γ`1 . As a result, it must hold that opt ′ = 1γ`1 , which means that α ′ ‖α′‖1 is optimal for equation 4.2, and αγ`1 is optimal for equation B.1, as desired. First, note that if equation B.1 is not feasible, then γ`1 = 0 and equation 4.2 has a trivial sparse solution, the all zeros function. Thus, it suffices to show that an optimal solution to equation B.1 exists that is n-sparse, since by Lemma B.2 equation B.1 and equation 4.2 have equivalent solutions up to a scaling. We begin by taking the dual of equation B.1. Claim B.3. The dual of equation B.1 has form max λ∈Rn λ>~1 subject to ∣∣∣∣∣ n∑ i=1 λiyiφ(ū >xi) ∣∣∣∣∣ ≤ 1 ∀ū ∈ Sd−1 λi ≥ 0 For any primal optimal solution α? and dual optimal solution λ?, it must hold that n∑ i=1 λ?i yiφ(ū >xi) = sign(α ?(ū)) ⇐⇒ α?(ū) 6= 0 (B.2) Proof. The dual form can be solved for by computation. By strong duality, equation B.2 must follow from the KKT conditions. Now define the mapping v : Sd−1 → Rn with vi(ū) , yiφ(ū>xi). We will show a general result about linearly dependent v(ū) for ū ∈ supp(α?), after which we can reduce directly to the proof of Tibshirani (2013). Claim B.4. Let α? be any optimal solution. Suppose that there exists S ⊆ supp(α?) such that {v(ū) : ū ∈ S} forms a linearly dependent set, i.e.∑ ū∈S cūv(ū) = ~0 (B.3) for coefficients c. Then ∑ ū∈S cū sign(α ?(ū)) = 0. Proof. Let λ? be any dual optimal solution, then λ?>v(ū) = sign(α?(ū)) ∀ū ∈ supp(α?) by Claim B.3. Thus, we apply λ?> to both sides of equation B.3 to get the desired statement. Proof of Lemma B.1. The rest of the proof follows Lemma 14 in Tibshirani (2013). The lemma argues that if the conclusion of Claim B.4 holds and an optimal solution α? has S ⊆ supp(α?) with {v(ū) : ū ∈ S} linearly dependent, we can construct a new α′ with ‖α′‖1 = ‖α?‖1 and supp(α′) ⊂ supp(α?) (where the inclusion is strict). Thus, if we consider an optimal α? with minimal support, it must follow that {v(ū) : ū ∈ supp(α?)} is a linearly independent set, and therefore |supp(α?)| ≤ n. We can now complete the proof of Proposition 4.1. Proof of Proposition 4.1. For ease of notation, we will parametrize a two-layer network with m units by top layer weights w1, . . . , wm ∈ R and bottom layer weights u1, . . . , um ∈ Rd. 
As before, we use Θ to refer to the collection of parameters, so the network computes the real-valued function f(Θ;x) = m∑ j=1 wjφ(u > j x) Note that we simply renamed the variables from the parametrization of equation 3.1. We first apply Lemma B.1 to conclude that equation 4.2 admits a n-sparse optimal solution α?. Because of sparsity, we can now abuse notation and treat α? as a real-valued function such that∑ ū∈supp(α?) |α?(ū)| ≤ 1. We construct Θ corresponding to a two-layer network with m ≥ n hidden units and normalized margin at least γ`12 . For clarity, we let W correspond to the top layer weights and U correspond to the bottom layer weights. For every ū ∈ supp(α), we let Θ have a corresponding hidden unit j with (wj , uj) = ( sign(α?(ū)) √ |α?(ū)| 2 , √ |α?(ū)| 2 ū ) , and set the remaining hidden units to ~0. This is possible because m ≥ n. Now f(Θ;x) = m∑ j=1 wjφ(u > j x) = 1 2 ∑ ū∈supp(α?) α?(ū)φ(ū>x) Furthermore, ‖Θ‖22 = m∑ j=1 w2j + ‖uj‖22 = ∑ ū∈supp(α) |α?(ū)| 2 + |α?(ū)| 2 ‖ū‖22 = ∑ ū∈supp(α) |α?(ū)| ≤ 1 Thus it follows that Θ has normalized margin at least γ`1/2, so γ ?,m ≥ γ`1/2. To conclude, we show that γ?,m ≤ γ`1/2. Let Θ?,m denote the parameters obtaining optimal m-unit margin γ?,m with hidden units (w?,mj , u ?,m j ) for j ∈ [m]. We can construct α to put a scaled delta mass of 2w?,mj ‖u ?,m j ‖2 on ū ?,m j for j ∈ [m]. It follows that ‖α‖1 = m∑ j=1 2|w?,mj |‖u ?,m j ‖2 ≤ m∑ j=1 w?,mj 2 + ‖u?,mj ‖ 2 2 = ‖Θ?,m‖22 ≤ 1 Furthermore, ∫ Sd−1 α(ū)φ(ū>x) = 2 m∑ j=1 w?,mj ‖u ?,m j ‖2φ((ū ?,m j ) >x) = 2 m∑ j=1 w?,mj φ(u ?,m j > x) = 2f(Θ?,m;x) Thus, α is a feasible solution to equation 4.2 with objective value at least 2γ?,m. Therefore, γ`1 ≥ 2γ?,m, so γ?,m = γ`1/2. C RADEMACHER COMPLEXITY AND GENERALIZATION ERROR We prove the generalization error bounds stated in Proposition 3.1 and Lemma 4.2 via Rademacher complexity and margin theory. Assume that our data X,Y are drawn i.i.d. from ground truth distribution pdata supported on X × Y . For some hypothesis classF of real-valued functions, we define the empirical Rademacher complexity R̂(F) as follows: R̂(F) , 1 n E i [ sup f∈F n∑ i=1 if(xi) ] where i are independent Rademacher random variables. For a classifier f , following the notation of Section 3 we will use L(f) , Pr(x,y)∼pdata(yf(x) ≤ 0) to denote the population 0-1 loss of the classifier f . The following classical theorem (Koltchinskii et al., 2002), (Kakade et al., 2009) bounds generalization error in terms of the Rademacher complexity and margin loss. Theorem C.1 (Theorem 2 of Kakade et al. (2009)). Let (xi, yi)ni=1 be drawn iid from pdata. We work in the binary classification setting, so Y = {−1, 1}. Assume that for all f ∈ F , we have supx∈X f(x) ≤ C. Then with probability at least 1− δ over the random draws of the data, for every γ > 0 and f ∈ F , L(f) ≤ 1 n n∑ i=1 1(yif(xi) < γ) + 4R̂(F) γ + √ log log2 4C γ n + √ log(1/δ) 2n C.1 PROOF OF PROPOSITION 3.1 We will prove Proposition 3.1 by applying the Rademacher complexity bounds of Golowich et al. (2017) with Theorem C.1. First, we show the following lemma bounding the generalization of neural networks whose weight matrices have bounded Frobenius norms. Lemma C.2. Define the hypothesis class FK over depth-K neural networks by FK = { f(Θ; ·) : ‖Wj‖F ≤ 1√ K ∀j } Let C , supx∈X ‖x‖2. Recall that L(Θ) denotes the 0-1 population loss L(f(Θ; ·)). Then for any f(Θ; ·) ∈ FK classifying the training data correctly with unnormalized margin γΘ , mini yif(Θ;xi) > 0, with probability at least 1− δ, L(Θ) . 
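Before turning to the Rademacher-complexity proofs, here is a short numerical check of the dictionary used in the proof of Proposition 4.1 above: an n-sparse lifted solution α, supported on finitely many unit directions, maps to a two-layer relu network computing ⟨α, ϕ(x)⟩/2 with ‖Θ‖_F² = ‖α‖₁. The random α and the toy sizes are our own choices for illustration.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(5)

d, n_support = 6, 4
# A sparse lifted solution: signed weights alpha on finitely many unit directions.
directions = rng.standard_normal((n_support, d))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
alpha = rng.uniform(-0.4, 0.4, size=n_support)

# Construction from the proof of Proposition 4.1: one hidden unit per support direction.
w = np.sign(alpha) * np.sqrt(np.abs(alpha) / 2.0)         # top-layer weights
U = np.sqrt(np.abs(alpha) / 2.0)[:, None] * directions    # bottom-layer weights

x = rng.standard_normal(d)
net_output = w @ relu(U @ x)
lifted_output = alpha @ relu(directions @ x)              # <alpha, phi(x)> on the support
print(np.isclose(net_output, lifted_output / 2.0))        # True: f(Theta; x) = <alpha, phi(x)> / 2
print(np.isclose(np.sum(w**2) + np.sum(U**2), np.abs(alpha).sum()))  # ||Theta||_F^2 = ||alpha||_1
```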
C γΘK(K−1)/2 √ n + √ log log2 4C γΘ n + √ log(1/δ) n (C.1) Note the dependence on the unnormalized margin rather than the normalized margin. Proof. We first claim that supf(Θ;·)∈FK supx∈X f(Θ;x) ≤ C. To see this, for any f(Θ; ·) ∈ FK , f(Θ;x) = WKφ(· · ·φ(W1x) · · · ) ≤ ‖WK‖F ‖φ(WK−1φ(· · ·φ(W1x) · · · )‖2 ≤ ‖WK‖F ‖WK−1φ(· · ·φ(W1x) · · · )‖2 (since φ is 1-Lipschitz and φ(0) = 0, so φ performs a contraction) < ‖x‖2 ≤ C (repeatedly applying this argument and using ‖Wj‖F < 1) Furthermore, by Theorem 1 of Golowich et al. (2017), R̂(FK) has upper bound R̂(FK) . C K(K−1)/2 √ n Thus, we can apply Theorem C.1 to conclude that for all f(Θ; ·) ∈ FK and all γ > 0, with probability 1− δ, L(Θ) . 1 n n∑ i=1 1(yif(Θ;xi) < γ) + C γK(K−1)/2 √ n + √ log log2 4C γ n + √ log(1/δ) n In particular, by definition choosing γ = γΘ makes the first term on the LHS vanish and gives the statement of the lemma. Proof of Proposition 3.1. Given parameters Θ = (W1, . . . ,WK), we first construct parameters Θ̃ = (W̃1, . . . , W̃K) such that f(Θ̄; ·) and f(Θ̃; ·) compute the same function, and ‖W̃1‖2F = ‖W̃2‖2F = · · · = ‖W̃K‖2F ≤ 1K . To do this, we set W̃j = ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F Wj By construction ‖W̃j‖2F = ( ∏K k=1 ‖Wk‖2F )1/k ‖Θ‖2F = ( ∏K k=1 ‖Wk‖2F )1/k∑K k=1 ‖Wk‖2F ≤ 1 k (by the AM-GM inequality) Furthermore, we also have f(Θ̃;x) = W̃Kφ(· · ·φ(W̃1x) · · · ) = K∏ j=1 ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F WKφ(· · ·φ(W1x) · · · ) (by the homogeneity of φ) = 1 ‖Θ‖KF f(Θ;x) = f ( Θ ‖Θ‖F ;x ) (since f is K-homogeneous in Θ) = f(Θ̄;x) Now we note that by construction, L(Θ) = L(Θ̃). Now f(Θ̃; ·) must also classify the training data perfectly, has unnormalized margin γ, and furthermore f(Θ̃; ·) ∈ FK . As a result, Lemma C.2 allows us to conclude the desired statement. To conclude Corollary 3.2, we apply the above on Θλ,M and use Theorem A.3. C.2 PROOF OF KERNEL GENERALIZATION BOUNDS Let F2,φB denote the class of `2-bounded linear functionals in lifted feature space: F 2,φ B , {x 7→ 〈α,ϕ(x)〉 : α ∈ L2(Sd−1), ‖α‖2 ≤ B}. We abuse notation and write α ∈ F2,φB to indicate a linear functional from F2,φB . As before, we will use L(α) to indicate the 0-1 population loss of the classifier x 7→ 〈α,ϕ(x)〉 and let C , supx∈X ‖x‖2 be an upper bound on the norm of the data. We focus on analyzing the Rademacher complexity R̂(F2,φB ), mirroring derivations done in the past (Bartlett & Mendelson, 2002). We include our derivations here for completeness. Lemma C.3. R̂(F2,φB ) ≤ 1 nB √∑n i=1 ‖ϕ(xi)‖22. Proof. We write R̂(F2,φB ) = 1 n E i [ sup α∈F2,φB 〈α, n∑ i=1 iϕ(xi)〉 ] ≤ 1 n E i [ sup α∈F2,φB ‖α‖2 ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B · E i [∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B √√√√√E i ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 2 (via Jensen’s inequality) ≤ 1 n B √√√√√E i n∑ i=1 n∑ j=1 i j〈ϕ(xi), ϕ(xi)〉 ≤ 1 n B √√√√ n∑ i=1 ‖ϕ(xi)‖22 (terms where i 6= j cancel out) As an example, we can apply this bound to relu features: Corollary C.4. Suppose that φ is the relu activation. Let κ , Vol(Sd−1). Then R̂(F2,φB ) . B‖X‖F √ κ n √ d ≤ BC √ κ√ dn . Proof. We first show that ‖ϕ(xi)‖22 = Θ ( κ d‖xi‖ 2 2 ) . We can compute ‖ϕ(xi)‖22 = Vol(Sd−1)Eū∼Sd−1 [relu(ū>xi)2] = κ d Eū∼Sd−1 [relu( √ dū>xi) 2] = κ d 1 M2 Eu∼N (0,Id×d)[relu(u Txi) 2] (M2 is the second moment of N (0, 1)) = Θ (κ d ‖xi‖22 ) (C.2) where the last line uses the computation provided in Lemma A.1 by Du et al. (2017). Now we plug this into Lemma C.3 to get the desired bound. We will now prove Lemma 4.2. Proof of Lemma 4.2. From equation C.2, we first obtain supx∈X ‖ϕ(x)‖2 . C √ κ d . 
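The scaling just used can also be checked numerically: for relu features, E_{ū∼S^{d−1}}[relu(ū^T x)²] equals ‖x‖²/(2d) by symmetry of ū^T x, which matches the Θ(κ‖x‖²/d) estimate of equation C.2 up to the constant. A quick Monte Carlo sketch with toy dimensions of our own:

```python
import numpy as np

rng = np.random.default_rng(6)
d, num_dirs = 15, 200000

x = rng.standard_normal(d)
u = rng.standard_normal((num_dirs, d))
u /= np.linalg.norm(u, axis=1, keepdims=True)        # uniform directions on S^{d-1}

# ||phi(x)||_2^2 = kappa * E_u[relu(u^T x)^2]; estimate the expectation by Monte Carlo.
mc_estimate = np.mean(np.maximum(u @ x, 0.0) ** 2)
print(mc_estimate, np.dot(x, x) / (2 * d))           # agree up to Monte Carlo error
```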
Denote the optimizer for equation 4.3 by α`2 . Note that √ κα`2 ∈ F 2,φ 1 , and furthermore L(α`2) = L( √ κα`2). Since √ κα`2 has unnormalized margin √ κγ`2 , we apply Theorem C.1 on margin √ κγ`2 and hypothesis class F2,φ1 to get with probability 1− δ, L`2-svm = L( √ κα`2) ≤ 4R̂(F2,φ1 )√ κγ`2 + √ log log2 4 supx∈X ‖ϕ(x)‖2√ κγ`2 n + √ log(1/δ) 2n . C γ`2 √ dn + √√√√ log max{log2 C√dγ`2 , 2} n + √ log(1/δ) n (applying Corollary C.4) D MISSING PROOFS FOR COMPARISON TO KERNEL METHODS D.1 CLASSIFICATION In this section we will complete a proof of Theorem 4.3. Recall the construction of the distribution D provided in Section 4. We first provide a classifier of this data with small `1 norm. Lemma D.1. In the setting of Theorem 4.3, we have that γ`1 ≥ √ 2 4 . Proof. Consider the network f(x) = 14 ( (x>(e1 +e2)/ √ 2)+ +(x >(−e1−e2)/ √ 2)+− (x>(−e1 + e2)/ √ 2)+ − (x>(e1 − e2)/ √ 2)+ ) . The attained margin γ = √ 2 4 , so γ`1 ≥ √ 2 4 . Now we will upper bound the margin attainable by the `2 SVM. Lemma D.2 (Margin upper bound tool). In the setting of Theorem 4.3, we have γ`2 ≤ 1√ κ · ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 Proof. By the definition of γ`2 , we have that for any α with √ κ‖α‖2 ≤ 1, we have γ`2 ≤ max√ κ‖α‖2≤1 1 n n∑ i=1 〈α, yiϕ(xi)〉 Setting α = 1√ κ 1 n ∑n i=1 ϕ(xi)yi/‖ 1 n ∑n i=1 ϕ(xi)yi‖2 completes the proof. (Attentive readers may realize that this is equivalent to setting the dual variable of the convex program 4.3 to all 1’s function.) Lemma D.3. In the setting of Theorem 4.3, let (xi, yi)ni=1 be n i.i.d samples and corresponding labels from D. Let ϕ be defined in equation 4.1 with φ = relu. With high probability (at least 1− dn−10), we have ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 . √ κ/n log n+ √ κ/d Proof. Let Wi = ϕ(xi)yi. We will bound several quantities regarding Wi’s. In the rest of the proof, we will condition on the event E that ∀i, ‖xi‖22 . d log n. Note that E is a high probability event and conditioned on E, xi’s are still independent. We omit the condition on E in the rest of the proof for simplicity. We first show that assuming the following three inequalities that the conclusion of the Lemma follows. 1. ∀i, ‖Wi‖22 . κ log n . 2. σ2 , Var[ ∑ iWi] , ∑n i=1 E[‖Wi − EWi‖22] . nκ log n 3. ‖E [ ∑ Wi] ‖2 . √ κn/d. By bullets 1, 2, and Bernstein inequality, we have that with probability at least 1− dn−10 over the randomness of the data (X,Y ),∥∥∥∥∥ n∑ i=1 Wi − E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 . √ κ log1.5 n+ √ nκ log2 n . √ nκ log2 n By bullet 3 and equation above, we complete the proof with triangle inequality:∥∥∥∥∥ n∑ i=1 Wi ∥∥∥∥∥ 2 ≤ ∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 + √ nκ log2 n . √ nκ log2 n+ √ κn/d Therefore, it suffices to prove bullets 1, 2 and 3. Note that 2 is a direct corollary of 1 so we will only prove 1 and 3. We start with 3: By the definition of the `2 norm in L2(Sd−1) and the independence of (xi, yi)’s, we can rewrite∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 2 = κ · n2 E ū∼Sd−1 [ E (x,y)∼D ϕ(x)[ū] · y ]2 (D.1) Let ū = (ū1, . . . , ūd) and ū−2 = (ū3, . . . , ūd) ∈ Rd−2, and define τ
1. What is the main contribution of the paper regarding margin theory for neural networks?
2. What are the strengths of the paper, particularly in its theoretical analysis and results?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Review
Review UPDATE: after revisions and discussion. There seem to be some interesting results presented in this paper, which I think would be good to have discussed at the conference. This is conditional on further revisions of the work by the authors.
This paper studies margin theory for neural nets.
1. First, it is shown that the margin of the solution to the regularized problem approaches the max-margin solution.
2. Then a bound is given for using an approximate solution to the above optimization problem instead of an exact one. Note that this bound depends on the size of the network via the parameter a.
3. Then two-layer relu networks are studied. It is shown that the max margin is monotonically non-decreasing in the size of the network. Note, however, that it is hard to relate this result to inexact solutions, since the bound in that case, as pointed out above, also depends on the size of the network.
4. The paper also provides a comparison with kernel methods and simulations, and shows that perturbed Wasserstein flows find global optimizers in polynomial time.
The paper argues that over-parameterization is good for generalization since the margin grows with the number of parameters. However, it should also be noted that the radius of the data may also grow (and in the case of the bounds, it seems to be the radius of the data in the lifted space that increases with the size of the network). I hope the authors can clarify this and points 2 and 3 above in their response. In its current form, the paper is below the acceptance threshold for me.
ICLR
Title On the Margin Theory of Feedforward Neural Networks Abstract Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks. In the case of two-layer networks, an infinite-width neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time. 1 INTRODUCTION In deep learning, over-parametrization refers to the widely-adopted technique of using more parameters than necessary (Krizhevsky et al., 2012; Livni et al., 2014). Both computationally and statistically, over-parametrization is crucial for learning neural nets. Controlled experiments demonstrate that over-parametrization eases optimization by smoothing the non-convex loss surface (Livni et al., 2014; Sagun et al., 2017). Statistically, increasing model size without any regularization still improves generalization even after the model interpolates the data perfectly (Neyshabur et al., 2017b). This is surprising given the conventional wisdom on the trade-off between model capacity and generalization. In the absence of an explicit regularizer, algorithmic regularization is likely the key contributor to good generalization. Recent works have shown that gradient descent finds the minimum norm solution fitting the data for problems including logistic regression, linearized neural networks, and matrix factorization (Soudry et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a; Ji & Telgarsky, 2018). Many of these proofs require a delicate analysis of the algorithm’s dynamics, and some are not fully rigorous due to assumptions on the iterates. To the best of our knowledge, it is an open question to prove analogous results for even two-layer relu networks. (For example, the technique of Li et al. (2018) on two-layer neural nets with quadratic activations still falls within the realm of linear algebraic tools, which apparently do not suffice for other activations.) We propose a different route towards understanding generalization: making the regularization explicit. The motivations are: 1) with an explicit regularizer, we can analyze generalization without fully understanding optimization; 2) it is unknown whether gradient descent provides additional implicit regularization beyond what `2 regularization already offers; 3) on the other hand, with a sufficiently weak `2 regularizer, we can prove stronger results that apply to multi-layer relu networks. Additionally, explicit regularization is perhaps more relevant because `2 regularization is typically used in practice. Concretely, we add a norm-based regularizer to the cross entropy loss of a multi-layer feedforward neural network with relu activations. 
We show that the global minimizer of the regularized objective achieves the maximum normalized margin among all the models with the same architecture, if the regularizer is sufficiently weak (Theorem 2.1). Informally, for models with norm 1 that perfectly classify the data, the margin is the smallest difference across all datapoints between the classifier score for the true label and the next best score. We are interested in normalized margin because its inverse bounds the generalization error (see recent work (Bartlett et al., 2017; Neyshabur et al., 2017a; 2018; Golowich et al., 2017) or Proposition 3.1). Our work explains why optimizing the training loss can lead to parameters with a large margin and thus, better generalization error (see Corollary 3.2). We further note that the maximum possible margin is non-decreasing in the width of the architecture, and therefore the generalization bound of Corollary 3.2 can only improve as the size of the network grows (see Theorem 3.3). Thus, even if the dataset is already separable, it could still be useful to increase the width to achieve larger margin and better generalization. At a first glance, it might seem counterintuitive that decreasing the regularizer is the right approach. At a high level, we show that the regularizer only serves as a tiebreaker to steer the model towards choosing the largest normalized margin. Our proofs are simple, oblivious to the optimization procedure, and apply to any norm-based regularizer. We also show that an exact global minimum is unnecessary: if we approximate the minimum loss within a constant factor, we obtain the max-margin within a constant factor (Theorem 2.2). To better understand the neural network max-margin, in Section 4 we compare the max-margin two-layer network obtained by optimizing both layers jointly to kernel methods corresponding to fixing random weights for the hidden layer and solving a 2-norm max-margin on the top layer. We design a simple data distribution (Figure 1) where neural net margin is large but the kernel margin is small. This translates to an Ω( √ d) factor gap between the generalization error bounds for the two approaches and demonstrates the power of neural nets compared to kernel methods. We experimentally confirm that a gap does indeed exist. In the setting of two-layer networks, we also study how over-parametrization helps optimization. Prior works (Mei et al., 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2018; Rotskoff & Vanden-Eijnden, 2018) show that gradient descent on two-layer networks becomes Wasserstein gradient flow over parameter distributions in the limit of infinite neurons. For this setting, we prove that perturbed Wasserstein gradient flow finds a global optimizer in polynomial time. Finally, we empirically validate several claims made in this paper. First, we confirm that neural networks do generalize better than kernel methods. Second, we show that for two-layer networks, the test error decreases and margin increases as the hidden layer grows, as predicted by our theory. 1.1 ADDITIONAL RELATED WORK Zhang et al. (2016) and Neyshabur et al. (2017b) show that neural network generalization defies conventional explanations and requires new ones. Neyshabur et al. (2014) initiate the search for the “inductive bias” of neural networks towards solutions with good generalization. Recent papers (Hardt et al., 2015; Brutzkus et al., 2017; Chaudhari et al., 2016) study inductive bias through training time and sharpness of local minima. Neyshabur et al. 
(2015a) propose a new steepest descent algorithm in a geometry invariant to weight rescaling and show that this improves generalization. Morcos et al. (2018) relate generalization in deep nets to the number of “directions” in the neurons. Other papers (Gunasekar et al., 2017; Soudry et al., 2018; Nacson et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a) study implicit regularization towards a specific solution. Ma et al. (2017) show that implicit regularization can help gradient descent avoid overshooting optima. Rosset et al. (2004a;b) study logistic regression with a weak regularization and show convergence to the max margin solution. We adopt their techniques and extend their results. A line of work initiated by Neyshabur et al. (2015b) has focused on deriving tighter norm-based Rademacher complexity bounds for deep neural networks (Bartlett et al., 2017; Neyshabur et al., 2017a; Golowich et al., 2017) and new compression based generalization properties (Arora et al., 2018b). Dziugaite & Roy (2017) manage to compute non-vacuous generalization bounds from PAC-Bayes bounds. Neyshabur et al. (2018) investigate the Rademacher complexity of two-layer networks and propose a bound that is decreasing with the distance to initialization. Liang & Rakhlin (2018) and Belkin et al. (2018) study the generalization of kernel methods. On the optimization side, Soudry & Carmon (2016) explain why over-parametrization can remove bad local minima. Safran & Shamir (2016) show that over-parametrization can improve the quality of the random initialization. Haeffele & Vidal (2015), Nguyen & Hein (2017), and Venturi et al. (2018) show that for sufficiently overparametrized networks, all local minima are global, but do not show how to find these minima via gradient descent. Du & Lee (2018) show that for two-layer networks with quadratic activations, all second-order stationary points are global minimizers. Arora et al. (2018a) interpret over-parametrization as a means of implicit acceleration during optimization. Mei et al. (2018), Chizat & Bach (2018), and Sirignano & Spiliopoulos (2018) take a distributional view of over-parametrized networks. Chizat & Bach (2018) show that Wasserstein gradient flow converges to global optimizers under structural assumptions. We extend this to a polynomial-time result. 1.2 NOTATION Let R denote the set of real numbers. We will use ‖·‖ to indicate a general norm, with ‖·‖1, ‖·‖2, ‖·‖∞ denoting the `1, `2, `∞ norms on finite dimensional vectors, respectively, and ‖ · ‖F denoting the Frobenius norm on a matrix. In general, we use ¯ on top of a symbol to denote a unit vector: when applicable, ū , u/‖u‖, where the norm ‖ · ‖ will be clear from context. Let Sd−1 , {ū ∈ Rd : ‖ū‖2 = 1} be the unit sphere in d dimensions. Let Lp(Sd−1) be the space of functions on Sd−1 for which the p-th power of the absolute value is Lebesgue integrable. For α ∈ Lp(Sd−1), we overload notation and write ‖α‖p , (∫ Sd−1 |α(ū)| pdū )1/p . Additionally, for α1 ∈ L1(Sd−1) and α2 ∈ L∞(Sd−1) or α1, α2 ∈ L2(Sd−1), we can define 〈α1, α2〉 , ∫ Sd−1 α1(ū)α2(ū)dū < ∞. Furthermore, we will use Vol(Sd−1) , ∫ Sd−1 1dū. Throughout this paper, we reserve the symbol X = [x1, . . . , xn] to denote the collection of datapoints (as a matrix), and Y = [y1, . . . , yn] to denote labels. We use d to denote the dimension of our data. We often use Θ to denote the parameters of a prediction function f , and f(Θ;x) to denote the prediction of f on datapoint x. 
We will use the notation .,& to mean less than or greater than up to a universal constant, respectively. Unless stated otherwise, O(·),Ω(·) denote some universal constant in upper and lower bounds, respectively. The notation poly denotes a universal constant-degree polynomial in the arguments. 2 WEAK REGULARIZER GUARANTEES MAX MARGIN SOLUTIONS In this section, we will show that when we add a weak regularizer to cross-entropy loss with a positive-homogeneous prediction function, the normalized margin of the optimum converges to some max-margin solution. As a concrete example, feedforward relu networks are positive-homogeneous. Let l be the number of labels, so the i-th example has label yi ∈ [l]. We work with a family F of prediction functions f(Θ; ·) : Rd → Rl that are a-positive-homogeneous in their parameters for some a > 0: f(cΘ;x) = caf(Θ;x),∀c > 0. We additionally require that f is continuous in Θ. For some general norm ‖ · ‖, we study the λ-regularized cross-entropy loss Lλ, defined as Lλ(Θ) , n∑ i=1 − log exp(fyi(Θ;xi))∑l j=1 exp(fj(Θ;xi)) + λ‖Θ‖r (2.1) for fixed r > 0. Let Θλ ∈ arg minLλ(Θ).1 We define the normalized margin of Θλ as: γλ , min i ( fyi(Θ̄λ;xi)−max j 6=yi fj(Θ̄λ;xi) ) (2.2) Define the ‖ · ‖-max normalized margin as γ? , max ‖Θ‖≤1 [ min i ( fyi(Θ;xi)−max j 6=yi fj(Θ;xi) )] and let Θ? be a parameter achieving this maximum. We show that with sufficiently small regularization level λ, the normalized margin γλ approaches the maximum margin γ?. Our theorem and proof are inspired by the result of Rosset et al. (2004a;b), who analyze the special case when f is a linear predictor. In contrast, our result can be applied to non-linear f as long as f is homogeneous. Theorem 2.1. Assume the training data is separable by a network f(Θ?; ·) ∈ F with an optimal normalized margin γ? > 0. Then, the normalized margin of the global optimum of the weaklyregularized objective (equation 2.1) converges to γ? as the strength of the regularizer goes to zero. Mathematically, let γλ be defined in equation 2.2. Then γλ → γ? as λ→ 0 1We formally show that Lλ has a minimizer in Claim A.1 of Section A. An intuitive explanation for our result is as follows: because of the homogeneity, the loss L(Θλ) roughly satisfies the following (for small λ, and ignoring problem parameters such as n): Lλ(Θλ) ≈ exp(−‖Θλ‖aγλ) + λ‖Θλ‖r Thus, the loss selects parameters with larger margin, while the regularization favors parameters with a smaller norm. The full proof of the theorem is deferred to Section A.1. Theorem 2.1 applies to feedforward relu networks and states that global minimizers of the weaklyregularized loss will obtain a maximum margin among all networks of the given architecture. By considering global minimizers, Theorem 2.1 provides a framework for directly analyzing generalization properties of the solution without considering details of the optimization algorithm. In Section 3 we leverage this framework and existing generalization bounds (Golowich et al., 2017) to provide a clean argument that over-parameterization can improve generalization. We can also provide an analogue of Theorem 2.1 for the binary classification setting. For this setting, our prediction is now a single real output and we train using logistic loss. We provide formal definitions and results in Section A.2. Our study of the generalization properties of the max-margin (see Section 3 and Section 4) is based in this setting. 
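To see the tiebreaking mechanism of the intuitive approximation above numerically, compare two directions with margins γ₁ < γ₂ and, for each λ, minimize exp(−c^a γ) + λ c^r over the scale c (here a = r = 2). As λ shrinks, the larger-margin direction attains a strictly smaller loss and the optimal scale grows, matching the two ingredients of the proof. A toy sketch with numbers of our own choosing:

```python
import numpy as np

def best_loss(gamma, lam, a=2, r=2):
    # minimize exp(-c^a * gamma) + lam * c^r over the scale c >= 0 (grid search)
    c = np.linspace(0.0, 200.0, 400001)
    vals = np.exp(-c**a * gamma) + lam * c**r
    i = np.argmin(vals)
    return vals[i], c[i]

gamma_small, gamma_large = 0.5, 1.0
for lam in [1e-1, 1e-3, 1e-6, 1e-9]:
    loss_small, _ = best_loss(gamma_small, lam)
    loss_large, c_large = best_loss(gamma_large, lam)
    print(lam, loss_small, loss_large, c_large)
# As lam -> 0, loss_large < loss_small and the optimal scale c grows, mirroring the
# proof: ||Theta_lambda|| -> infinity, and the exponential gap in the loss forces
# the minimizer's margin toward the maximum margin.
```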
2.1 OPTIMIZATION ACCURACY Since Lλ is typically hard to optimize exactly for neural nets, we study how accurately we need to optimize Lλ to obtain a margin that approximates γ? up to a constant. The following theorem shows that it suffices to find Θ′ achieving a constant factor multiplicative approximation of Lλ(Θλ), where λ is some sufficiently small polynomial in n, l, γ?. Though our theorem is stated for the general multi-class setting, it also applies for binary classification. We provide the proof in Section A.3. Theorem 2.2. In the setting of Theorem 2.1, suppose that we choose λ = exp(−(2r/a − 1)−a/r) (γ ?)r/a nc(l − 1)c for sufficiently large c (that only depends on r/a). For β ≤ 2, let Θ′ denote a β-approximate minimizer of Lλ, so Lλ(Θ′) ≤ βLλ(Θλ). Denote the normalized margin of Θ′ by γ′. Then γ′ ≥ γ ? 10 · βa/r . 3 GENERALIZATION PROPERTIES OF A MAXIMUM MARGIN NEURAL NETWORK In Section 2 we showed that optimizing a weakly-regularized logistic loss leads to the maximum normalized margin. We now study the direct implications of this result on the generalization properties of the solution. Specifically, we use existing Rademacher complexity bounds of Golowich et al. (2017) to present a generalization bound that depends on the network architecture only through the inverse `2-normalized margin and depth of the network (see Proposition 3.1). Next, we combine this bound with Theorem 2.1 to conclude that parameters obtained by optimizing logistic loss with weak `2-regularization will have a generalization bound that scales with the inverse of the maximum possible margin and depth. Finally, we note that the maximum possible margin can only increase as the size of the network grows, which suggests that increasing the size of the network improves the generalization of the solution (see Theorem 3.3). We consider depth-K neural networks with 1-Lipschitz, 1-positive-homogeneous activation φ for K ≥ 2. Suppose that the collection of parameters Θ is given by matrices W1, . . . ,WK . The K-layer network will compute a real-valued score f(Θ;x) ,WKφ(WK−1φ(· · ·φ(W1x) · · · )) (3.1) where we overload notation to let φ(·) denote the element-wise application of the activation φ. Let mi denote the size of the i-th hidden layer, so W1 ∈ Rm1×d,W2 ∈ Rm2×m1 , · · · ,WK ∈ R1×mK−1 . We will letM , (m1, . . . ,mK−1) denote the sequence of hidden layer sizes. We will focus on `2-regularized loss. The weakly-regularized logistic loss of the depth-K architecture with hidden layer sizesM is therefore Lλ,M(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖2F (3.2) We note that f is K-homogeneous in Θ, so the results of Section 2 apply to Lλ,M.2 Following our conventions from Section 2, we denote the optimizer of Lλ,M by Θλ,M, the normalized margin of Θλ,M by γλ,M, the max-margin solution by Θ?,M, and the max-margin by γ?,M. Our notation emphasizes the architecture of the network. Since the classifier f now predicts a single real value, we need to redefine γλ,M , min i yif(Θ̄λ,M;xi) γ?,M , max ‖Θ‖2≤1 min i yif(Θ;xi) When the data is not separable by a neural network with architectureM, we define γ?,M to be zero. Recall that X = [x1, . . . , xn] denotes the matrix with all the data points as columns, and Y = [y1, . . . , yn] denotes the labels. We sample X and Y i.i.d. from the data generating distribution pdata, which is supported on X × {−1,+1}. 
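A small sketch of the depth-K model just defined may help fix notation: it evaluates f(Θ; x), checks K-homogeneity in the parameters, and computes the normalized margin on toy data. All sizes and data below are our own illustrative choices.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
rng = np.random.default_rng(7)

def deep_relu_net(weights, x):
    # f(Theta; x) = W_K relu(W_{K-1} relu( ... relu(W_1 x) ... ))
    h = x
    for W in weights[:-1]:
        h = relu(W @ h)
    return (weights[-1] @ h).item()

def frob_norm(weights):
    return np.sqrt(sum(np.sum(W**2) for W in weights))

d, hidden, K = 6, [8, 8], 3
sizes = [d] + hidden + [1]
weights = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(K)]
x = rng.standard_normal(d)

# K-homogeneity in the parameters: scaling every weight matrix by c scales f by c^K.
c = 1.7
scaled = [c * W for W in weights]
print(np.isclose(deep_relu_net(scaled, x), c**K * deep_relu_net(weights, x)))  # True

# Normalized margin of Theta on toy data: the margin of Theta / ||Theta||_F.
X = rng.standard_normal((10, d))
y = np.sign(X[:, 0])
normalized = [W / frob_norm(weights) for W in weights]
print(min(yi * deep_relu_net(normalized, xi) for xi, yi in zip(X, y)))
```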
We can define the population 0-1 loss and training 0-1 loss of the network parametrized by Θ by L(Θ) = Pr (x,y)∼pdata [yf(Θ;x) ≤ 0] Let C , supx∈X ‖x‖2 be an upper bound on the norm of a single datapoint. Proposition 3.1 shows that the generalization error only depends on the parameters through the inverse of the margin on the training data. We obtain Proposition 3.1 by applying Theorem 1 of Golowich et al. (2017) with the standard technique of using margin loss to bound classification error. There exist other generalization bounds which depend on the margin and some normalization (Neyshabur et al., 2015b; 2017a; Bartlett et al., 2017; Neyshabur et al., 2018); we choose the bounds of Golowich et al. (2017) because they fit well with `2 normalization. In the two-layer case K = 2, the bound below also follows from Neyshabur et al. (2015b). Proposition 3.1. [Straightforward consequence of Golowich et al. (2017, Theorem 1)] Suppose φ is 1-Lipschitz and 1-positive-homogeneous. For any depth-K network f(Θ; ·) separating the data with normalized margin γ , mini yif(Θ̄;xi) > 0, with probability at least 1− δ over the draw of X,Y , L(Θ) . C γK(K−1)/2 √ n + (γ) (3.3) where (γ) , √ log log2 4C γ n + √ log(1/δ) n . Note that (γ) is typically small, and thus the above bound mainly scales with C γK(K−1)/2 √ n . 3 For completeness, we state the proof in Section C.1. By combining this bound with our Theorem 2.1 we can conclude that optimizing weakly-regularized logistic loss gives us generalization error bounds that depend on the maximum possible margin of a network with the given architecture. Corollary 3.2. In the setting of Proposition 3.1, with probability 1− δ, lim sup λ→0 L(Θλ,M) . C γ?,MK(K−1)/2 √ n + (γ?,M) (3.4) where (γ) is defined as in Proposition 3.1. Above we implicitly assume γ?,M > 0, since otherwise the right hand side of the bound is vacuous. 2Although Theorem 2.1 is written in the language of multi-class prediction where the classifier outputs l ≥ 2 scores, the results translate to single-output binary classification. See Section A.2. 3Although the 1 K(K−1)/2 factor of equation 3.3 decreases with depth K, the margin γ will also tend to decrease as the constraint ‖Θ̄‖F ≤ 1 becomes more stringent. By applying Theorem 2.2 with Proposition 3.1, we can also conclude that optimizing Lλ,M within a constant factor gives a margin, and therefore generalization bound, approximating the best possible. One consequence of Corollary 3.2 is that optimizing weakly-regularized logistic loss results in the best possible generalization bound out of all models with the given architecture. This indicates that the widely used algorithm of optimizing deep networks with `2-regularized logistic loss has an implicit bias towards solutions with good generalization. Next, we observe that the maximum normalized margin is non-decreasing with the size of the architecture. Formally, for two depth-K architecturesM = (m1, . . . ,mK−1) andM′ = (m′1, . . . ,m′K−1), we sayM ≤ M′ if mi ≤ m′i ∀i = 1, . . .K − 1. Theorem 3.3 states that ifM ≤ M′, then the max-margin over networks with architecture M′ is at least the max-margin over networks with architectureM. Theorem 3.3. Recall that γ?,M denotes the maximum normalized margin of a network with architectureM. IfM≤M′, we have γ?,M ≤ γ?,M′ . As a important consequence, the generalization error bound of Corollary 3.2 forM′ is at least as good as that forM. 
This theorem is simple to prove and follows because we can directly implement any network of architectureM using one of architectureM′, ifM ≤M′. This can explain why additional overparameterization has been empirically observed to improve generalization in two-layer networks (Neyshabur et al., 2017b): the margin does not decrease with a larger network size, and therefore Corollary 3.2 gives a better generalization bound. In Section 6, we provide empirical evidence that the test error decreases with larger network size while the margin is non-decreasing. The phenomenon in Theorem 3.3 contrasts with standard `2-normalized linear prediction. In this setting, adding more features increases the norm of the data, and therefore the generalization error bounds could also increase. On the other hand, Theorem 3.3 shows that adding more neurons (which can be viewed as learned features) can only improve the generalization of the max-margin solution. 4 NEURAL NET MAX-MARGIN VS. KERNEL METHODS We will continue our study of the max-margin neural network via comparison against kernel methods, a context in which margins have already been extensively studied. We show that two-layer networks can obtain a larger margin, and therefore better generalization guarantees, than kernel methods. Our comparison between the two methods is motivated by an equivalence between the `2 max-margin of an infinite-width two-layer network and the `1-SVM (Zhu et al., 2004) over the lifted feature space defined by the activation function applied to all possible hidden units (Neyshabur et al., 2014; Rosset et al., 2007; Bengio et al., 2006). The kernel method corresponds to the `2-SVM in this same feature space, and is equivalent to fixing random hidden layer weights and solving an `2-SVM over the top layer. In Theorem 4.3, we construct a distribution for which the generalization upper bounds for the `1-SVM on this feature space are smaller than those for the `2-SVM by a Ω( √ d) factor. Our work provides evidence that optimizing all layers of a network can be beneficial for generalization. There have been works that compare `1 and `2-regularized solutions in the context of feature selection and construct a feature space for which a generalization gap exists (e.g., see Ng (2004)). In contrast, we work in the fixed feature space of relu activations, which makes our construction particularly challenging. We will usem to denote the width of the single hidden layer of the network. Following the convention from Section 3, we will use γ?,m to denote the maximum possible normalized margin of a two-layer network with hidden layer size m (note the emphasis on the size of the single hidden layer). The depth K = 2 case of Corollary 3.2 immediately implies that optimizing weakly-regularized `2 loss over width-m two-layer networks gives parameters whose generalization upper bounds depend on the hidden layer size only through 1/γ?,m. Furthermore, from Theorem 3.3 it immediately follows that γ?,1 ≤ γ?,2 ≤ · · · ≤ γ?,∞ The work of Neyshabur et al. (2014) links γ?,m to the `1 SVM over a lifted space. Formally, we define a lifting function ϕ : Rd → L∞(Sd−1) mapping data to an infinite feature vector: x ∈ Rd → ϕ(x) ∈ L∞(Sd−1) satisfying ϕ(x)[ū] = φ(ū>x) (4.1) where φ is the activation of Section 3. We look at the margin of linear functionals corresponding to α ∈ L1(Sd−1) . 
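Although ϕ(x) is an infinite-dimensional feature vector, inner products ⟨ϕ(x), ϕ(x′)⟩ = ∫_{S^{d−1}} φ(ū⊤x) φ(ū⊤x′) dū are straightforward to estimate by Monte Carlo over ū drawn uniformly from the sphere, which is all that a kernel formulation needs. A small sketch for the relu activation; the number of sampled directions is an arbitrary illustrative choice, and the estimate is reported up to the Vol(S^{d−1}) factor:

```python
# Sketch: Monte Carlo estimate of the lifted relu Gram matrix
# <phi(x_i), phi(x_j)> / Vol(S^{d-1}) = E_{u ~ Unif(S^{d-1})}[relu(u^T x_i) relu(u^T x_j)].
# The number of sampled directions is an illustrative choice.
import numpy as np

rng = np.random.default_rng(0)

def sphere_samples(num, d):
    u = rng.standard_normal((num, d))
    return u / np.linalg.norm(u, axis=1, keepdims=True)   # uniform on S^{d-1}

def lifted_gram(X, num_dirs=200_000):
    U = sphere_samples(num_dirs, X.shape[1])               # (num_dirs, d)
    feats = np.maximum(U @ X.T, 0.0)                       # relu(u^T x), (num_dirs, n)
    return feats.T @ feats / num_dirs                      # (n, n) Gram estimate

X = rng.standard_normal((4, 3))
print(np.round(lifted_gram(X), 3))
```

The same estimator can serve as the Gram matrix of the kernel (ℓ2) problem defined below, up to the Vol(S^{d−1}) normalization.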
The 1-norm SVM (Zhu et al., 2004) over the lifted feature ϕ(x) solves for the maximum margin: γ`1 ,max α min i∈[n] yi〈α,ϕ(xi)〉 subject to ‖α‖1 ≤ 1 (4.2) where we rely on the inner product and 1-norm defined in Section 1.2. This formulation is equivalent to a hard-margin optimization on “convex neural networks” (Bengio et al., 2006). Bach (2017) also study optimization and generalization of convex neural networks. Using results from Rosset et al. (2007); Neyshabur et al. (2014); Bengio et al. (2006), our Theorem 2.1 implies that optimizing weaklyregularized logistic loss over two-layer networks is equivalent to solving equation 4.2 when the size of the hidden layer is at least n + 1, where n is the number of training examples. Proposition 4.1 essentially restates this with the minor improvement that this equivalence4 also holds when the size of the hidden layer is n. Proposition 4.1. Let γ`1 be defined in equation 4.2. Then γ`1 2 = γ ?,n = · · · = γ?,∞. For completeness, we prove Proposition 4.1 in Section B, relying on the work of Tibshirani (2013) and Rosset et al. (2004a). Importantly, the `1-max margin on the lifted feature space is obtainable by optimizing a finite neural network. We compare this to the `2 margin attainable via kernel methods. Following the setup of equation 4.2, we define the kernel problem over α ∈ L2(Sd−1): γ`2 ,max α min i∈[n] yi〈α,ϕ(xi)〉 subject to √ κ‖α‖2 ≤ 1 (4.3) where κ , Vol(Sd−1). (We scale ‖α‖2 by √ κ to make the lemma statement below cleaner.) First, γ`2 can be used to obtain a standard upper bound on the generalization error of the kernel SVM. Following the notation of Section 3, we will let L`2-svm denote the 0-1 population classification error for the optimizer of equation 4.3. Lemma 4.2. In the setting of Proposition 3.1, with probability at least 1−δ, the generalization error of the standard kernel SVM with relu feature (defined in equation 4.3) is bounded by L`2-svm . C γ`2 √ dn + `2 (4.4) where `2 , √ log max { log2 C√ dγ`2 ,2 } n + √ log(1/δ) n is typically a lower-order term. The bound above follows from standard techniques (Bartlett & Mendelson, 2002), and we provide a full proof in Section C.2. We construct a data distribution for which this lemma does not give a good bound for kernel methods, but Corollary 3.2 does imply good generalization for two-layer networks. Theorem 4.3. There exists a data distribution pdata such that the `1 SVM with relu features has a good margin: γ`1 & 1 and with probability 1− δ over the choice of i.i.d. samples from pdata, obtains generalization error L`1-svm . √ d log n n + `1 where `1 , √ log(1/δ) n is typically a lower order term. Meanwhile, with high probability the `2 SVM has a small margin: γ`2 . max {√ logn n , 1/d } and therefore the generalization upper bound from 4The factor of 1 2 is due the the relation that every unit-norm parameter Θ corresponds to an α in the lifted space with ‖α‖ = 2. Lemma 4.2 is at least Ω ( min { 1, d √ log n n }) In particular, the `2 bound is larger than the `1 bound by a Ω( √ d) factor. Although Theorem 4.3 compares upper bounds, our construction highlights properties of distributions which result in better neural network generalization than kernel method generalization. Furthermore, in Section 6 we empirically validate the gap in generalization between the two methods. We briefly overview the construction of pdata here. The full proof is in Section D.1. Proof sketch for Theorem 4.3. We base pdata on the distribution D of examples (x, y) described below. 
Here ei is the i-th standard basis vector and we use x>ei to represent the i-coordinate of x (since the subscript is reserved to index training examples).e > 3 x ... e>d x ∼ N (0, Id−2), and y = +1, x>e1 = +1, x >e2 = +1 w/ prob. 1/4 y = +1, x>e1 = −1, x>e2 = −1 w/ prob. 1/4 y = −1, x>e1 = +1, x>e2 = −1 w/ prob. 1/4 y = −1, x>e1 = −1, x>e2 = +1 w/ prob. 1/4 Figure 1 shows samples from D when there are 3 dimensions. From the visualization, it is clear that there is no linear separator for D. As Lemma D.1 shows, a relu network with four neurons can fit this relatively complicated decision boundary. On the other hand, for kernel methods, we prove that the symmetries in D induce cancellation in feature space. As a result, the features are less predictive of the true label and the margin will therefore be small. We formalize this argument in Section D.1. Gap in regression setting: We are able to prove an even larger Ω( √ n/d) gap between neural networks and kernel methods in the regression setting where we wish to interpolate continuous labels. Analogously to the classification setting, optimizing a regularized squared error loss on neural networks is equivalent to solving a minimum 1-norm regression problem (see Theorem D.5). Furthermore, kernel methods correspond to a minimum 2-norm problem. We construct distributions pdata where the 1-norm solution will have a generalization error bound of O( √ d/n), whereas the 2- norm solution will have a generalization error bound that is Ω(1) and thus vacuous. In Section D.2, we define the 1-norm and 2-norm regression problems. In Theorem D.10 we formalize our construction. 5 PERTURBED WASSERSTEIN GRADIENT FLOW FINDS GLOBAL OPTIMIZERS IN POLYNOMIAL TIME In the prior section, we studied the limiting behavior of the generalization of a two-layer network as its width goes to infinity. In this section, we will now study the limiting behavior of the optimization algorithm, gradient descent. Prior work (Mei et al., 2018; Chizat & Bach, 2018) has shown that as the hidden layer size grows to infinity, gradient descent for a finite neural network approaches the Wasserstein gradient flow over distributions of hidden units (defined in equation 5.1). Chizat & Bach (2018) assume the gradient flow converges, a non-trivial assumption since the space of distributions is infinite-dimensional, and given the assumption prove that Wasserstein gradient flow converges to a global optimizer in this setting, but do not specify a convergence rate. Mei et al. (2018) show global convergence for the infinite-neuron limit of stochastic Langevin dynamics, but also do not provide a convergence rate. We show that a perturbed version of Wasserstein gradient flow converges in polynomial time. The informal take-away of this section is that a perturbed version of gradient descent converges in polynomial time on infinite-size neural networks (for the right notion of infinite-size.) Formally, we optimize the following functional over distributions ρ on Rd+1: L[ρ] , R (∫ Φdρ ) + ∫ V dρ where Φ : Rd+1 → Rk, R : Rk → R, and V : Rd+1 → R. In this work, we consider 2-homogeneous Φ and V . We will additionally require that R is convex and nonnegative and V is positive on the unit sphere. Finally, we need standard regularity assumptions on R,Φ, and V : Assumption 5.1 (Regularity conditions on Φ, R, V ). Φ and V are differentiable as well as upper bounded and Lipschitz on the unit sphere. R is Lipschitz and its Hessian has bounded operator norm. 
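For building intuition it is useful to think of ρ as an empirical distribution over finitely many particles, in which case L[ρ] is directly computable. Below is a minimal sketch with placeholder 2-homogeneous Φ and V and a convex nonnegative R; all three are chosen only for illustration and are not the neural-network instantiation, which appears in Example 5.2 below.

```python
# Sketch: representing rho by m equally weighted particles theta_1..theta_m in R^{d+1}
# and evaluating L[rho] = R(int Phi drho) + int V drho. Phi, V, R are simple
# placeholder choices (2-homogeneous Phi and V, convex nonnegative R), not the
# neural-network instantiation of Example 5.2.
import numpy as np

rng = np.random.default_rng(0)
d, k, m = 5, 3, 1000
A = rng.standard_normal((k, d + 1, d + 1))           # fixed tensors defining Phi

def Phi(theta):                                      # Phi: R^{d+1} -> R^k, 2-homogeneous
    return np.einsum("kij,i,j->k", A, theta, theta)

def V(theta):                                        # V: R^{d+1} -> R, 2-homogeneous
    return np.dot(theta, theta)

def R(a):                                            # R: R^k -> R, convex and nonnegative
    return np.sum(np.log(1.0 + np.exp(-a)))

particles = rng.standard_normal((m, d + 1)) / np.sqrt(d + 1)

mean_Phi = np.mean([Phi(t) for t in particles], axis=0)   # int Phi drho
mean_V = np.mean([V(t) for t in particles])               # int V drho
print("L[rho] =", R(mean_Phi) + mean_V)
```

Example 5.2 below replaces these placeholders with the logistic-loss neural-network instantiation.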
We provide more details on the specific parameters (for boundedness, Lipschitzness, etc.) in Section E.1. We note that relu networks satisfy every condition but differentiability of Φ.5 We can fit a neural network under our framework as follows: Example 5.2 (Logistic loss for neural networks). We interpret ρ as a distribution over the parameters of the network. Let k , n and Φi(θ) , wφ(u>xi) for θ = (w, u). In this case, ∫ Φdρ is a distributional neural network that computes an output for each of the n training examples (like a standard neural network, it also computes a weighted sum over hidden units). We can compute the distributional version of the regularized logistic loss in equation 3.2 by setting V (θ) , λ‖θ‖22 and R(a1, . . . , an) , ∑n i=1 log(1 + exp(−yiai)). We will define L′[ρ] : Rd+1 → R with L′[ρ](θ) , 〈R′( ∫ Φdρ),Φ(θ)〉 + V (θ) and v[ρ](θ) , −∇θL′[ρ](θ). Informally, L′[ρ] is the gradient of L with respect to ρ, and v is the induced velocity field. For the standard Wasserstein gradient flow dynamics, ρt evolves according to d dt ρt = −∇ · (v[ρt]ρt) (5.1) where ∇· denotes the divergence of a vector field. For neural networks, these dynamics formally define continuous-time gradient descent when the hidden layer has infinite size (see Theorem 2.6 of Chizat & Bach (2018), for instance). We propose the following modification of the Wasserstein gradient flow dynamics: d dt ρt = −σρt + σUd −∇ · (v[ρt]ρt) (5.2) where Ud is the uniform distribution on Sd. In our perturbed dynamics, we add very small uniform noise over Ud, which ensures that at all time-steps, there is sufficient mass in a descent direction for the algorithm to decrease the objective. For infinite-size neural networks, one can informally interpret this as re-initializing a very small fraction of the neurons at every step of gradient descent. We prove convergence to a global optimizer in time polynomial in 1/ , d, and the regularity parameters. Theorem 5.3 (Theorem E.4 with regularity parameters omitted). Suppose that Φ and V are 2- homogeneous and the regularity conditions of Assumption 5.1 are satisfied. Also assume that from starting distribution ρ0, a solution to the dynamics in equation 5.2 exists. Define L? , infρ L[ρ]. Let > 0 be a desired error threshold and choose σ , exp(−d log(1/ )poly(k, L[ρ0]− L?)) and t , d 2 4 poly(log(1/ ), k, L[ρ0]− L ?), where the regularity parameters for Φ, V , and R are hidden in the poly(·). Then, perturbed Wasserstein gradient flow converges to an -approximate global minimum in t time: min 0≤t≤t L[ρt]− L? ≤ . We provide a theorem statement that includes regularity parameters in Section E.1. We prove the theorem in Section E.2. As a technical detail, Theorem 5.3 requires that a solution to the dynamics exists. We can remove this assumption by analyzing a discrete-time version of equation 5.2: ρt+1 , ρt + η(−σρt + σUd −∇ · (v[ρt]ρt)) and additionally assuming Φ and V have Lipschitz gradients. In this setting, a polynomial time convergence result also holds. We state the result in Section E.3. An implication of our Theorem 5.3 is that for infinite networks, we can optimize the weaklyregularized logistic loss in time polynomial in the problem parameters and λ−1. By Theorem 2.2, we only require λ−1 = poly(n) to approximate the maximum margin within a constant factor. Thus, for infinite networks, we can approximate the max margin within a constant factor in polynomial time. 5The relu activation is non-differentiable at 0 and hence the gradient flow is not well-defined. 
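The informal reading of the perturbed dynamics given above (gradient descent on a very wide two-layer network in which a small random fraction of the neurons is re-initialized at every step) suggests the following finite-particle sketch of the discrete-time update. Everything here (data, width, step size, noise rate, regularization, and re-initialization scale) is an illustrative assumption rather than the discretization analyzed in Section E.3.

```python
# Sketch: a finite-particle analogue of the perturbed dynamics (eq. 5.2) for the
# two-layer relu instantiation of Example 5.2. Each hidden unit is a particle
# theta_j = (w_j, u_j) in R^{d+1}; every step we take a gradient step on
# R(sum_j Phi(theta_j)/m) + mean_j V(theta_j) and re-initialize a small random
# fraction of particles uniformly on the sphere S^d. All hyperparameters
# (width, step size, noise rate, lambda, data) are illustrative assumptions.
import torch

torch.manual_seed(0)
n, d, m = 40, 2, 2000
lam, lr, reinit_frac, steps = 1e-3, 0.05, 1e-3, 3000
X = torch.randn(n, d)
y = torch.sign(X[:, 0] * X[:, 1])

theta = torch.nn.Parameter(torch.randn(m, d + 1) / (d + 1) ** 0.5)  # rows are (w_j, u_j)

def loss_fn(theta):
    w, U = theta[:, 0], theta[:, 1:]                  # top weights and hidden directions
    scores = torch.relu(X @ U.T) @ w / m              # (1/m) sum_j w_j relu(u_j^T x)
    logistic = torch.nn.functional.softplus(-y * scores).sum()   # R(int Phi drho)
    return logistic + lam * (theta ** 2).sum(dim=1).mean()       # + int V drho, V = lam ||theta||^2

for t in range(steps):
    loss = loss_fn(theta)
    loss.backward()
    with torch.no_grad():
        theta -= lr * theta.grad
        theta.grad.zero_()
        # perturbation: re-initialize a small random subset of particles on S^d
        mask = torch.rand(m) < reinit_frac
        if mask.any():
            fresh = torch.randn(int(mask.sum()), d + 1)
            theta[mask] = fresh / fresh.norm(dim=1, keepdim=True)
    if t % 500 == 0:
        print(t, loss.item())
```

The re-initialization fraction plays the role of σ in equation 5.2, which Theorem 5.3 takes to be very small.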
Chizat & Bach (2018) acknowledge this same difficulty with relu. 6 SIMULATIONS We first compare the generalization of neural networks and kernel methods for classification and regression. In Figure 2 we plot the generalization error and predicted generalization upper bounds6 of a trained neural network against a `2 kernel method with relu features as we vary n. Our data comes from a synthetic distribution generated by a neural network with 6 hidden units; we provide a detailed setup in Section F.1. For classification we plot 0-1 error, whereas for regression we plot squared error. The variance in the neural network generalization bound for classification likely occured because we did not tune learning rate and training time, so the optimization failed to find the best margin. The plots show that two-layer networks clearly outperform kernel methods in test error as n grows. However, there seems to be looseness in the bounds: the kernel generalization bound appears to stay constant with n (as predicted by our theory for regression), but the test error decreases. We also plot the dependence of the test error and margin on the hidden layer size in Figure 3 for synthetic data generated from a ground truth network with 10 hidden units and also MNIST. The plots indicate that test error is decreasing in hidden layer size while margin is increasing, as Theorem 3.3 predicts. We provide more details on the experimental setup in Section F.2. In Section F.3, we verify the convergence of a simple neural network to the max-margin solution as regularization decreases. In Section F.4, we train modified WideResNet architectures on CIFAR10 and CIFAR100. Although ResNet is not homogeneous, we still report improvements in generalization from annealing the weight decay during training, versus staying at a fixed decay rate. 7 CONCLUSION We have made the case that maximizing margin is one of the inductive biases of relu networks obtained from optimizing weakly-regularized cross-entropy loss. Our framework allows us to directly analyze generalization properties of the network without considering the optimization algorithm used to obtain it. Using this perspective, we provide a simple explanation for why over-parametrization can improve generalization. It is a fascinating question for future work to characterize other generalization properties of the max-margin solution. On the optimization side, we make progress towards understanding over-parametrized gradient descent by analyzing infinite-size neural networks. A natural direction for future work is to apply our theory to optimize the margin of finite-sized neural networks. 6We compute the leading term that is linear in the norm or inverse margin from the bounds in Proposition 3.1 and Lemmas 4.2, D.8, and D.9. A MISSING PROOFS IN SECTION 2 We first show that Lλ does indeed have a global minimizer. Claim A.1. In the setting of Theorems 2.1 and A.3, arg minΘ Lλ(Θ) exists. Proof. We will argue in the setting of Theorem 2.1 where Lλ is the multi-class cross entropy loss, because the logistic loss case is analogous. We first note that Lλ is continuous in Θ because f is continuous in Θ and the term inside the logarithm is always positive. Next, define b , infΘ Lλ(Θ) > 0. Then we note that for ‖Θ‖ > (b/λ)1/r , M , we must have Lλ(Θ) > b. It follows that inf‖Θ‖≤M Lλ(Θ) = infΘ Lλ(Θ). However, there must be a value Θλ which attains inf‖Θ‖≤M Lλ(Θ), because {Θ : ‖Θ‖ ≤ M} is a compact set and Lλ is continuous. Thus, infΘ Lλ(Θ) is attained by some Θλ. 
A.1 MISSING PROOFS FOR MULTI-CLASS SETTING Towards proving Theorem 2.1, we first show as we decrease λ, the norm of the solution ‖Θλ‖ grows. Lemma A.2. In the setting of Theorem 2.1, as λ→ 0, we have ‖Θλ‖ → ∞. To prove Theorem 2.1, we rely on the exponential scaling of the cross entropy: Lλ can be lower bounded roughly by exp(−‖Θλ‖γλ), but also has an upper bound that scales with exp(−‖Θλ‖γ?). By Lemma A.2, we can take large ‖Θλ‖ so the gap γ?−γλ vanishes. This proof technique is inspired by that of Rosset et al. (2004a). Proof of Theorem 2.1. For any M > 0 and Θ with γΘ , mini ( f(Θ̄;xi)−maxj 6=yi f(Θ̄;xi) ) , Lλ(MΘ) = 1 n n∑ i=1 − log exp(M afyi(Θ;xi))∑l j=1 exp(M afj(Θ;xi)) + λMr‖Θ‖r (by the homogeneity of f ) = 1 n n∑ i=1 − log 1 1 + ∑ j 6=yi exp(M a(fj(Θ;xi)− fyi(Θ;xi))) + λMr‖Θ‖r (A.1) ≤ log(1 + (l − 1) exp(−MaγΘ)) + λMr‖Θ‖r (A.2) We can also apply ∑ j 6=yi exp(M a(fj(Θ;xi) − fyi(Θ;xi))) ≥ max exp(Ma(fj(Θ;xi) − fyi(Θ;xi))) = exp γΘ in order to lower bound equation A.1 and obtain Lλ(MΘ) ≥ 1 n log(1 + exp(−MaγΘ)) + λMr‖Θ‖r (A.3) Applying equation A.2 with M = ‖Θλ‖ and Θ = Θ?, noting that ‖Θ?‖ ≤ 1, we have: Lλ(Θ ?‖Θλ‖) ≤ log(1 + (l − 1) exp(−‖Θλ‖aγ?)) + λ‖Θλ‖r (A.4) Next we lower bound Lλ(Θλ) by applying equation A.3, Lλ(Θλ) ≥ 1 n log(1 + exp(−‖Θλ‖aγλ)) + λ‖Θλ‖r (A.5) Combining equation A.4 and equation A.5 with the fact that Lλ(Θλ) ≤ Lλ(Θ?‖Θλ‖) (by the global optimality of Θλ), we have ∀λ > 0, n log(1 + (l − 1) exp(−‖Θλ‖aγ?)) ≥ log(1 + exp(−‖Θλ‖aγλ)) Recall that by Lemma A.2, as λ → 0, we have ‖Θλ‖ → ∞. Therefore, exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ) → 0. Thus, we can apply Taylor expansion to the equation above with respect to exp(−‖Θλ‖aγ?) and exp(−‖Θλ‖aγλ). If max{exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ)} < 1, then we obtain n(l − 1) exp(−‖Θλ‖aγ?) ≥ exp(−‖Θλ‖aγλ)−O(max{exp(−‖Θλ‖aγ?)2, exp(−‖Θλ‖aγλ)2}) We claim this implies that γ? ≤ lim infλ→0 γλ. If not, we have lim infλ→0 γλ < γ? , which implies that the equation above is violated with sufficiently large ‖Θλ‖ (‖Θλ‖ log(2(`− 1)n)1/a would suffice). By Lemma A.2, ‖Θλ‖ → ∞ as λ→ 0 and therefore we get a contradiction. Finally, we have γλ ≤ γ? by definition of γ?. Hence, limλ→0 γλ exists and equals γ?. Now we fill in the proof of Lemma A.2. Proof of Lemma A.2. For the sake of contradiction, we assume that ∃C > 0 such that for any λ0 > 0, there exists 0 < λ < λ0 with ‖Θλ‖ ≤ C. We will determine the choice of λ0 later and pick λ such that ‖Θλ‖ ≤ C. Then the logits (the prediction fj(Θ, xi) before softmax) are bounded in absolute value by some constant (that depends on C), and therefore the loss function − log exp(fyi (Θ;xi))∑l j=1 exp(fj(Θ;xi)) for every example is bounded from below by some constant D > 0 (depending on C but not λ.) Let M = λ−1/(r+1), we have that 0 < D ≤ Lλ(Θλ) ≤ Lλ(MΘ?) (by the optimality of Θλ) ≤ − log 1 1 + (l − 1) exp(−Maγ?) + λMr (by equation A.2) = log(1 + (l − 1) exp(−λ−a/(r+1)γ?)) + λ1/(r+1) ≤ log(1 + (l − 1) exp(−λ−a/(r+1)0 γ?)) + λ 1/(r+1) 0 Taking a sufficiently small λ0, we obtain a contradiction and complete the proof. A.2 FULL BINARY CLASSIFICATION SETTING For completeness, we state and prove our max-margin results for the setting where we fit binary labels yi ∈ {−1,+1} (as opposed to indices in [l]) and redefining f(Θ; ·) to assign a single real-valued score (as opposed to a score for each label). This lets us work with the simpler λ-regularized logistic loss: Lλ(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖r As before, let Θλ ∈ arg minLλ(Θ), and define the normalized margin γλ by γλ , mini yif(Θ̄λ;xi). 
Define the maximum possible normalized margin γ? , max ‖Θ‖≤1 min i yif(Θ;xi) (A.6) Theorem A.3. Assume γ? > 0 in the binary classification setting with logistic loss. Then as λ→ 0, γλ → γ?. The proof follows via simple reduction to the multi-class case. Proof of Theorem A.3. We prove this theorem via reduction to the multi-class case with l = 2. Construct f̃ : Rd → R2 with f̃1(Θ;xi) = − 12f(Θ;xi) and f̃2(Θ;xi) = 1 2f(Θ;xi). Define new labels ỹi = 1 if yi = −1 and ỹi = 2 if yi = 1. Now note that f̃ỹi(Θ;xi)−f̃j 6=ỹi(Θ;xi) = yif(Θ;xi), so the multi-class margin for Θ under f̃ is the same as binary margin for Θ under f . Furthermore, defining L̃λ(Θ) , 1 n n∑ i=1 − log exp(f̃ỹi(Θ;xi))∑2 j=1 exp(f̃j(Θ;xi)) + λ‖Θ‖r we get that L̃λ(Θ) = Lλ(Θ), and in particular, L̃λ and Lλ have the same set of minimizers. Therefore we can apply Theorem 2.1 for the multi-class setting and conclude γλ → γ? in the binary classification setting. A.3 MISSING PROOF FOR OPTIMIZATION ACCURACY Proof of Theorem 2.2. Choose B , ( 1 γ? log (l−1)(γ?)r/a λ )1/a . We can upper bound Lλ(Θ′) by computing Lλ(Θ ′) ≤ βLλ(Θλ) ≤ βLλ(BΘ?) ≤ β log(1 + (l − 1) exp(−Baγ?)) + βλBr (by equation A.2) ≤ β(l − 1) exp(−Baγ?) + βλBr (using log(1 + x) ≤ x) ≤ β λ (γ?)r/a + βλ ( 1 γ? log (l − 1)(γ?)r/a λ )r/a ≤ β λ (γ?)r/a ( 1 + ( log (l − 1)(γ?)r/a λ )r/a) , L(UB) Furthermore, it holds that ‖Θ′‖r ≤ L (UB) λ . Now we note that Lλ(Θ ′) ≤ L(UB) ≤ 2β λ (γ?)r/a ( log (l − 1)(γ?)r/a λ )r/a ≤ 1 2n for sufficiently large c depending only on a/r. Now using the fact that log(x) ≥ x1+x ∀x ≥ −1, we additionally have the lower bound Lλ(Θ′) ≥ 1n log(1 + exp(−γ ′‖Θ′‖a)) ≥ 1n exp(−γ′‖Θ′‖a) 1+exp(−γ′‖Θ′‖a) . Since L(UB) ≤ 1, we can rearrange to get γ′ ≥ − log nLλ(Θ ′) 1−nLλ(Θ′) ‖Θ′‖a ≥ − log nL (UB) 1−nL(UB) ‖Θ′‖a ≥ − log(2nL (UB)) ‖Θ′‖a The middle inequality followed because x1−x is increasing in x for 0 ≤ x < 1, and the last because L(UB) ≤ 12n . Since − log 2nL (UB) > 0 we can also apply the bound ‖Θ′‖r ≤ L (UB) λ to get γ′ ≥ −λ a/r log 2nL(UB) (L(UB))a/r = − log ( 2nβ λ (γ?)r/a ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)) βa/r γ? ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r (by definition of L(UB)) ≥ γ ? βa/r log( (γ ?)r/a 2βnλ )( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♣ − log ( 1 + ( log (l−1)(γ ?)r/a λ )r/a) ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♥ We will first bound ♣. First note that log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ = log (γ ?)r/a λ − log 2βn log (γ ?)r/a λ + log(l − 1) ≥ log (γ ?)r/a λ − log 2βn(l − 1) log (γ ?)r/a λ ≥ c− 3 c (A.7) where the last inequality follows from the fact that (γ ?)r/a λ ≥ n c(l − 1)c and β ≤ 2. Next, using the fact that log (γ ?)r/a λ ≥ 1 (2r/a−1)a/r , we note that( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)a/r ≤ ( 1 + ( 1 (2r/a − 1)a/r )−r/a)a/r ≤ 2 (A.8) Combining equation A.7 and equation A.8, we can conclude that ♣ = log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ ( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)−a/r ≥ c− 3 2c Finally, we note that if 1 + ( log (l−1)(γ ?)r/a λ )r/a is a sufficiently large constant that depends only on a/r (which can be achieved by choosing c sufficiently large), it will follow that ♥ ≤ 110 . Thus, if c ≥ 5, we can combine our bounds on ♣ and ♥ to get that γ′ ≥ γ ? 10βa/r B MISSING PROOF OF PROPOSITION 4.1 Proposition 4.1 follows simply from applying Corollary 1 of Neyshabur et al. (2014) to a hard-margin SVM problem. For completeness, we provide another proof here. 
The proof of Proposition 4.1 will consist of two steps: first, show that equation 4.2 has an optimal solution with sparsity n, and second, show that sparse solutions to equation 4.2 can be mapped to a neural network with the same margin, and vice versa. The following lemma and proof are based on Lemma 14 of Tibshirani (2013). Lemma B.1. Let supp(α) , {ū : |α(ū)| > 0}. There exists an optimal solution α? to equation 4.2 with |supp(α?)| ≤ n. For the proof of this lemma, we find it convenient to work with a minimum norm formulation which we show is equivalent to equation 4.2: min α ‖α‖1 subject to yi〈α,ϕ(xi)〉 ≥ 1 ∀i (B.1) Claim B.2. Let S ⊂ L1(Sd−1) be the set of optimizers for equation 4.2, and let S′ ⊂ L1(Sd−1) be the set of optimizers for equation B.1. If equation B.1 is feasible, for any α ∈ S, αγ`1 ∈ S ′, and for any α′ ∈ S′, α ′ ‖α′‖1 ∈ S. Proof. Let opt′ denote the optimal objective for equation B.1. We note that α ′ ‖α′‖1 is feasible for equation 4.2 with objective 1opt′ , and therefore γ`1 ≥ 1 opt′ . Furthermore, 1 2γ`1 yi ∫ ū∈Sd−1 α(ū)φ(ū >xi)dū ≥ 1 ∀i, and so αγ`1 is feasible for equation B.1 with objective 1 γ`1 . Therefore, opt′ ≤ 1γ`1 . As a result, it must hold that opt ′ = 1γ`1 , which means that α ′ ‖α′‖1 is optimal for equation 4.2, and αγ`1 is optimal for equation B.1, as desired. First, note that if equation B.1 is not feasible, then γ`1 = 0 and equation 4.2 has a trivial sparse solution, the all zeros function. Thus, it suffices to show that an optimal solution to equation B.1 exists that is n-sparse, since by Lemma B.2 equation B.1 and equation 4.2 have equivalent solutions up to a scaling. We begin by taking the dual of equation B.1. Claim B.3. The dual of equation B.1 has form max λ∈Rn λ>~1 subject to ∣∣∣∣∣ n∑ i=1 λiyiφ(ū >xi) ∣∣∣∣∣ ≤ 1 ∀ū ∈ Sd−1 λi ≥ 0 For any primal optimal solution α? and dual optimal solution λ?, it must hold that n∑ i=1 λ?i yiφ(ū >xi) = sign(α ?(ū)) ⇐⇒ α?(ū) 6= 0 (B.2) Proof. The dual form can be solved for by computation. By strong duality, equation B.2 must follow from the KKT conditions. Now define the mapping v : Sd−1 → Rn with vi(ū) , yiφ(ū>xi). We will show a general result about linearly dependent v(ū) for ū ∈ supp(α?), after which we can reduce directly to the proof of Tibshirani (2013). Claim B.4. Let α? be any optimal solution. Suppose that there exists S ⊆ supp(α?) such that {v(ū) : ū ∈ S} forms a linearly dependent set, i.e.∑ ū∈S cūv(ū) = ~0 (B.3) for coefficients c. Then ∑ ū∈S cū sign(α ?(ū)) = 0. Proof. Let λ? be any dual optimal solution, then λ?>v(ū) = sign(α?(ū)) ∀ū ∈ supp(α?) by Claim B.3. Thus, we apply λ?> to both sides of equation B.3 to get the desired statement. Proof of Lemma B.1. The rest of the proof follows Lemma 14 in Tibshirani (2013). The lemma argues that if the conclusion of Claim B.4 holds and an optimal solution α? has S ⊆ supp(α?) with {v(ū) : ū ∈ S} linearly dependent, we can construct a new α′ with ‖α′‖1 = ‖α?‖1 and supp(α′) ⊂ supp(α?) (where the inclusion is strict). Thus, if we consider an optimal α? with minimal support, it must follow that {v(ū) : ū ∈ supp(α?)} is a linearly independent set, and therefore |supp(α?)| ≤ n. We can now complete the proof of Proposition 4.1. Proof of Proposition 4.1. For ease of notation, we will parametrize a two-layer network with m units by top layer weights w1, . . . , wm ∈ R and bottom layer weights u1, . . . , um ∈ Rd. 
As before, we use Θ to refer to the collection of parameters, so the network computes the real-valued function f(Θ;x) = m∑ j=1 wjφ(u > j x) Note that we simply renamed the variables from the parametrization of equation 3.1. We first apply Lemma B.1 to conclude that equation 4.2 admits a n-sparse optimal solution α?. Because of sparsity, we can now abuse notation and treat α? as a real-valued function such that∑ ū∈supp(α?) |α?(ū)| ≤ 1. We construct Θ corresponding to a two-layer network with m ≥ n hidden units and normalized margin at least γ`12 . For clarity, we let W correspond to the top layer weights and U correspond to the bottom layer weights. For every ū ∈ supp(α), we let Θ have a corresponding hidden unit j with (wj , uj) = ( sign(α?(ū)) √ |α?(ū)| 2 , √ |α?(ū)| 2 ū ) , and set the remaining hidden units to ~0. This is possible because m ≥ n. Now f(Θ;x) = m∑ j=1 wjφ(u > j x) = 1 2 ∑ ū∈supp(α?) α?(ū)φ(ū>x) Furthermore, ‖Θ‖22 = m∑ j=1 w2j + ‖uj‖22 = ∑ ū∈supp(α) |α?(ū)| 2 + |α?(ū)| 2 ‖ū‖22 = ∑ ū∈supp(α) |α?(ū)| ≤ 1 Thus it follows that Θ has normalized margin at least γ`1/2, so γ ?,m ≥ γ`1/2. To conclude, we show that γ?,m ≤ γ`1/2. Let Θ?,m denote the parameters obtaining optimal m-unit margin γ?,m with hidden units (w?,mj , u ?,m j ) for j ∈ [m]. We can construct α to put a scaled delta mass of 2w?,mj ‖u ?,m j ‖2 on ū ?,m j for j ∈ [m]. It follows that ‖α‖1 = m∑ j=1 2|w?,mj |‖u ?,m j ‖2 ≤ m∑ j=1 w?,mj 2 + ‖u?,mj ‖ 2 2 = ‖Θ?,m‖22 ≤ 1 Furthermore, ∫ Sd−1 α(ū)φ(ū>x) = 2 m∑ j=1 w?,mj ‖u ?,m j ‖2φ((ū ?,m j ) >x) = 2 m∑ j=1 w?,mj φ(u ?,m j > x) = 2f(Θ?,m;x) Thus, α is a feasible solution to equation 4.2 with objective value at least 2γ?,m. Therefore, γ`1 ≥ 2γ?,m, so γ?,m = γ`1/2. C RADEMACHER COMPLEXITY AND GENERALIZATION ERROR We prove the generalization error bounds stated in Proposition 3.1 and Lemma 4.2 via Rademacher complexity and margin theory. Assume that our data X,Y are drawn i.i.d. from ground truth distribution pdata supported on X × Y . For some hypothesis classF of real-valued functions, we define the empirical Rademacher complexity R̂(F) as follows: R̂(F) , 1 n E i [ sup f∈F n∑ i=1 if(xi) ] where i are independent Rademacher random variables. For a classifier f , following the notation of Section 3 we will use L(f) , Pr(x,y)∼pdata(yf(x) ≤ 0) to denote the population 0-1 loss of the classifier f . The following classical theorem (Koltchinskii et al., 2002), (Kakade et al., 2009) bounds generalization error in terms of the Rademacher complexity and margin loss. Theorem C.1 (Theorem 2 of Kakade et al. (2009)). Let (xi, yi)ni=1 be drawn iid from pdata. We work in the binary classification setting, so Y = {−1, 1}. Assume that for all f ∈ F , we have supx∈X f(x) ≤ C. Then with probability at least 1− δ over the random draws of the data, for every γ > 0 and f ∈ F , L(f) ≤ 1 n n∑ i=1 1(yif(xi) < γ) + 4R̂(F) γ + √ log log2 4C γ n + √ log(1/δ) 2n C.1 PROOF OF PROPOSITION 3.1 We will prove Proposition 3.1 by applying the Rademacher complexity bounds of Golowich et al. (2017) with Theorem C.1. First, we show the following lemma bounding the generalization of neural networks whose weight matrices have bounded Frobenius norms. Lemma C.2. Define the hypothesis class FK over depth-K neural networks by FK = { f(Θ; ·) : ‖Wj‖F ≤ 1√ K ∀j } Let C , supx∈X ‖x‖2. Recall that L(Θ) denotes the 0-1 population loss L(f(Θ; ·)). Then for any f(Θ; ·) ∈ FK classifying the training data correctly with unnormalized margin γΘ , mini yif(Θ;xi) > 0, with probability at least 1− δ, L(Θ) . 
C γΘK(K−1)/2 √ n + √ log log2 4C γΘ n + √ log(1/δ) n (C.1) Note the dependence on the unnormalized margin rather than the normalized margin. Proof. We first claim that supf(Θ;·)∈FK supx∈X f(Θ;x) ≤ C. To see this, for any f(Θ; ·) ∈ FK , f(Θ;x) = WKφ(· · ·φ(W1x) · · · ) ≤ ‖WK‖F ‖φ(WK−1φ(· · ·φ(W1x) · · · )‖2 ≤ ‖WK‖F ‖WK−1φ(· · ·φ(W1x) · · · )‖2 (since φ is 1-Lipschitz and φ(0) = 0, so φ performs a contraction) < ‖x‖2 ≤ C (repeatedly applying this argument and using ‖Wj‖F < 1) Furthermore, by Theorem 1 of Golowich et al. (2017), R̂(FK) has upper bound R̂(FK) . C K(K−1)/2 √ n Thus, we can apply Theorem C.1 to conclude that for all f(Θ; ·) ∈ FK and all γ > 0, with probability 1− δ, L(Θ) . 1 n n∑ i=1 1(yif(Θ;xi) < γ) + C γK(K−1)/2 √ n + √ log log2 4C γ n + √ log(1/δ) n In particular, by definition choosing γ = γΘ makes the first term on the LHS vanish and gives the statement of the lemma. Proof of Proposition 3.1. Given parameters Θ = (W1, . . . ,WK), we first construct parameters Θ̃ = (W̃1, . . . , W̃K) such that f(Θ̄; ·) and f(Θ̃; ·) compute the same function, and ‖W̃1‖2F = ‖W̃2‖2F = · · · = ‖W̃K‖2F ≤ 1K . To do this, we set W̃j = ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F Wj By construction ‖W̃j‖2F = ( ∏K k=1 ‖Wk‖2F )1/k ‖Θ‖2F = ( ∏K k=1 ‖Wk‖2F )1/k∑K k=1 ‖Wk‖2F ≤ 1 k (by the AM-GM inequality) Furthermore, we also have f(Θ̃;x) = W̃Kφ(· · ·φ(W̃1x) · · · ) = K∏ j=1 ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F WKφ(· · ·φ(W1x) · · · ) (by the homogeneity of φ) = 1 ‖Θ‖KF f(Θ;x) = f ( Θ ‖Θ‖F ;x ) (since f is K-homogeneous in Θ) = f(Θ̄;x) Now we note that by construction, L(Θ) = L(Θ̃). Now f(Θ̃; ·) must also classify the training data perfectly, has unnormalized margin γ, and furthermore f(Θ̃; ·) ∈ FK . As a result, Lemma C.2 allows us to conclude the desired statement. To conclude Corollary 3.2, we apply the above on Θλ,M and use Theorem A.3. C.2 PROOF OF KERNEL GENERALIZATION BOUNDS Let F2,φB denote the class of `2-bounded linear functionals in lifted feature space: F 2,φ B , {x 7→ 〈α,ϕ(x)〉 : α ∈ L2(Sd−1), ‖α‖2 ≤ B}. We abuse notation and write α ∈ F2,φB to indicate a linear functional from F2,φB . As before, we will use L(α) to indicate the 0-1 population loss of the classifier x 7→ 〈α,ϕ(x)〉 and let C , supx∈X ‖x‖2 be an upper bound on the norm of the data. We focus on analyzing the Rademacher complexity R̂(F2,φB ), mirroring derivations done in the past (Bartlett & Mendelson, 2002). We include our derivations here for completeness. Lemma C.3. R̂(F2,φB ) ≤ 1 nB √∑n i=1 ‖ϕ(xi)‖22. Proof. We write R̂(F2,φB ) = 1 n E i [ sup α∈F2,φB 〈α, n∑ i=1 iϕ(xi)〉 ] ≤ 1 n E i [ sup α∈F2,φB ‖α‖2 ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B · E i [∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B √√√√√E i ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 2 (via Jensen’s inequality) ≤ 1 n B √√√√√E i n∑ i=1 n∑ j=1 i j〈ϕ(xi), ϕ(xi)〉 ≤ 1 n B √√√√ n∑ i=1 ‖ϕ(xi)‖22 (terms where i 6= j cancel out) As an example, we can apply this bound to relu features: Corollary C.4. Suppose that φ is the relu activation. Let κ , Vol(Sd−1). Then R̂(F2,φB ) . B‖X‖F √ κ n √ d ≤ BC √ κ√ dn . Proof. We first show that ‖ϕ(xi)‖22 = Θ ( κ d‖xi‖ 2 2 ) . We can compute ‖ϕ(xi)‖22 = Vol(Sd−1)Eū∼Sd−1 [relu(ū>xi)2] = κ d Eū∼Sd−1 [relu( √ dū>xi) 2] = κ d 1 M2 Eu∼N (0,Id×d)[relu(u Txi) 2] (M2 is the second moment of N (0, 1)) = Θ (κ d ‖xi‖22 ) (C.2) where the last line uses the computation provided in Lemma A.1 by Du et al. (2017). Now we plug this into Lemma C.3 to get the desired bound. We will now prove Lemma 4.2. Proof of Lemma 4.2. From equation C.2, we first obtain supx∈X ‖ϕ(x)‖2 . C √ κ d . 
Denote the optimizer for equation 4.3 by α`2 . Note that √ κα`2 ∈ F 2,φ 1 , and furthermore L(α`2) = L( √ κα`2). Since √ κα`2 has unnormalized margin √ κγ`2 , we apply Theorem C.1 on margin √ κγ`2 and hypothesis class F2,φ1 to get with probability 1− δ, L`2-svm = L( √ κα`2) ≤ 4R̂(F2,φ1 )√ κγ`2 + √ log log2 4 supx∈X ‖ϕ(x)‖2√ κγ`2 n + √ log(1/δ) 2n . C γ`2 √ dn + √√√√ log max{log2 C√dγ`2 , 2} n + √ log(1/δ) n (applying Corollary C.4) D MISSING PROOFS FOR COMPARISON TO KERNEL METHODS D.1 CLASSIFICATION In this section we will complete a proof of Theorem 4.3. Recall the construction of the distribution D provided in Section 4. We first provide a classifier of this data with small `1 norm. Lemma D.1. In the setting of Theorem 4.3, we have that γ`1 ≥ √ 2 4 . Proof. Consider the network f(x) = 14 ( (x>(e1 +e2)/ √ 2)+ +(x >(−e1−e2)/ √ 2)+− (x>(−e1 + e2)/ √ 2)+ − (x>(e1 − e2)/ √ 2)+ ) . The attained margin γ = √ 2 4 , so γ`1 ≥ √ 2 4 . Now we will upper bound the margin attainable by the `2 SVM. Lemma D.2 (Margin upper bound tool). In the setting of Theorem 4.3, we have γ`2 ≤ 1√ κ · ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 Proof. By the definition of γ`2 , we have that for any α with √ κ‖α‖2 ≤ 1, we have γ`2 ≤ max√ κ‖α‖2≤1 1 n n∑ i=1 〈α, yiϕ(xi)〉 Setting α = 1√ κ 1 n ∑n i=1 ϕ(xi)yi/‖ 1 n ∑n i=1 ϕ(xi)yi‖2 completes the proof. (Attentive readers may realize that this is equivalent to setting the dual variable of the convex program 4.3 to all 1’s function.) Lemma D.3. In the setting of Theorem 4.3, let (xi, yi)ni=1 be n i.i.d samples and corresponding labels from D. Let ϕ be defined in equation 4.1 with φ = relu. With high probability (at least 1− dn−10), we have ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 . √ κ/n log n+ √ κ/d Proof. Let Wi = ϕ(xi)yi. We will bound several quantities regarding Wi’s. In the rest of the proof, we will condition on the event E that ∀i, ‖xi‖22 . d log n. Note that E is a high probability event and conditioned on E, xi’s are still independent. We omit the condition on E in the rest of the proof for simplicity. We first show that assuming the following three inequalities that the conclusion of the Lemma follows. 1. ∀i, ‖Wi‖22 . κ log n . 2. σ2 , Var[ ∑ iWi] , ∑n i=1 E[‖Wi − EWi‖22] . nκ log n 3. ‖E [ ∑ Wi] ‖2 . √ κn/d. By bullets 1, 2, and Bernstein inequality, we have that with probability at least 1− dn−10 over the randomness of the data (X,Y ),∥∥∥∥∥ n∑ i=1 Wi − E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 . √ κ log1.5 n+ √ nκ log2 n . √ nκ log2 n By bullet 3 and equation above, we complete the proof with triangle inequality:∥∥∥∥∥ n∑ i=1 Wi ∥∥∥∥∥ 2 ≤ ∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 + √ nκ log2 n . √ nκ log2 n+ √ κn/d Therefore, it suffices to prove bullets 1, 2 and 3. Note that 2 is a direct corollary of 1 so we will only prove 1 and 3. We start with 3: By the definition of the `2 norm in L2(Sd−1) and the independence of (xi, yi)’s, we can rewrite∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 2 = κ · n2 E ū∼Sd−1 [ E (x,y)∼D ϕ(x)[ū] · y ]2 (D.1) Let ū = (ū1, . . . , ūd) and ū−2 = (ū3, . . . , ūd) ∈ Rd−2, and define τ
1. What is the focus and contribution of the paper regarding implicit bias and regularized cross entropy loss?
2. What are the strengths and weaknesses of the proposed techniques and ideas in the paper?
3. Do you have any questions or concerns regarding the novelty and comparison with prior works in sections 2 and 3.1?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. What are the main proof ideas of Theorem 4.3, and why is the perturbation needed?
6. What is the size of the network used in the experiments of Figure 3, and how does it compare to the ground truth network?
7. Is there any confusion or concern regarding the product of Frobenius norms being replaced with a sum in the new generalization bound (Proposition 3.1)?
Review
Review

This paper studies the implicit bias of minimizers of a regularized cross entropy loss of a two-layer network with ReLU activations. By combining several results, the authors obtain a generalization upper bound which does not increase with the network size. Furthermore, they show that the maximum normalized margin is, up to a scaling factor, the ℓ1-SVM margin over the lifted feature space of an infinite-size network. Finally, in a setting of infinite-sized networks, it is proved that perturbed Wasserstein gradient flow finds a global minimum in polynomial time.

I think that the results are interesting and relevant to current efforts to understand neural networks. The techniques and ideas seem promising and may be applied in more general settings. The paper is mostly clearly written, but there are some issues which I outline below.

1. It is not clear what the novelty is in sections 2 and 3.1, except for the combination of all the results to get a generalization bound which does not increase with network size (which on its own is non-trivial). Specifically:
   a. What is the technical contribution in Theorem 2.1 beyond the results of the two papers of Rosset et al. (the journal paper and the NIPS paper mentioned in the comment on missing prior work)?
   b. How does Theorem 3.1 compare with previous Rademacher bounds for neural networks which are based on the margin? In Neyshabur et al. (2018), it is shown that margin-based generalization bounds empirically increase with network size. Does this hold for the bound in Theorem 3.1?
2. In the work of Soudry et al. (2018), section 4.3, they consider deep networks with an unregularized loss and show that gradient descent converges to an ℓ2 max-margin solution under various assumptions. What is the connection between this result and the ℓ1 max-margin result in section 3.3?
3. What are the main proof ideas of Theorem 4.3? Why is the perturbation needed?
4. What is the size of the network that was trained in Section 5 in the experiments of Figure 3? Only the size of the ground truth network is mentioned.

---------Revision------------

I have read the authors' response and the other reviews. I am not changing the current review. I have one technical question: in the new generalization bound (Proposition 3.1), the authors claim that the product of Frobenius norms is replaced with a sum. However, I don't see any sum in the proof. Could the authors please clarify this?
ICLR
Title On the Margin Theory of Feedforward Neural Networks Abstract Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks. In the case of two-layer networks, an infinite-width neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time. 1 INTRODUCTION In deep learning, over-parametrization refers to the widely-adopted technique of using more parameters than necessary (Krizhevsky et al., 2012; Livni et al., 2014). Both computationally and statistically, over-parametrization is crucial for learning neural nets. Controlled experiments demonstrate that over-parametrization eases optimization by smoothing the non-convex loss surface (Livni et al., 2014; Sagun et al., 2017). Statistically, increasing model size without any regularization still improves generalization even after the model interpolates the data perfectly (Neyshabur et al., 2017b). This is surprising given the conventional wisdom on the trade-off between model capacity and generalization. In the absence of an explicit regularizer, algorithmic regularization is likely the key contributor to good generalization. Recent works have shown that gradient descent finds the minimum norm solution fitting the data for problems including logistic regression, linearized neural networks, and matrix factorization (Soudry et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a; Ji & Telgarsky, 2018). Many of these proofs require a delicate analysis of the algorithm’s dynamics, and some are not fully rigorous due to assumptions on the iterates. To the best of our knowledge, it is an open question to prove analogous results for even two-layer relu networks. (For example, the technique of Li et al. (2018) on two-layer neural nets with quadratic activations still falls within the realm of linear algebraic tools, which apparently do not suffice for other activations.) We propose a different route towards understanding generalization: making the regularization explicit. The motivations are: 1) with an explicit regularizer, we can analyze generalization without fully understanding optimization; 2) it is unknown whether gradient descent provides additional implicit regularization beyond what `2 regularization already offers; 3) on the other hand, with a sufficiently weak `2 regularizer, we can prove stronger results that apply to multi-layer relu networks. Additionally, explicit regularization is perhaps more relevant because `2 regularization is typically used in practice. Concretely, we add a norm-based regularizer to the cross entropy loss of a multi-layer feedforward neural network with relu activations. 
We show that the global minimizer of the regularized objective achieves the maximum normalized margin among all the models with the same architecture, if the regularizer is sufficiently weak (Theorem 2.1). Informally, for models with norm 1 that perfectly classify the data, the margin is the smallest difference across all datapoints between the classifier score for the true label and the next best score. We are interested in normalized margin because its inverse bounds the generalization error (see recent work (Bartlett et al., 2017; Neyshabur et al., 2017a; 2018; Golowich et al., 2017) or Proposition 3.1). Our work explains why optimizing the training loss can lead to parameters with a large margin and thus, better generalization error (see Corollary 3.2). We further note that the maximum possible margin is non-decreasing in the width of the architecture, and therefore the generalization bound of Corollary 3.2 can only improve as the size of the network grows (see Theorem 3.3). Thus, even if the dataset is already separable, it could still be useful to increase the width to achieve larger margin and better generalization. At a first glance, it might seem counterintuitive that decreasing the regularizer is the right approach. At a high level, we show that the regularizer only serves as a tiebreaker to steer the model towards choosing the largest normalized margin. Our proofs are simple, oblivious to the optimization procedure, and apply to any norm-based regularizer. We also show that an exact global minimum is unnecessary: if we approximate the minimum loss within a constant factor, we obtain the max-margin within a constant factor (Theorem 2.2). To better understand the neural network max-margin, in Section 4 we compare the max-margin two-layer network obtained by optimizing both layers jointly to kernel methods corresponding to fixing random weights for the hidden layer and solving a 2-norm max-margin on the top layer. We design a simple data distribution (Figure 1) where neural net margin is large but the kernel margin is small. This translates to an Ω( √ d) factor gap between the generalization error bounds for the two approaches and demonstrates the power of neural nets compared to kernel methods. We experimentally confirm that a gap does indeed exist. In the setting of two-layer networks, we also study how over-parametrization helps optimization. Prior works (Mei et al., 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2018; Rotskoff & Vanden-Eijnden, 2018) show that gradient descent on two-layer networks becomes Wasserstein gradient flow over parameter distributions in the limit of infinite neurons. For this setting, we prove that perturbed Wasserstein gradient flow finds a global optimizer in polynomial time. Finally, we empirically validate several claims made in this paper. First, we confirm that neural networks do generalize better than kernel methods. Second, we show that for two-layer networks, the test error decreases and margin increases as the hidden layer grows, as predicted by our theory. 1.1 ADDITIONAL RELATED WORK Zhang et al. (2016) and Neyshabur et al. (2017b) show that neural network generalization defies conventional explanations and requires new ones. Neyshabur et al. (2014) initiate the search for the “inductive bias” of neural networks towards solutions with good generalization. Recent papers (Hardt et al., 2015; Brutzkus et al., 2017; Chaudhari et al., 2016) study inductive bias through training time and sharpness of local minima. Neyshabur et al. 
(2015a) propose a new steepest descent algorithm in a geometry invariant to weight rescaling and show that this improves generalization. Morcos et al. (2018) relate generalization in deep nets to the number of “directions” in the neurons. Other papers (Gunasekar et al., 2017; Soudry et al., 2018; Nacson et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a) study implicit regularization towards a specific solution. Ma et al. (2017) show that implicit regularization can help gradient descent avoid overshooting optima. Rosset et al. (2004a;b) study logistic regression with a weak regularization and show convergence to the max margin solution. We adopt their techniques and extend their results. A line of work initiated by Neyshabur et al. (2015b) has focused on deriving tighter norm-based Rademacher complexity bounds for deep neural networks (Bartlett et al., 2017; Neyshabur et al., 2017a; Golowich et al., 2017) and new compression based generalization properties (Arora et al., 2018b). Dziugaite & Roy (2017) manage to compute non-vacuous generalization bounds from PAC-Bayes bounds. Neyshabur et al. (2018) investigate the Rademacher complexity of two-layer networks and propose a bound that is decreasing with the distance to initialization. Liang & Rakhlin (2018) and Belkin et al. (2018) study the generalization of kernel methods. On the optimization side, Soudry & Carmon (2016) explain why over-parametrization can remove bad local minima. Safran & Shamir (2016) show that over-parametrization can improve the quality of the random initialization. Haeffele & Vidal (2015), Nguyen & Hein (2017), and Venturi et al. (2018) show that for sufficiently overparametrized networks, all local minima are global, but do not show how to find these minima via gradient descent. Du & Lee (2018) show that for two-layer networks with quadratic activations, all second-order stationary points are global minimizers. Arora et al. (2018a) interpret over-parametrization as a means of implicit acceleration during optimization. Mei et al. (2018), Chizat & Bach (2018), and Sirignano & Spiliopoulos (2018) take a distributional view of over-parametrized networks. Chizat & Bach (2018) show that Wasserstein gradient flow converges to global optimizers under structural assumptions. We extend this to a polynomial-time result. 1.2 NOTATION Let R denote the set of real numbers. We will use ‖·‖ to indicate a general norm, with ‖·‖1, ‖·‖2, ‖·‖∞ denoting the `1, `2, `∞ norms on finite dimensional vectors, respectively, and ‖ · ‖F denoting the Frobenius norm on a matrix. In general, we use ¯ on top of a symbol to denote a unit vector: when applicable, ū , u/‖u‖, where the norm ‖ · ‖ will be clear from context. Let Sd−1 , {ū ∈ Rd : ‖ū‖2 = 1} be the unit sphere in d dimensions. Let Lp(Sd−1) be the space of functions on Sd−1 for which the p-th power of the absolute value is Lebesgue integrable. For α ∈ Lp(Sd−1), we overload notation and write ‖α‖p , (∫ Sd−1 |α(ū)| pdū )1/p . Additionally, for α1 ∈ L1(Sd−1) and α2 ∈ L∞(Sd−1) or α1, α2 ∈ L2(Sd−1), we can define 〈α1, α2〉 , ∫ Sd−1 α1(ū)α2(ū)dū < ∞. Furthermore, we will use Vol(Sd−1) , ∫ Sd−1 1dū. Throughout this paper, we reserve the symbol X = [x1, . . . , xn] to denote the collection of datapoints (as a matrix), and Y = [y1, . . . , yn] to denote labels. We use d to denote the dimension of our data. We often use Θ to denote the parameters of a prediction function f , and f(Θ;x) to denote the prediction of f on datapoint x. 
We will use the notation .,& to mean less than or greater than up to a universal constant, respectively. Unless stated otherwise, O(·),Ω(·) denote some universal constant in upper and lower bounds, respectively. The notation poly denotes a universal constant-degree polynomial in the arguments. 2 WEAK REGULARIZER GUARANTEES MAX MARGIN SOLUTIONS In this section, we will show that when we add a weak regularizer to cross-entropy loss with a positive-homogeneous prediction function, the normalized margin of the optimum converges to some max-margin solution. As a concrete example, feedforward relu networks are positive-homogeneous. Let l be the number of labels, so the i-th example has label yi ∈ [l]. We work with a family F of prediction functions f(Θ; ·) : Rd → Rl that are a-positive-homogeneous in their parameters for some a > 0: f(cΘ;x) = caf(Θ;x),∀c > 0. We additionally require that f is continuous in Θ. For some general norm ‖ · ‖, we study the λ-regularized cross-entropy loss Lλ, defined as Lλ(Θ) , n∑ i=1 − log exp(fyi(Θ;xi))∑l j=1 exp(fj(Θ;xi)) + λ‖Θ‖r (2.1) for fixed r > 0. Let Θλ ∈ arg minLλ(Θ).1 We define the normalized margin of Θλ as: γλ , min i ( fyi(Θ̄λ;xi)−max j 6=yi fj(Θ̄λ;xi) ) (2.2) Define the ‖ · ‖-max normalized margin as γ? , max ‖Θ‖≤1 [ min i ( fyi(Θ;xi)−max j 6=yi fj(Θ;xi) )] and let Θ? be a parameter achieving this maximum. We show that with sufficiently small regularization level λ, the normalized margin γλ approaches the maximum margin γ?. Our theorem and proof are inspired by the result of Rosset et al. (2004a;b), who analyze the special case when f is a linear predictor. In contrast, our result can be applied to non-linear f as long as f is homogeneous. Theorem 2.1. Assume the training data is separable by a network f(Θ?; ·) ∈ F with an optimal normalized margin γ? > 0. Then, the normalized margin of the global optimum of the weaklyregularized objective (equation 2.1) converges to γ? as the strength of the regularizer goes to zero. Mathematically, let γλ be defined in equation 2.2. Then γλ → γ? as λ→ 0 1We formally show that Lλ has a minimizer in Claim A.1 of Section A. An intuitive explanation for our result is as follows: because of the homogeneity, the loss L(Θλ) roughly satisfies the following (for small λ, and ignoring problem parameters such as n): Lλ(Θλ) ≈ exp(−‖Θλ‖aγλ) + λ‖Θλ‖r Thus, the loss selects parameters with larger margin, while the regularization favors parameters with a smaller norm. The full proof of the theorem is deferred to Section A.1. Theorem 2.1 applies to feedforward relu networks and states that global minimizers of the weaklyregularized loss will obtain a maximum margin among all networks of the given architecture. By considering global minimizers, Theorem 2.1 provides a framework for directly analyzing generalization properties of the solution without considering details of the optimization algorithm. In Section 3 we leverage this framework and existing generalization bounds (Golowich et al., 2017) to provide a clean argument that over-parameterization can improve generalization. We can also provide an analogue of Theorem 2.1 for the binary classification setting. For this setting, our prediction is now a single real output and we train using logistic loss. We provide formal definitions and results in Section A.2. Our study of the generalization properties of the max-margin (see Section 3 and Section 4) is based in this setting. 
2.1 OPTIMIZATION ACCURACY Since Lλ is typically hard to optimize exactly for neural nets, we study how accurately we need to optimize Lλ to obtain a margin that approximates γ? up to a constant. The following theorem shows that it suffices to find Θ′ achieving a constant factor multiplicative approximation of Lλ(Θλ), where λ is some sufficiently small polynomial in n, l, γ?. Though our theorem is stated for the general multi-class setting, it also applies for binary classification. We provide the proof in Section A.3. Theorem 2.2. In the setting of Theorem 2.1, suppose that we choose λ = exp(−(2r/a − 1)−a/r) (γ ?)r/a nc(l − 1)c for sufficiently large c (that only depends on r/a). For β ≤ 2, let Θ′ denote a β-approximate minimizer of Lλ, so Lλ(Θ′) ≤ βLλ(Θλ). Denote the normalized margin of Θ′ by γ′. Then γ′ ≥ γ ? 10 · βa/r . 3 GENERALIZATION PROPERTIES OF A MAXIMUM MARGIN NEURAL NETWORK In Section 2 we showed that optimizing a weakly-regularized logistic loss leads to the maximum normalized margin. We now study the direct implications of this result on the generalization properties of the solution. Specifically, we use existing Rademacher complexity bounds of Golowich et al. (2017) to present a generalization bound that depends on the network architecture only through the inverse `2-normalized margin and depth of the network (see Proposition 3.1). Next, we combine this bound with Theorem 2.1 to conclude that parameters obtained by optimizing logistic loss with weak `2-regularization will have a generalization bound that scales with the inverse of the maximum possible margin and depth. Finally, we note that the maximum possible margin can only increase as the size of the network grows, which suggests that increasing the size of the network improves the generalization of the solution (see Theorem 3.3). We consider depth-K neural networks with 1-Lipschitz, 1-positive-homogeneous activation φ for K ≥ 2. Suppose that the collection of parameters Θ is given by matrices W1, . . . ,WK . The K-layer network will compute a real-valued score f(Θ;x) ,WKφ(WK−1φ(· · ·φ(W1x) · · · )) (3.1) where we overload notation to let φ(·) denote the element-wise application of the activation φ. Let mi denote the size of the i-th hidden layer, so W1 ∈ Rm1×d,W2 ∈ Rm2×m1 , · · · ,WK ∈ R1×mK−1 . We will letM , (m1, . . . ,mK−1) denote the sequence of hidden layer sizes. We will focus on `2-regularized loss. The weakly-regularized logistic loss of the depth-K architecture with hidden layer sizesM is therefore Lλ,M(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖2F (3.2) We note that f is K-homogeneous in Θ, so the results of Section 2 apply to Lλ,M.2 Following our conventions from Section 2, we denote the optimizer of Lλ,M by Θλ,M, the normalized margin of Θλ,M by γλ,M, the max-margin solution by Θ?,M, and the max-margin by γ?,M. Our notation emphasizes the architecture of the network. Since the classifier f now predicts a single real value, we need to redefine γλ,M , min i yif(Θ̄λ,M;xi) γ?,M , max ‖Θ‖2≤1 min i yif(Θ;xi) When the data is not separable by a neural network with architectureM, we define γ?,M to be zero. Recall that X = [x1, . . . , xn] denotes the matrix with all the data points as columns, and Y = [y1, . . . , yn] denotes the labels. We sample X and Y i.i.d. from the data generating distribution pdata, which is supported on X × {−1,+1}. 
We can define the population 0-1 loss and training 0-1 loss of the network parametrized by Θ by

L(Θ) := Pr_{(x,y)∼p_data}[ y f(Θ; x) ≤ 0 ]

Let C := sup_{x∈X} ‖x‖_2 be an upper bound on the norm of a single datapoint. Proposition 3.1 shows that the generalization error only depends on the parameters through the inverse of the margin on the training data. We obtain Proposition 3.1 by applying Theorem 1 of Golowich et al. (2017) with the standard technique of using margin loss to bound classification error. There exist other generalization bounds which depend on the margin and some normalization (Neyshabur et al., 2015b; 2017a; Bartlett et al., 2017; Neyshabur et al., 2018); we choose the bounds of Golowich et al. (2017) because they fit well with ℓ2 normalization. In the two-layer case K = 2, the bound below also follows from Neyshabur et al. (2015b).

Proposition 3.1. [Straightforward consequence of Golowich et al. (2017, Theorem 1)] Suppose φ is 1-Lipschitz and 1-positive-homogeneous. For any depth-K network f(Θ; ·) separating the data with normalized margin γ := min_i y_i f(Θ̄; x_i) > 0, with probability at least 1 − δ over the draw of X, Y,

L(Θ) ≲ C / ( γ K^{(K−1)/2} √n ) + ε(γ)    (3.3)

where ε(γ) := √( log log_2(4C/γ) / n ) + √( log(1/δ) / n ). Note that ε(γ) is typically small, and thus the above bound mainly scales with C / ( γ K^{(K−1)/2} √n ).[3] For completeness, we state the proof in Section C.1.

[3] Although the 1/K^{(K−1)/2} factor of equation 3.3 decreases with depth K, the margin γ will also tend to decrease as the constraint ‖Θ̄‖_F ≤ 1 becomes more stringent.

By combining this bound with our Theorem 2.1 we can conclude that optimizing weakly-regularized logistic loss gives us generalization error bounds that depend on the maximum possible margin of a network with the given architecture.

Corollary 3.2. In the setting of Proposition 3.1, with probability 1 − δ,

lim sup_{λ→0} L(Θ_{λ,M}) ≲ C / ( γ*,M K^{(K−1)/2} √n ) + ε(γ*,M)    (3.4)

where ε(γ) is defined as in Proposition 3.1. Above we implicitly assume γ*,M > 0, since otherwise the right-hand side of the bound is vacuous.

By applying Theorem 2.2 with Proposition 3.1, we can also conclude that optimizing L_{λ,M} within a constant factor gives a margin, and therefore a generalization bound, approximating the best possible. One consequence of Corollary 3.2 is that optimizing weakly-regularized logistic loss results in the best possible generalization bound out of all models with the given architecture. This indicates that the widely used algorithm of optimizing deep networks with ℓ2-regularized logistic loss has an implicit bias towards solutions with good generalization.

Next, we observe that the maximum normalized margin is non-decreasing with the size of the architecture. Formally, for two depth-K architectures M = (m_1, . . . , m_{K−1}) and M′ = (m′_1, . . . , m′_{K−1}), we say M ≤ M′ if m_i ≤ m′_i for all i = 1, . . . , K − 1. Theorem 3.3 states that if M ≤ M′, then the max-margin over networks with architecture M′ is at least the max-margin over networks with architecture M.

Theorem 3.3. Recall that γ*,M denotes the maximum normalized margin of a network with architecture M. If M ≤ M′, we have γ*,M ≤ γ*,M′. As an important consequence, the generalization error bound of Corollary 3.2 for M′ is at least as good as that for M.
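A quick numerical illustration of Theorem 3.3 (a minimal sketch of our own, assuming numpy; the architectures and weights below are arbitrary): zero-padding the weight matrices of a network with architecture M yields a network with a wider architecture M′ that computes exactly the same function and has exactly the same Frobenius norm, so no normalized margin achievable by M is lost when moving to M′.

import numpy as np

def pad(W, out_rows, out_cols):
    P = np.zeros((out_rows, out_cols))
    P[:W.shape[0], :W.shape[1]] = W              # embed W in the top-left corner
    return P

def forward(Ws, x):
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0.0)               # relu
    return (Ws[-1] @ h).item()

rng = np.random.default_rng(1)
# architecture M = (3, 3) and a wider M' = (5, 6), both of depth K = 3
Ws_small = [rng.standard_normal((3, 2)), rng.standard_normal((3, 3)), rng.standard_normal((1, 3))]
Ws_big = [pad(Ws_small[0], 5, 2), pad(Ws_small[1], 6, 5), pad(Ws_small[2], 1, 6)]

x = rng.standard_normal(2)
frob_sq = lambda Ws: sum(np.sum(W ** 2) for W in Ws)
print(forward(Ws_small, x), forward(Ws_big, x))   # identical outputs
print(frob_sq(Ws_small), frob_sq(Ws_big))         # identical squared Frobenius norms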
This theorem is simple to prove and follows because we can directly implement any network of architecture M using one of architecture M′, if M ≤ M′. This can explain why additional over-parameterization has been empirically observed to improve generalization in two-layer networks (Neyshabur et al., 2017b): the margin does not decrease with a larger network size, and therefore Corollary 3.2 gives a better generalization bound. In Section 6, we provide empirical evidence that the test error decreases with larger network size while the margin is non-decreasing.

The phenomenon in Theorem 3.3 contrasts with standard ℓ2-normalized linear prediction. In that setting, adding more features increases the norm of the data, and therefore the generalization error bounds could also increase. On the other hand, Theorem 3.3 shows that adding more neurons (which can be viewed as learned features) can only improve the generalization of the max-margin solution.

4 NEURAL NET MAX-MARGIN VS. KERNEL METHODS

We will continue our study of the max-margin neural network via comparison against kernel methods, a context in which margins have already been extensively studied. We show that two-layer networks can obtain a larger margin, and therefore better generalization guarantees, than kernel methods. Our comparison between the two methods is motivated by an equivalence between the ℓ2 max-margin of an infinite-width two-layer network and the ℓ1-SVM (Zhu et al., 2004) over the lifted feature space defined by the activation function applied to all possible hidden units (Neyshabur et al., 2014; Rosset et al., 2007; Bengio et al., 2006). The kernel method corresponds to the ℓ2-SVM in this same feature space, and is equivalent to fixing random hidden layer weights and solving an ℓ2-SVM over the top layer. In Theorem 4.3, we construct a distribution for which the generalization upper bounds for the ℓ1-SVM on this feature space are smaller than those for the ℓ2-SVM by an Ω(√d) factor. Our work provides evidence that optimizing all layers of a network can be beneficial for generalization. There have been works that compare ℓ1- and ℓ2-regularized solutions in the context of feature selection and construct a feature space for which a generalization gap exists (e.g., see Ng (2004)). In contrast, we work in the fixed feature space of relu activations, which makes our construction particularly challenging.

We will use m to denote the width of the single hidden layer of the network. Following the convention from Section 3, we will use γ*,m to denote the maximum possible normalized margin of a two-layer network with hidden layer size m (note the emphasis on the size of the single hidden layer). The depth K = 2 case of Corollary 3.2 immediately implies that optimizing weakly-regularized ℓ2 loss over width-m two-layer networks gives parameters whose generalization upper bounds depend on the hidden layer size only through 1/γ*,m. Furthermore, from Theorem 3.3 it immediately follows that

γ*,1 ≤ γ*,2 ≤ ··· ≤ γ*,∞

The work of Neyshabur et al. (2014) links γ*,m to the ℓ1 SVM over a lifted space. Formally, we define a lifting function ϕ : R^d → L∞(S^{d−1}) mapping data to an infinite feature vector: x ∈ R^d → ϕ(x) ∈ L∞(S^{d−1}) satisfying

ϕ(x)[ū] = φ(ū^⊤ x)    (4.1)

where φ is the activation of Section 3. We look at the margin of linear functionals corresponding to α ∈ L1(S^{d−1}).
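The infinite-dimensional lifting in equation 4.1 has a simple finite surrogate that is worth keeping in mind, and it is exactly the "random hidden weights" view of the kernel method above: sample a fixed set of directions uniformly from the sphere and use their relu responses as features. A minimal sketch of our own (assuming numpy; the number of sampled directions and the data are arbitrary):

import numpy as np

def lifted_features(X, num_dirs=2000, seed=0):
    # Monte Carlo surrogate of phi(x)[u] = relu(u^T x) for u in S^{d-1}:
    # column j holds the responses to the j-th sampled direction.
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((num_dirs, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # rows uniform on the sphere
    return np.maximum(X @ U.T, 0.0)                  # shape (n, num_dirs)

X = np.random.default_rng(1).standard_normal((5, 3))
print(lifted_features(X).shape)                      # (5, 2000)

A linear functional α then corresponds to a weight vector over the sampled columns; constraining its ℓ2 norm recovers the kernel method with random hidden weights, while constraining its ℓ1 norm corresponds to the 1-norm SVM formalized next.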
The 1-norm SVM (Zhu et al., 2004) over the lifted feature ϕ(x) solves for the maximum margin:

γ_{ℓ1} := max_α min_{i∈[n]} y_i ⟨α, ϕ(x_i)⟩    subject to ‖α‖_1 ≤ 1    (4.2)

where we rely on the inner product and 1-norm defined in Section 1.2. This formulation is equivalent to a hard-margin optimization on “convex neural networks” (Bengio et al., 2006). Bach (2017) also studies optimization and generalization of convex neural networks. Using results from Rosset et al. (2007); Neyshabur et al. (2014); Bengio et al. (2006), our Theorem 2.1 implies that optimizing weakly-regularized logistic loss over two-layer networks is equivalent to solving equation 4.2 when the size of the hidden layer is at least n + 1, where n is the number of training examples. Proposition 4.1 essentially restates this with the minor improvement that this equivalence[4] also holds when the size of the hidden layer is n.

[4] The factor of 1/2 is due to the relation that every unit-norm parameter Θ corresponds to an α in the lifted space with ‖α‖ = 2.

Proposition 4.1. Let γ_{ℓ1} be defined in equation 4.2. Then γ_{ℓ1}/2 = γ*,n = ··· = γ*,∞.

For completeness, we prove Proposition 4.1 in Section B, relying on the work of Tibshirani (2013) and Rosset et al. (2004a). Importantly, the ℓ1-max margin on the lifted feature space is obtainable by optimizing a finite neural network. We compare this to the ℓ2 margin attainable via kernel methods. Following the setup of equation 4.2, we define the kernel problem over α ∈ L2(S^{d−1}):

γ_{ℓ2} := max_α min_{i∈[n]} y_i ⟨α, ϕ(x_i)⟩    subject to √κ ‖α‖_2 ≤ 1    (4.3)

where κ := Vol(S^{d−1}). (We scale ‖α‖_2 by √κ to make the lemma statement below cleaner.) First, γ_{ℓ2} can be used to obtain a standard upper bound on the generalization error of the kernel SVM. Following the notation of Section 3, we will let L_{ℓ2-svm} denote the 0-1 population classification error for the optimizer of equation 4.3.

Lemma 4.2. In the setting of Proposition 3.1, with probability at least 1 − δ, the generalization error of the standard kernel SVM with relu features (defined in equation 4.3) is bounded by

L_{ℓ2-svm} ≲ C / ( γ_{ℓ2} √(dn) ) + ε_{ℓ2}    (4.4)

where ε_{ℓ2} := √( log max{ log_2( C / (√d γ_{ℓ2}) ), 2 } / n ) + √( log(1/δ) / n ) is typically a lower-order term.

The bound above follows from standard techniques (Bartlett & Mendelson, 2002), and we provide a full proof in Section C.2. We construct a data distribution for which this lemma does not give a good bound for kernel methods, but Corollary 3.2 does imply good generalization for two-layer networks.

Theorem 4.3. There exists a data distribution p_data such that the ℓ1 SVM with relu features has a good margin, γ_{ℓ1} ≳ 1, and with probability 1 − δ over the choice of i.i.d. samples from p_data obtains generalization error

L_{ℓ1-svm} ≲ √( d log n / n ) + ε_{ℓ1}

where ε_{ℓ1} := √( log(1/δ) / n ) is typically a lower-order term. Meanwhile, with high probability the ℓ2 SVM has a small margin, γ_{ℓ2} ≲ max{ √( log n / n ), 1/d }, and therefore the generalization upper bound from Lemma 4.2 is at least Ω( min{ 1, d √( log n / n ) } ). In particular, the ℓ2 bound is larger than the ℓ1 bound by an Ω(√d) factor.

Although Theorem 4.3 compares upper bounds, our construction highlights properties of distributions which result in better neural network generalization than kernel method generalization. Furthermore, in Section 6 we empirically validate the gap in generalization between the two methods.
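To get a feel for the size of this gap, the short sketch below (our own illustration, assuming numpy; the universal constants hidden by ≲ and Ω are set to 1, so the numbers are only indicative) evaluates the two leading terms from Theorem 4.3, √(d log n / n) for the ℓ1 SVM and min{1, d√(log n / n)} for the ℓ2 SVM, at a few hypothetical values of d and n:

import numpy as np

def l1_term(d, n):                 # leading term of the l1-SVM bound in Theorem 4.3
    return np.sqrt(d * np.log(n) / n)

def l2_term(d, n):                 # lower bound on the l2-SVM bound from Lemma 4.2
    return min(1.0, d * np.sqrt(np.log(n) / n))

for d in (10, 100, 1000):
    n = 50 * d * d                 # hypothetical sample size, large enough that the cap is inactive
    ratio = l2_term(d, n) / l1_term(d, n)
    print(f"d={d:4d}  n={n:8d}  l1~{l1_term(d, n):.3f}  l2~{l2_term(d, n):.3f}  ratio~{ratio:.1f}")

The printed ratio grows like √d, matching the Ω(√d) separation stated in the theorem.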
We briefly overview the construction of p_data here. The full proof is in Section D.1.

Proof sketch for Theorem 4.3. We base p_data on the distribution D of examples (x, y) described below. Here e_i is the i-th standard basis vector and we use x^⊤e_i to represent the i-th coordinate of x (since the subscript is reserved to index training examples). The last d − 2 coordinates are Gaussian, (x^⊤e_3, . . . , x^⊤e_d) ∼ N(0, I_{d−2}), and the first two coordinates and the label are drawn as follows:

y = +1, x^⊤e_1 = +1, x^⊤e_2 = +1   with probability 1/4
y = +1, x^⊤e_1 = −1, x^⊤e_2 = −1   with probability 1/4
y = −1, x^⊤e_1 = +1, x^⊤e_2 = −1   with probability 1/4
y = −1, x^⊤e_1 = −1, x^⊤e_2 = +1   with probability 1/4

Figure 1 shows samples from D when there are 3 dimensions. From the visualization, it is clear that there is no linear separator for D. As Lemma D.1 shows, a relu network with four neurons can fit this relatively complicated decision boundary. On the other hand, for kernel methods, we prove that the symmetries in D induce cancellation in feature space. As a result, the features are less predictive of the true label and the margin will therefore be small. We formalize this argument in Section D.1.

Gap in regression setting: We are able to prove an even larger Ω(√(n/d)) gap between neural networks and kernel methods in the regression setting where we wish to interpolate continuous labels. Analogously to the classification setting, optimizing a regularized squared error loss on neural networks is equivalent to solving a minimum 1-norm regression problem (see Theorem D.5). Furthermore, kernel methods correspond to a minimum 2-norm problem. We construct distributions p_data where the 1-norm solution will have a generalization error bound of O(√(d/n)), whereas the 2-norm solution will have a generalization error bound that is Ω(1) and thus vacuous. In Section D.2, we define the 1-norm and 2-norm regression problems. In Theorem D.10 we formalize our construction.
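Returning to the classification construction, the following minimal sketch (our own illustration, assuming numpy; the sample size and ambient dimension are arbitrary) draws examples from D and evaluates the four-neuron relu network written out in Lemma D.1 of the appendix, checking that it attains normalized margin √2/4 ≈ 0.354 on every example, independently of the Gaussian coordinates:

import numpy as np

def sample_D(n, d, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, d))              # coordinates 3..d are N(0, 1)
    s = rng.integers(0, 2, size=(n, 2)) * 2 - 1  # first two coordinates are +-1
    X[:, 0], X[:, 1] = s[:, 0], s[:, 1]
    y = s[:, 0] * s[:, 1]                        # y = +1 iff the two signs agree
    return X, y

def four_neuron_net(x):
    # f(x) = 1/4 [ relu((x1+x2)/sqrt2) + relu((-x1-x2)/sqrt2)
    #              - relu((-x1+x2)/sqrt2) - relu((x1-x2)/sqrt2) ]   (Lemma D.1)
    r = lambda t: max(t, 0.0)
    s2 = np.sqrt(2.0)
    return 0.25 * (r((x[0] + x[1]) / s2) + r((-x[0] - x[1]) / s2)
                   - r((-x[0] + x[1]) / s2) - r((x[0] - x[1]) / s2))

X, y = sample_D(1000, d=20)
margins = np.array([yi * four_neuron_net(xi) for xi, yi in zip(X, y)])
print(margins.min(), margins.max())              # both ~ sqrt(2)/4 ~ 0.3536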
5 PERTURBED WASSERSTEIN GRADIENT FLOW FINDS GLOBAL OPTIMIZERS IN POLYNOMIAL TIME

In the prior section, we studied the limiting behavior of the generalization of a two-layer network as its width goes to infinity. In this section, we will now study the limiting behavior of the optimization algorithm, gradient descent. Prior work (Mei et al., 2018; Chizat & Bach, 2018) has shown that as the hidden layer size grows to infinity, gradient descent for a finite neural network approaches the Wasserstein gradient flow over distributions of hidden units (defined in equation 5.1). Chizat & Bach (2018) assume the gradient flow converges, a non-trivial assumption since the space of distributions is infinite-dimensional, and given the assumption prove that Wasserstein gradient flow converges to a global optimizer in this setting, but do not specify a convergence rate. Mei et al. (2018) show global convergence for the infinite-neuron limit of stochastic Langevin dynamics, but also do not provide a convergence rate. We show that a perturbed version of Wasserstein gradient flow converges in polynomial time. The informal take-away of this section is that a perturbed version of gradient descent converges in polynomial time on infinite-size neural networks (for the right notion of infinite size).

Formally, we optimize the following functional over distributions ρ on R^{d+1}:

L[ρ] := R( ∫ Φ dρ ) + ∫ V dρ

where Φ : R^{d+1} → R^k, R : R^k → R, and V : R^{d+1} → R. In this work, we consider 2-homogeneous Φ and V. We will additionally require that R is convex and nonnegative and V is positive on the unit sphere. Finally, we need standard regularity assumptions on R, Φ, and V:

Assumption 5.1 (Regularity conditions on Φ, R, V). Φ and V are differentiable as well as upper bounded and Lipschitz on the unit sphere. R is Lipschitz and its Hessian has bounded operator norm.

We provide more details on the specific parameters (for boundedness, Lipschitzness, etc.) in Section E.1. We note that relu networks satisfy every condition but differentiability of Φ.[5] We can fit a neural network under our framework as follows:

[5] The relu activation is non-differentiable at 0 and hence the gradient flow is not well-defined; Chizat & Bach (2018) acknowledge this same difficulty with relu.

Example 5.2 (Logistic loss for neural networks). We interpret ρ as a distribution over the parameters of the network. Let k := n and Φ_i(θ) := w φ(u^⊤ x_i) for θ = (w, u). In this case, ∫ Φ dρ is a distributional neural network that computes an output for each of the n training examples (like a standard neural network, it also computes a weighted sum over hidden units). We can compute the distributional version of the regularized logistic loss in equation 3.2 by setting V(θ) := λ‖θ‖_2² and R(a_1, . . . , a_n) := Σ_{i=1}^n log(1 + exp(−y_i a_i)).

We will define L′[ρ] : R^{d+1} → R with

L′[ρ](θ) := ⟨R′( ∫ Φ dρ ), Φ(θ)⟩ + V(θ)    and    v[ρ](θ) := −∇_θ L′[ρ](θ)

Informally, L′[ρ] is the gradient of L with respect to ρ, and v is the induced velocity field. For the standard Wasserstein gradient flow dynamics, ρ_t evolves according to

(d/dt) ρ_t = −∇ · (v[ρ_t] ρ_t)    (5.1)

where ∇· denotes the divergence of a vector field. For neural networks, these dynamics formally define continuous-time gradient descent when the hidden layer has infinite size (see Theorem 2.6 of Chizat & Bach (2018), for instance). We propose the following modification of the Wasserstein gradient flow dynamics:

(d/dt) ρ_t = −σρ_t + σU_d − ∇ · (v[ρ_t] ρ_t)    (5.2)

where U_d is the uniform distribution on S^d. In our perturbed dynamics, we add very small uniform noise over U_d, which ensures that at all time-steps there is sufficient mass in a descent direction for the algorithm to decrease the objective. For infinite-size neural networks, one can informally interpret this as re-initializing a very small fraction of the neurons at every step of gradient descent. We prove convergence to a global optimizer in time polynomial in 1/ε, d, and the regularity parameters.

Theorem 5.3 (Theorem E.4 with regularity parameters omitted). Suppose that Φ and V are 2-homogeneous and the regularity conditions of Assumption 5.1 are satisfied. Also assume that from starting distribution ρ_0, a solution to the dynamics in equation 5.2 exists. Define L* := inf_ρ L[ρ]. Let ε > 0 be a desired error threshold and choose

σ := exp(−d log(1/ε) · poly(k, L[ρ_0] − L*))    and    t_ε := (d²/ε⁴) · poly(log(1/ε), k, L[ρ_0] − L*)

where the regularity parameters for Φ, V, and R are hidden in the poly(·). Then, perturbed Wasserstein gradient flow converges to an ε-approximate global minimum within time t_ε:

min_{0 ≤ t ≤ t_ε} L[ρ_t] − L* ≤ ε

We provide a theorem statement that includes regularity parameters in Section E.1. We prove the theorem in Section E.2. As a technical detail, Theorem 5.3 requires that a solution to the dynamics exists. We can remove this assumption by analyzing a discrete-time version of equation 5.2:

ρ_{t+1} := ρ_t + η( −σρ_t + σU_d − ∇ · (v[ρ_t] ρ_t) )

and additionally assuming Φ and V have Lipschitz gradients. In this setting, a polynomial-time convergence result also holds. We state the result in Section E.3. An implication of our Theorem 5.3 is that for infinite networks, we can optimize the weakly-regularized logistic loss in time polynomial in the problem parameters and λ^{−1}. By Theorem 2.2, we only require λ^{−1} = poly(n) to approximate the maximum margin within a constant factor. Thus, for infinite networks, we can approximate the max margin within a constant factor in polynomial time.
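The finite-width picture behind these dynamics can be simulated directly. The sketch below (our own illustration, assuming numpy; the width, step size, σ, regularization, and toy data are all hypothetical choices, and it is a crude discrete-time, finite-particle analogue rather than the flow analyzed in Theorem 5.3) instantiates Example 5.2: each particle is one hidden unit θ = (w, u), every step moves each particle along the velocity field v[ρ](θ) = −∇_θ L′[ρ](θ), and a small random fraction of particles is re-initialized uniformly on the sphere, mirroring the σU_d perturbation of equation 5.2:

import numpy as np

rng = np.random.default_rng(0)
n, d, m = 40, 2, 512                          # examples, input dim, particles (hidden units)
lam, eta, sigma, steps = 1e-4, 0.02, 1e-3, 5000

X = rng.standard_normal((n, d))
y = np.sign(X[:, 0] * X[:, 1])                # toy XOR-like labels in {-1, +1}

def uniform_sphere(num):                      # uniform on S^d inside R^{d+1}
    P = rng.standard_normal((num, d + 1))
    return P / np.linalg.norm(P, axis=1, keepdims=True)

P = uniform_sphere(m)                         # row j is theta_j = (w_j, u_j)

for _ in range(steps):
    w, U = P[:, 0], P[:, 1:]
    act = np.maximum(X @ U.T, 0.0)            # relu(u_j . x_i), shape (n, m)
    a = act @ w / m                           # predictions: integral of Phi d(rho)
    Rp = -y / (1.0 + np.exp(y * a))           # R'(a)_i for the logistic loss
    grad_w = act.T @ Rp + 2 * lam * w         # gradient of <R'(a), Phi(theta)> + V(theta)
    grad_U = ((Rp[:, None] * (act > 0)).T @ X) * w[:, None] + 2 * lam * U
    P = P - eta * np.c_[grad_w, grad_U]       # move along the velocity field
    kick = rng.random(m) < sigma * eta        # perturbation: re-initialize a tiny fraction
    P[kick] = uniform_sphere(int(kick.sum()))

w, U = P[:, 0], P[:, 1:]
a = np.maximum(X @ U.T, 0.0) @ w / m
obj = np.sum(np.log1p(np.exp(-y * a))) + lam * np.mean(np.sum(P ** 2, axis=1))
print("train error:", float(np.mean(np.sign(a) != y)), " objective ~", float(obj))

On this toy problem the training error typically drops to zero, while the σ-driven re-initializations keep a small amount of mass spread over all directions, which is the role the perturbation plays in the analysis.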
6 SIMULATIONS

We first compare the generalization of neural networks and kernel methods for classification and regression. In Figure 2 we plot the generalization error and predicted generalization upper bounds[6] of a trained neural network against an ℓ2 kernel method with relu features as we vary n. Our data comes from a synthetic distribution generated by a neural network with 6 hidden units; we provide a detailed setup in Section F.1. For classification we plot 0-1 error, whereas for regression we plot squared error. The variance in the neural network generalization bound for classification likely occurred because we did not tune the learning rate and training time, so the optimization failed to find the best margin. The plots show that two-layer networks clearly outperform kernel methods in test error as n grows. However, there seems to be looseness in the bounds: the kernel generalization bound appears to stay constant with n (as predicted by our theory for regression), but the test error decreases.

[6] We compute the leading term that is linear in the norm or inverse margin from the bounds in Proposition 3.1 and Lemmas 4.2, D.8, and D.9.

We also plot the dependence of the test error and margin on the hidden layer size in Figure 3 for synthetic data generated from a ground-truth network with 10 hidden units and also for MNIST. The plots indicate that test error is decreasing in hidden layer size while margin is increasing, as Theorem 3.3 predicts. We provide more details on the experimental setup in Section F.2. In Section F.3, we verify the convergence of a simple neural network to the max-margin solution as regularization decreases. In Section F.4, we train modified WideResNet architectures on CIFAR10 and CIFAR100. Although ResNet is not homogeneous, we still report improvements in generalization from annealing the weight decay during training, versus staying at a fixed decay rate.

7 CONCLUSION

We have made the case that maximizing margin is one of the inductive biases of relu networks obtained from optimizing weakly-regularized cross-entropy loss. Our framework allows us to directly analyze generalization properties of the network without considering the optimization algorithm used to obtain it. Using this perspective, we provide a simple explanation for why over-parametrization can improve generalization. It is a fascinating question for future work to characterize other generalization properties of the max-margin solution. On the optimization side, we make progress towards understanding over-parametrized gradient descent by analyzing infinite-size neural networks. A natural direction for future work is to apply our theory to optimize the margin of finite-sized neural networks.

A MISSING PROOFS IN SECTION 2

We first show that L_λ does indeed have a global minimizer.

Claim A.1. In the setting of Theorems 2.1 and A.3, arg min_Θ L_λ(Θ) exists.

Proof. We will argue in the setting of Theorem 2.1 where L_λ is the multi-class cross-entropy loss, because the logistic loss case is analogous. We first note that L_λ is continuous in Θ because f is continuous in Θ and the term inside the logarithm is always positive. Next, define b := inf_Θ L_λ(Θ) > 0. Then we note that for ‖Θ‖ > (b/λ)^{1/r} =: M, we must have L_λ(Θ) > b. It follows that inf_{‖Θ‖≤M} L_λ(Θ) = inf_Θ L_λ(Θ). However, there must be a value Θ_λ which attains inf_{‖Θ‖≤M} L_λ(Θ), because {Θ : ‖Θ‖ ≤ M} is a compact set and L_λ is continuous. Thus, inf_Θ L_λ(Θ) is attained by some Θ_λ.
A.1 MISSING PROOFS FOR MULTI-CLASS SETTING Towards proving Theorem 2.1, we first show as we decrease λ, the norm of the solution ‖Θλ‖ grows. Lemma A.2. In the setting of Theorem 2.1, as λ→ 0, we have ‖Θλ‖ → ∞. To prove Theorem 2.1, we rely on the exponential scaling of the cross entropy: Lλ can be lower bounded roughly by exp(−‖Θλ‖γλ), but also has an upper bound that scales with exp(−‖Θλ‖γ?). By Lemma A.2, we can take large ‖Θλ‖ so the gap γ?−γλ vanishes. This proof technique is inspired by that of Rosset et al. (2004a). Proof of Theorem 2.1. For any M > 0 and Θ with γΘ , mini ( f(Θ̄;xi)−maxj 6=yi f(Θ̄;xi) ) , Lλ(MΘ) = 1 n n∑ i=1 − log exp(M afyi(Θ;xi))∑l j=1 exp(M afj(Θ;xi)) + λMr‖Θ‖r (by the homogeneity of f ) = 1 n n∑ i=1 − log 1 1 + ∑ j 6=yi exp(M a(fj(Θ;xi)− fyi(Θ;xi))) + λMr‖Θ‖r (A.1) ≤ log(1 + (l − 1) exp(−MaγΘ)) + λMr‖Θ‖r (A.2) We can also apply ∑ j 6=yi exp(M a(fj(Θ;xi) − fyi(Θ;xi))) ≥ max exp(Ma(fj(Θ;xi) − fyi(Θ;xi))) = exp γΘ in order to lower bound equation A.1 and obtain Lλ(MΘ) ≥ 1 n log(1 + exp(−MaγΘ)) + λMr‖Θ‖r (A.3) Applying equation A.2 with M = ‖Θλ‖ and Θ = Θ?, noting that ‖Θ?‖ ≤ 1, we have: Lλ(Θ ?‖Θλ‖) ≤ log(1 + (l − 1) exp(−‖Θλ‖aγ?)) + λ‖Θλ‖r (A.4) Next we lower bound Lλ(Θλ) by applying equation A.3, Lλ(Θλ) ≥ 1 n log(1 + exp(−‖Θλ‖aγλ)) + λ‖Θλ‖r (A.5) Combining equation A.4 and equation A.5 with the fact that Lλ(Θλ) ≤ Lλ(Θ?‖Θλ‖) (by the global optimality of Θλ), we have ∀λ > 0, n log(1 + (l − 1) exp(−‖Θλ‖aγ?)) ≥ log(1 + exp(−‖Θλ‖aγλ)) Recall that by Lemma A.2, as λ → 0, we have ‖Θλ‖ → ∞. Therefore, exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ) → 0. Thus, we can apply Taylor expansion to the equation above with respect to exp(−‖Θλ‖aγ?) and exp(−‖Θλ‖aγλ). If max{exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ)} < 1, then we obtain n(l − 1) exp(−‖Θλ‖aγ?) ≥ exp(−‖Θλ‖aγλ)−O(max{exp(−‖Θλ‖aγ?)2, exp(−‖Θλ‖aγλ)2}) We claim this implies that γ? ≤ lim infλ→0 γλ. If not, we have lim infλ→0 γλ < γ? , which implies that the equation above is violated with sufficiently large ‖Θλ‖ (‖Θλ‖ log(2(`− 1)n)1/a would suffice). By Lemma A.2, ‖Θλ‖ → ∞ as λ→ 0 and therefore we get a contradiction. Finally, we have γλ ≤ γ? by definition of γ?. Hence, limλ→0 γλ exists and equals γ?. Now we fill in the proof of Lemma A.2. Proof of Lemma A.2. For the sake of contradiction, we assume that ∃C > 0 such that for any λ0 > 0, there exists 0 < λ < λ0 with ‖Θλ‖ ≤ C. We will determine the choice of λ0 later and pick λ such that ‖Θλ‖ ≤ C. Then the logits (the prediction fj(Θ, xi) before softmax) are bounded in absolute value by some constant (that depends on C), and therefore the loss function − log exp(fyi (Θ;xi))∑l j=1 exp(fj(Θ;xi)) for every example is bounded from below by some constant D > 0 (depending on C but not λ.) Let M = λ−1/(r+1), we have that 0 < D ≤ Lλ(Θλ) ≤ Lλ(MΘ?) (by the optimality of Θλ) ≤ − log 1 1 + (l − 1) exp(−Maγ?) + λMr (by equation A.2) = log(1 + (l − 1) exp(−λ−a/(r+1)γ?)) + λ1/(r+1) ≤ log(1 + (l − 1) exp(−λ−a/(r+1)0 γ?)) + λ 1/(r+1) 0 Taking a sufficiently small λ0, we obtain a contradiction and complete the proof. A.2 FULL BINARY CLASSIFICATION SETTING For completeness, we state and prove our max-margin results for the setting where we fit binary labels yi ∈ {−1,+1} (as opposed to indices in [l]) and redefining f(Θ; ·) to assign a single real-valued score (as opposed to a score for each label). This lets us work with the simpler λ-regularized logistic loss: Lλ(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖r As before, let Θλ ∈ arg minLλ(Θ), and define the normalized margin γλ by γλ , mini yif(Θ̄λ;xi). 
Define the maximum possible normalized margin γ? , max ‖Θ‖≤1 min i yif(Θ;xi) (A.6) Theorem A.3. Assume γ? > 0 in the binary classification setting with logistic loss. Then as λ→ 0, γλ → γ?. The proof follows via simple reduction to the multi-class case. Proof of Theorem A.3. We prove this theorem via reduction to the multi-class case with l = 2. Construct f̃ : Rd → R2 with f̃1(Θ;xi) = − 12f(Θ;xi) and f̃2(Θ;xi) = 1 2f(Θ;xi). Define new labels ỹi = 1 if yi = −1 and ỹi = 2 if yi = 1. Now note that f̃ỹi(Θ;xi)−f̃j 6=ỹi(Θ;xi) = yif(Θ;xi), so the multi-class margin for Θ under f̃ is the same as binary margin for Θ under f . Furthermore, defining L̃λ(Θ) , 1 n n∑ i=1 − log exp(f̃ỹi(Θ;xi))∑2 j=1 exp(f̃j(Θ;xi)) + λ‖Θ‖r we get that L̃λ(Θ) = Lλ(Θ), and in particular, L̃λ and Lλ have the same set of minimizers. Therefore we can apply Theorem 2.1 for the multi-class setting and conclude γλ → γ? in the binary classification setting. A.3 MISSING PROOF FOR OPTIMIZATION ACCURACY Proof of Theorem 2.2. Choose B , ( 1 γ? log (l−1)(γ?)r/a λ )1/a . We can upper bound Lλ(Θ′) by computing Lλ(Θ ′) ≤ βLλ(Θλ) ≤ βLλ(BΘ?) ≤ β log(1 + (l − 1) exp(−Baγ?)) + βλBr (by equation A.2) ≤ β(l − 1) exp(−Baγ?) + βλBr (using log(1 + x) ≤ x) ≤ β λ (γ?)r/a + βλ ( 1 γ? log (l − 1)(γ?)r/a λ )r/a ≤ β λ (γ?)r/a ( 1 + ( log (l − 1)(γ?)r/a λ )r/a) , L(UB) Furthermore, it holds that ‖Θ′‖r ≤ L (UB) λ . Now we note that Lλ(Θ ′) ≤ L(UB) ≤ 2β λ (γ?)r/a ( log (l − 1)(γ?)r/a λ )r/a ≤ 1 2n for sufficiently large c depending only on a/r. Now using the fact that log(x) ≥ x1+x ∀x ≥ −1, we additionally have the lower bound Lλ(Θ′) ≥ 1n log(1 + exp(−γ ′‖Θ′‖a)) ≥ 1n exp(−γ′‖Θ′‖a) 1+exp(−γ′‖Θ′‖a) . Since L(UB) ≤ 1, we can rearrange to get γ′ ≥ − log nLλ(Θ ′) 1−nLλ(Θ′) ‖Θ′‖a ≥ − log nL (UB) 1−nL(UB) ‖Θ′‖a ≥ − log(2nL (UB)) ‖Θ′‖a The middle inequality followed because x1−x is increasing in x for 0 ≤ x < 1, and the last because L(UB) ≤ 12n . Since − log 2nL (UB) > 0 we can also apply the bound ‖Θ′‖r ≤ L (UB) λ to get γ′ ≥ −λ a/r log 2nL(UB) (L(UB))a/r = − log ( 2nβ λ (γ?)r/a ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)) βa/r γ? ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r (by definition of L(UB)) ≥ γ ? βa/r log( (γ ?)r/a 2βnλ )( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♣ − log ( 1 + ( log (l−1)(γ ?)r/a λ )r/a) ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♥ We will first bound ♣. First note that log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ = log (γ ?)r/a λ − log 2βn log (γ ?)r/a λ + log(l − 1) ≥ log (γ ?)r/a λ − log 2βn(l − 1) log (γ ?)r/a λ ≥ c− 3 c (A.7) where the last inequality follows from the fact that (γ ?)r/a λ ≥ n c(l − 1)c and β ≤ 2. Next, using the fact that log (γ ?)r/a λ ≥ 1 (2r/a−1)a/r , we note that( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)a/r ≤ ( 1 + ( 1 (2r/a − 1)a/r )−r/a)a/r ≤ 2 (A.8) Combining equation A.7 and equation A.8, we can conclude that ♣ = log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ ( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)−a/r ≥ c− 3 2c Finally, we note that if 1 + ( log (l−1)(γ ?)r/a λ )r/a is a sufficiently large constant that depends only on a/r (which can be achieved by choosing c sufficiently large), it will follow that ♥ ≤ 110 . Thus, if c ≥ 5, we can combine our bounds on ♣ and ♥ to get that γ′ ≥ γ ? 10βa/r B MISSING PROOF OF PROPOSITION 4.1 Proposition 4.1 follows simply from applying Corollary 1 of Neyshabur et al. (2014) to a hard-margin SVM problem. For completeness, we provide another proof here. 
The proof of Proposition 4.1 will consist of two steps: first, show that equation 4.2 has an optimal solution with sparsity n, and second, show that sparse solutions to equation 4.2 can be mapped to a neural network with the same margin, and vice versa. The following lemma and proof are based on Lemma 14 of Tibshirani (2013). Lemma B.1. Let supp(α) , {ū : |α(ū)| > 0}. There exists an optimal solution α? to equation 4.2 with |supp(α?)| ≤ n. For the proof of this lemma, we find it convenient to work with a minimum norm formulation which we show is equivalent to equation 4.2: min α ‖α‖1 subject to yi〈α,ϕ(xi)〉 ≥ 1 ∀i (B.1) Claim B.2. Let S ⊂ L1(Sd−1) be the set of optimizers for equation 4.2, and let S′ ⊂ L1(Sd−1) be the set of optimizers for equation B.1. If equation B.1 is feasible, for any α ∈ S, αγ`1 ∈ S ′, and for any α′ ∈ S′, α ′ ‖α′‖1 ∈ S. Proof. Let opt′ denote the optimal objective for equation B.1. We note that α ′ ‖α′‖1 is feasible for equation 4.2 with objective 1opt′ , and therefore γ`1 ≥ 1 opt′ . Furthermore, 1 2γ`1 yi ∫ ū∈Sd−1 α(ū)φ(ū >xi)dū ≥ 1 ∀i, and so αγ`1 is feasible for equation B.1 with objective 1 γ`1 . Therefore, opt′ ≤ 1γ`1 . As a result, it must hold that opt ′ = 1γ`1 , which means that α ′ ‖α′‖1 is optimal for equation 4.2, and αγ`1 is optimal for equation B.1, as desired. First, note that if equation B.1 is not feasible, then γ`1 = 0 and equation 4.2 has a trivial sparse solution, the all zeros function. Thus, it suffices to show that an optimal solution to equation B.1 exists that is n-sparse, since by Lemma B.2 equation B.1 and equation 4.2 have equivalent solutions up to a scaling. We begin by taking the dual of equation B.1. Claim B.3. The dual of equation B.1 has form max λ∈Rn λ>~1 subject to ∣∣∣∣∣ n∑ i=1 λiyiφ(ū >xi) ∣∣∣∣∣ ≤ 1 ∀ū ∈ Sd−1 λi ≥ 0 For any primal optimal solution α? and dual optimal solution λ?, it must hold that n∑ i=1 λ?i yiφ(ū >xi) = sign(α ?(ū)) ⇐⇒ α?(ū) 6= 0 (B.2) Proof. The dual form can be solved for by computation. By strong duality, equation B.2 must follow from the KKT conditions. Now define the mapping v : Sd−1 → Rn with vi(ū) , yiφ(ū>xi). We will show a general result about linearly dependent v(ū) for ū ∈ supp(α?), after which we can reduce directly to the proof of Tibshirani (2013). Claim B.4. Let α? be any optimal solution. Suppose that there exists S ⊆ supp(α?) such that {v(ū) : ū ∈ S} forms a linearly dependent set, i.e.∑ ū∈S cūv(ū) = ~0 (B.3) for coefficients c. Then ∑ ū∈S cū sign(α ?(ū)) = 0. Proof. Let λ? be any dual optimal solution, then λ?>v(ū) = sign(α?(ū)) ∀ū ∈ supp(α?) by Claim B.3. Thus, we apply λ?> to both sides of equation B.3 to get the desired statement. Proof of Lemma B.1. The rest of the proof follows Lemma 14 in Tibshirani (2013). The lemma argues that if the conclusion of Claim B.4 holds and an optimal solution α? has S ⊆ supp(α?) with {v(ū) : ū ∈ S} linearly dependent, we can construct a new α′ with ‖α′‖1 = ‖α?‖1 and supp(α′) ⊂ supp(α?) (where the inclusion is strict). Thus, if we consider an optimal α? with minimal support, it must follow that {v(ū) : ū ∈ supp(α?)} is a linearly independent set, and therefore |supp(α?)| ≤ n. We can now complete the proof of Proposition 4.1. Proof of Proposition 4.1. For ease of notation, we will parametrize a two-layer network with m units by top layer weights w1, . . . , wm ∈ R and bottom layer weights u1, . . . , um ∈ Rd. 
As before, we use Θ to refer to the collection of parameters, so the network computes the real-valued function f(Θ;x) = m∑ j=1 wjφ(u > j x) Note that we simply renamed the variables from the parametrization of equation 3.1. We first apply Lemma B.1 to conclude that equation 4.2 admits a n-sparse optimal solution α?. Because of sparsity, we can now abuse notation and treat α? as a real-valued function such that∑ ū∈supp(α?) |α?(ū)| ≤ 1. We construct Θ corresponding to a two-layer network with m ≥ n hidden units and normalized margin at least γ`12 . For clarity, we let W correspond to the top layer weights and U correspond to the bottom layer weights. For every ū ∈ supp(α), we let Θ have a corresponding hidden unit j with (wj , uj) = ( sign(α?(ū)) √ |α?(ū)| 2 , √ |α?(ū)| 2 ū ) , and set the remaining hidden units to ~0. This is possible because m ≥ n. Now f(Θ;x) = m∑ j=1 wjφ(u > j x) = 1 2 ∑ ū∈supp(α?) α?(ū)φ(ū>x) Furthermore, ‖Θ‖22 = m∑ j=1 w2j + ‖uj‖22 = ∑ ū∈supp(α) |α?(ū)| 2 + |α?(ū)| 2 ‖ū‖22 = ∑ ū∈supp(α) |α?(ū)| ≤ 1 Thus it follows that Θ has normalized margin at least γ`1/2, so γ ?,m ≥ γ`1/2. To conclude, we show that γ?,m ≤ γ`1/2. Let Θ?,m denote the parameters obtaining optimal m-unit margin γ?,m with hidden units (w?,mj , u ?,m j ) for j ∈ [m]. We can construct α to put a scaled delta mass of 2w?,mj ‖u ?,m j ‖2 on ū ?,m j for j ∈ [m]. It follows that ‖α‖1 = m∑ j=1 2|w?,mj |‖u ?,m j ‖2 ≤ m∑ j=1 w?,mj 2 + ‖u?,mj ‖ 2 2 = ‖Θ?,m‖22 ≤ 1 Furthermore, ∫ Sd−1 α(ū)φ(ū>x) = 2 m∑ j=1 w?,mj ‖u ?,m j ‖2φ((ū ?,m j ) >x) = 2 m∑ j=1 w?,mj φ(u ?,m j > x) = 2f(Θ?,m;x) Thus, α is a feasible solution to equation 4.2 with objective value at least 2γ?,m. Therefore, γ`1 ≥ 2γ?,m, so γ?,m = γ`1/2. C RADEMACHER COMPLEXITY AND GENERALIZATION ERROR We prove the generalization error bounds stated in Proposition 3.1 and Lemma 4.2 via Rademacher complexity and margin theory. Assume that our data X,Y are drawn i.i.d. from ground truth distribution pdata supported on X × Y . For some hypothesis classF of real-valued functions, we define the empirical Rademacher complexity R̂(F) as follows: R̂(F) , 1 n E i [ sup f∈F n∑ i=1 if(xi) ] where i are independent Rademacher random variables. For a classifier f , following the notation of Section 3 we will use L(f) , Pr(x,y)∼pdata(yf(x) ≤ 0) to denote the population 0-1 loss of the classifier f . The following classical theorem (Koltchinskii et al., 2002), (Kakade et al., 2009) bounds generalization error in terms of the Rademacher complexity and margin loss. Theorem C.1 (Theorem 2 of Kakade et al. (2009)). Let (xi, yi)ni=1 be drawn iid from pdata. We work in the binary classification setting, so Y = {−1, 1}. Assume that for all f ∈ F , we have supx∈X f(x) ≤ C. Then with probability at least 1− δ over the random draws of the data, for every γ > 0 and f ∈ F , L(f) ≤ 1 n n∑ i=1 1(yif(xi) < γ) + 4R̂(F) γ + √ log log2 4C γ n + √ log(1/δ) 2n C.1 PROOF OF PROPOSITION 3.1 We will prove Proposition 3.1 by applying the Rademacher complexity bounds of Golowich et al. (2017) with Theorem C.1. First, we show the following lemma bounding the generalization of neural networks whose weight matrices have bounded Frobenius norms. Lemma C.2. Define the hypothesis class FK over depth-K neural networks by FK = { f(Θ; ·) : ‖Wj‖F ≤ 1√ K ∀j } Let C , supx∈X ‖x‖2. Recall that L(Θ) denotes the 0-1 population loss L(f(Θ; ·)). Then for any f(Θ; ·) ∈ FK classifying the training data correctly with unnormalized margin γΘ , mini yif(Θ;xi) > 0, with probability at least 1− δ, L(Θ) . 
C γΘK(K−1)/2 √ n + √ log log2 4C γΘ n + √ log(1/δ) n (C.1) Note the dependence on the unnormalized margin rather than the normalized margin. Proof. We first claim that supf(Θ;·)∈FK supx∈X f(Θ;x) ≤ C. To see this, for any f(Θ; ·) ∈ FK , f(Θ;x) = WKφ(· · ·φ(W1x) · · · ) ≤ ‖WK‖F ‖φ(WK−1φ(· · ·φ(W1x) · · · )‖2 ≤ ‖WK‖F ‖WK−1φ(· · ·φ(W1x) · · · )‖2 (since φ is 1-Lipschitz and φ(0) = 0, so φ performs a contraction) < ‖x‖2 ≤ C (repeatedly applying this argument and using ‖Wj‖F < 1) Furthermore, by Theorem 1 of Golowich et al. (2017), R̂(FK) has upper bound R̂(FK) . C K(K−1)/2 √ n Thus, we can apply Theorem C.1 to conclude that for all f(Θ; ·) ∈ FK and all γ > 0, with probability 1− δ, L(Θ) . 1 n n∑ i=1 1(yif(Θ;xi) < γ) + C γK(K−1)/2 √ n + √ log log2 4C γ n + √ log(1/δ) n In particular, by definition choosing γ = γΘ makes the first term on the LHS vanish and gives the statement of the lemma. Proof of Proposition 3.1. Given parameters Θ = (W1, . . . ,WK), we first construct parameters Θ̃ = (W̃1, . . . , W̃K) such that f(Θ̄; ·) and f(Θ̃; ·) compute the same function, and ‖W̃1‖2F = ‖W̃2‖2F = · · · = ‖W̃K‖2F ≤ 1K . To do this, we set W̃j = ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F Wj By construction ‖W̃j‖2F = ( ∏K k=1 ‖Wk‖2F )1/k ‖Θ‖2F = ( ∏K k=1 ‖Wk‖2F )1/k∑K k=1 ‖Wk‖2F ≤ 1 k (by the AM-GM inequality) Furthermore, we also have f(Θ̃;x) = W̃Kφ(· · ·φ(W̃1x) · · · ) = K∏ j=1 ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F WKφ(· · ·φ(W1x) · · · ) (by the homogeneity of φ) = 1 ‖Θ‖KF f(Θ;x) = f ( Θ ‖Θ‖F ;x ) (since f is K-homogeneous in Θ) = f(Θ̄;x) Now we note that by construction, L(Θ) = L(Θ̃). Now f(Θ̃; ·) must also classify the training data perfectly, has unnormalized margin γ, and furthermore f(Θ̃; ·) ∈ FK . As a result, Lemma C.2 allows us to conclude the desired statement. To conclude Corollary 3.2, we apply the above on Θλ,M and use Theorem A.3. C.2 PROOF OF KERNEL GENERALIZATION BOUNDS Let F2,φB denote the class of `2-bounded linear functionals in lifted feature space: F 2,φ B , {x 7→ 〈α,ϕ(x)〉 : α ∈ L2(Sd−1), ‖α‖2 ≤ B}. We abuse notation and write α ∈ F2,φB to indicate a linear functional from F2,φB . As before, we will use L(α) to indicate the 0-1 population loss of the classifier x 7→ 〈α,ϕ(x)〉 and let C , supx∈X ‖x‖2 be an upper bound on the norm of the data. We focus on analyzing the Rademacher complexity R̂(F2,φB ), mirroring derivations done in the past (Bartlett & Mendelson, 2002). We include our derivations here for completeness. Lemma C.3. R̂(F2,φB ) ≤ 1 nB √∑n i=1 ‖ϕ(xi)‖22. Proof. We write R̂(F2,φB ) = 1 n E i [ sup α∈F2,φB 〈α, n∑ i=1 iϕ(xi)〉 ] ≤ 1 n E i [ sup α∈F2,φB ‖α‖2 ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B · E i [∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B √√√√√E i ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 2 (via Jensen’s inequality) ≤ 1 n B √√√√√E i n∑ i=1 n∑ j=1 i j〈ϕ(xi), ϕ(xi)〉 ≤ 1 n B √√√√ n∑ i=1 ‖ϕ(xi)‖22 (terms where i 6= j cancel out) As an example, we can apply this bound to relu features: Corollary C.4. Suppose that φ is the relu activation. Let κ , Vol(Sd−1). Then R̂(F2,φB ) . B‖X‖F √ κ n √ d ≤ BC √ κ√ dn . Proof. We first show that ‖ϕ(xi)‖22 = Θ ( κ d‖xi‖ 2 2 ) . We can compute ‖ϕ(xi)‖22 = Vol(Sd−1)Eū∼Sd−1 [relu(ū>xi)2] = κ d Eū∼Sd−1 [relu( √ dū>xi) 2] = κ d 1 M2 Eu∼N (0,Id×d)[relu(u Txi) 2] (M2 is the second moment of N (0, 1)) = Θ (κ d ‖xi‖22 ) (C.2) where the last line uses the computation provided in Lemma A.1 by Du et al. (2017). Now we plug this into Lemma C.3 to get the desired bound. We will now prove Lemma 4.2. Proof of Lemma 4.2. From equation C.2, we first obtain supx∈X ‖ϕ(x)‖2 . C √ κ d . 
Denote the optimizer for equation 4.3 by α`2 . Note that √ κα`2 ∈ F 2,φ 1 , and furthermore L(α`2) = L( √ κα`2). Since √ κα`2 has unnormalized margin √ κγ`2 , we apply Theorem C.1 on margin √ κγ`2 and hypothesis class F2,φ1 to get with probability 1− δ, L`2-svm = L( √ κα`2) ≤ 4R̂(F2,φ1 )√ κγ`2 + √ log log2 4 supx∈X ‖ϕ(x)‖2√ κγ`2 n + √ log(1/δ) 2n . C γ`2 √ dn + √√√√ log max{log2 C√dγ`2 , 2} n + √ log(1/δ) n (applying Corollary C.4) D MISSING PROOFS FOR COMPARISON TO KERNEL METHODS D.1 CLASSIFICATION In this section we will complete a proof of Theorem 4.3. Recall the construction of the distribution D provided in Section 4. We first provide a classifier of this data with small `1 norm. Lemma D.1. In the setting of Theorem 4.3, we have that γ`1 ≥ √ 2 4 . Proof. Consider the network f(x) = 14 ( (x>(e1 +e2)/ √ 2)+ +(x >(−e1−e2)/ √ 2)+− (x>(−e1 + e2)/ √ 2)+ − (x>(e1 − e2)/ √ 2)+ ) . The attained margin γ = √ 2 4 , so γ`1 ≥ √ 2 4 . Now we will upper bound the margin attainable by the `2 SVM. Lemma D.2 (Margin upper bound tool). In the setting of Theorem 4.3, we have γ`2 ≤ 1√ κ · ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 Proof. By the definition of γ`2 , we have that for any α with √ κ‖α‖2 ≤ 1, we have γ`2 ≤ max√ κ‖α‖2≤1 1 n n∑ i=1 〈α, yiϕ(xi)〉 Setting α = 1√ κ 1 n ∑n i=1 ϕ(xi)yi/‖ 1 n ∑n i=1 ϕ(xi)yi‖2 completes the proof. (Attentive readers may realize that this is equivalent to setting the dual variable of the convex program 4.3 to all 1’s function.) Lemma D.3. In the setting of Theorem 4.3, let (xi, yi)ni=1 be n i.i.d samples and corresponding labels from D. Let ϕ be defined in equation 4.1 with φ = relu. With high probability (at least 1− dn−10), we have ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 . √ κ/n log n+ √ κ/d Proof. Let Wi = ϕ(xi)yi. We will bound several quantities regarding Wi’s. In the rest of the proof, we will condition on the event E that ∀i, ‖xi‖22 . d log n. Note that E is a high probability event and conditioned on E, xi’s are still independent. We omit the condition on E in the rest of the proof for simplicity. We first show that assuming the following three inequalities that the conclusion of the Lemma follows. 1. ∀i, ‖Wi‖22 . κ log n . 2. σ2 , Var[ ∑ iWi] , ∑n i=1 E[‖Wi − EWi‖22] . nκ log n 3. ‖E [ ∑ Wi] ‖2 . √ κn/d. By bullets 1, 2, and Bernstein inequality, we have that with probability at least 1− dn−10 over the randomness of the data (X,Y ),∥∥∥∥∥ n∑ i=1 Wi − E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 . √ κ log1.5 n+ √ nκ log2 n . √ nκ log2 n By bullet 3 and equation above, we complete the proof with triangle inequality:∥∥∥∥∥ n∑ i=1 Wi ∥∥∥∥∥ 2 ≤ ∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 + √ nκ log2 n . √ nκ log2 n+ √ κn/d Therefore, it suffices to prove bullets 1, 2 and 3. Note that 2 is a direct corollary of 1 so we will only prove 1 and 3. We start with 3: By the definition of the `2 norm in L2(Sd−1) and the independence of (xi, yi)’s, we can rewrite∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 2 = κ · n2 E ū∼Sd−1 [ E (x,y)∼D ϕ(x)[ū] · y ]2 (D.1) Let ū = (ū1, . . . , ūd) and ū−2 = (ū3, . . . , ūd) ∈ Rd−2, and define τ
1. What are the main contributions of the paper regarding the normalized margin and its connection to the max margin under l_2 norm constraint on weights? 2. How does the proposed approach compare to other methods in terms of its ability to handle non-asymptotic convergence rates and generalization bounds? 3. What are some potential limitations or areas for improvement in the presented theory and its connections to prior works? 4. Can the authors provide more context or comparisons with existing literature regarding their results, particularly for Theorem 4.3 and its relation to [Mei Montanari and Nguyen 2018]? 5. How might the "sigma" term help achieve positive results in the perturbed Wasserstein flow, and could this be further emphasized or elaborated upon?
Review
Review The authors claim to prove three things: (1) Under logistic loss (with a vanishing regularization), the normalized margin (of the solution) converges to the max normalized margin, for positive homogenous functions. This is an asymptotic result: the amount of regularization vanishes. (2) For one hidden layer NN, the max margin under l_2 norm constraint on weights in the limit, is equivalent to the l_1 constraint (total variation) on the sign measure (specified by infinite neurons) for the one hidden layer NN. (3) Show some convergence rate for the mean-field view of one hidden layer NN, i.e., the Wasserstein gradient flow on the measure (of the neurons). The author show some positive result for a perturbed version. The problem is certainly interesting. However, my main concerns are: (1) the novelty of the main theorems given the literature, and (2) the carefulness of stating what is known in the literature review. In summary: 1. Theorem 2.1, Theorem 3.1, and Theorem 3.3 are anticipated, or not as critical, given the literature (detailed reasons in major comments). 2. The construction in Theorem 3.5 is nice, but, it is only able to say an upper bound of the generalization of kernel is not good (comparing upper bounds is not enough). In addition, For Theorem 4.3. [Mei Montanari and Nguyen 2018] also considers similar perturbed Wasserstein gradient flow, with many convergence results. One needs to be more careful in stating what is new. Major comments: 1. Theorem 3.3 (and Theorem 3.2) seems to be the most interesting/innovative one. However, I would like to argue that it might be natural in one line proof, with the following alternative view: -- l_2 norm constraint normalized margin, one hidden layer NN, with infinite neurons gamma^star, infty := \max \min_i y_i int_{neuron} w || u || ReLU( x_i \bar{u}) dS^{d-1} -- integral over normalized neurons over sphere under the constraint int_{neuron} (w^2 + ||u||^2) dS^{d-1} \leq 1 This is equivalent to the l_1 constraint margin (variation norm), one hidden layer NN, gamma_l_1 := \max \min_i y_i int_{neuron} rho(u) ReLU( x_i \bar{u}) dS^{d-1} -- integral over normalized neurons over sphere under the constraint int_{neuron} |rho(u)| dS^{d-1} \leq 1/2 here rho(u) is the sign measure represented by neurons. Simply because at the optimum w || u || = 1/2 ( w^2 + || u ||^2) := rho(u) therefore gamma^star, infty = gamma_l_1 So one see the factor 1/2 exactly. -- In addition, [Bach 18, JMLR:v18:14-546] discuss more in depth the l_1 type constraint (TV of sign measure) rather then l_2 type constraint (RKHS) for one hidden layer NN with infinite neurons. The authors should cite this work. It is clear that l_1(neuron) < l_2(neuron) therefore l_2 constraint margin is always smaller than l_1 constraint margin. 2. Theorem 2.1. I think the proof is almost a standard exercise given [Rosset, Zhu, and Hastie 04]. The observation for it generalizes to positive homogenous function beyond linear is a nice addition, but not crucial enough to stand out as an innovation. Much of the difficulty in related paper lies in achieving non asymptotic convergence rate to max margin solution, for logistic loss [Soudry, Hoffer and Srebro 18], or what happens when data is not perfectly separable [Ji and Telgarsky 18]. 3. Generalization result Theorem 3.1. Maybe it is better to state as a corollary, given the known results in the literature, in my opinion. 
This generalization is a standard result from available margin-based bounds [Koltchinskii and Panchenko 02, Bartlett and Mendelson 02]. In addition, the authors remark that the limit for (3.3) may not exist. You can change it to a limsup; your footnote[4] is essentially the limsup definition.

4. Theorem 3.5. This construction of the data distribution is the part I like. However, you should remind the reader that having a small margin for the kernel only implies that the upper bound for generalization is bad. Comparing upper bounds doesn't mean the kernel method is performing badly on the instance. From a logical standpoint, the benefit of Theorem 3.5 is unclear. I do agree one can try to see in simulation whether the kernel/RKHS approach (l_2) performs worse in generalization for one-hidden-layer NNs. But this is separate from the theory.

5. Theorem 4.3. This result should be put in the context of the literature. Specifically [Mei Montanari and Nguyen 2018], Eqn 11-12. The perturbed Wasserstein flow the authors considered looks very close to [Mei Montanari and Nguyen 2018], Eqn 11-12, admittedly with the logistic loss instead of the square loss. Right now, as stated in the current paper, it is very hard for the general audience to understand the contribution. A better job of comparing with the literature will help. For the technical crowd, maybe emphasize why the "sigma" term can help you achieve a positive result.

Minor Comments: 6. One additional suggestion: it seems to me Section 4 is a bit removed from the central topic of the current paper. I can understand that the optimization/convergence result helps complete the whole picture. However, to contribute to the "margin theme", it would be better to state how the "small vanishing regularization" affects the convergence in Theorem 4.3. Even with this, it is unclear how to connect the different parts of the paper: what choice of vanishing regularization will generate a solution with a good margin, using the Wasserstein gradient flow.
ICLR
Title On the Margin Theory of Feedforward Neural Networks Abstract Past works have shown that, somewhat surprisingly, over-parametrization can help generalization in neural networks. Towards explaining this phenomenon, we adopt a margin-based perspective. We establish: 1) for multi-layer feedforward relu networks, the global minimizer of a weakly-regularized cross-entropy loss has the maximum normalized margin among all networks, 2) as a result, increasing the over-parametrization improves the normalized margin and generalization error bounds for deep networks. In the case of two-layer networks, an infinite-width neural network enjoys the best generalization guarantees. The typical infinite feature methods are kernel methods; we compare the neural net margin with that of kernel methods and construct natural instances where kernel methods have much weaker generalization guarantees. We validate this gap between the two approaches empirically. Finally, this infinite-neuron viewpoint is also fruitful for analyzing optimization. We show that a perturbed gradient flow on infinite-size networks finds a global optimizer in polynomial time. 1 INTRODUCTION In deep learning, over-parametrization refers to the widely-adopted technique of using more parameters than necessary (Krizhevsky et al., 2012; Livni et al., 2014). Both computationally and statistically, over-parametrization is crucial for learning neural nets. Controlled experiments demonstrate that over-parametrization eases optimization by smoothing the non-convex loss surface (Livni et al., 2014; Sagun et al., 2017). Statistically, increasing model size without any regularization still improves generalization even after the model interpolates the data perfectly (Neyshabur et al., 2017b). This is surprising given the conventional wisdom on the trade-off between model capacity and generalization. In the absence of an explicit regularizer, algorithmic regularization is likely the key contributor to good generalization. Recent works have shown that gradient descent finds the minimum norm solution fitting the data for problems including logistic regression, linearized neural networks, and matrix factorization (Soudry et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a; Ji & Telgarsky, 2018). Many of these proofs require a delicate analysis of the algorithm’s dynamics, and some are not fully rigorous due to assumptions on the iterates. To the best of our knowledge, it is an open question to prove analogous results for even two-layer relu networks. (For example, the technique of Li et al. (2018) on two-layer neural nets with quadratic activations still falls within the realm of linear algebraic tools, which apparently do not suffice for other activations.) We propose a different route towards understanding generalization: making the regularization explicit. The motivations are: 1) with an explicit regularizer, we can analyze generalization without fully understanding optimization; 2) it is unknown whether gradient descent provides additional implicit regularization beyond what `2 regularization already offers; 3) on the other hand, with a sufficiently weak `2 regularizer, we can prove stronger results that apply to multi-layer relu networks. Additionally, explicit regularization is perhaps more relevant because `2 regularization is typically used in practice. Concretely, we add a norm-based regularizer to the cross entropy loss of a multi-layer feedforward neural network with relu activations. 
We show that the global minimizer of the regularized objective achieves the maximum normalized margin among all the models with the same architecture, if the regularizer is sufficiently weak (Theorem 2.1). Informally, for models with norm 1 that perfectly classify the data, the margin is the smallest difference across all datapoints between the classifier score for the true label and the next best score. We are interested in normalized margin because its inverse bounds the generalization error (see recent work (Bartlett et al., 2017; Neyshabur et al., 2017a; 2018; Golowich et al., 2017) or Proposition 3.1). Our work explains why optimizing the training loss can lead to parameters with a large margin and thus, better generalization error (see Corollary 3.2). We further note that the maximum possible margin is non-decreasing in the width of the architecture, and therefore the generalization bound of Corollary 3.2 can only improve as the size of the network grows (see Theorem 3.3). Thus, even if the dataset is already separable, it could still be useful to increase the width to achieve larger margin and better generalization. At a first glance, it might seem counterintuitive that decreasing the regularizer is the right approach. At a high level, we show that the regularizer only serves as a tiebreaker to steer the model towards choosing the largest normalized margin. Our proofs are simple, oblivious to the optimization procedure, and apply to any norm-based regularizer. We also show that an exact global minimum is unnecessary: if we approximate the minimum loss within a constant factor, we obtain the max-margin within a constant factor (Theorem 2.2). To better understand the neural network max-margin, in Section 4 we compare the max-margin two-layer network obtained by optimizing both layers jointly to kernel methods corresponding to fixing random weights for the hidden layer and solving a 2-norm max-margin on the top layer. We design a simple data distribution (Figure 1) where neural net margin is large but the kernel margin is small. This translates to an Ω( √ d) factor gap between the generalization error bounds for the two approaches and demonstrates the power of neural nets compared to kernel methods. We experimentally confirm that a gap does indeed exist. In the setting of two-layer networks, we also study how over-parametrization helps optimization. Prior works (Mei et al., 2018; Chizat & Bach, 2018; Sirignano & Spiliopoulos, 2018; Rotskoff & Vanden-Eijnden, 2018) show that gradient descent on two-layer networks becomes Wasserstein gradient flow over parameter distributions in the limit of infinite neurons. For this setting, we prove that perturbed Wasserstein gradient flow finds a global optimizer in polynomial time. Finally, we empirically validate several claims made in this paper. First, we confirm that neural networks do generalize better than kernel methods. Second, we show that for two-layer networks, the test error decreases and margin increases as the hidden layer grows, as predicted by our theory. 1.1 ADDITIONAL RELATED WORK Zhang et al. (2016) and Neyshabur et al. (2017b) show that neural network generalization defies conventional explanations and requires new ones. Neyshabur et al. (2014) initiate the search for the “inductive bias” of neural networks towards solutions with good generalization. Recent papers (Hardt et al., 2015; Brutzkus et al., 2017; Chaudhari et al., 2016) study inductive bias through training time and sharpness of local minima. Neyshabur et al. 
(2015a) propose a new steepest descent algorithm in a geometry invariant to weight rescaling and show that this improves generalization. Morcos et al. (2018) relate generalization in deep nets to the number of “directions” in the neurons. Other papers (Gunasekar et al., 2017; Soudry et al., 2018; Nacson et al., 2018; Gunasekar et al., 2018b; Li et al., 2018; Gunasekar et al., 2018a) study implicit regularization towards a specific solution. Ma et al. (2017) show that implicit regularization can help gradient descent avoid overshooting optima. Rosset et al. (2004a;b) study logistic regression with a weak regularization and show convergence to the max margin solution. We adopt their techniques and extend their results. A line of work initiated by Neyshabur et al. (2015b) has focused on deriving tighter norm-based Rademacher complexity bounds for deep neural networks (Bartlett et al., 2017; Neyshabur et al., 2017a; Golowich et al., 2017) and new compression based generalization properties (Arora et al., 2018b). Dziugaite & Roy (2017) manage to compute non-vacuous generalization bounds from PAC-Bayes bounds. Neyshabur et al. (2018) investigate the Rademacher complexity of two-layer networks and propose a bound that is decreasing with the distance to initialization. Liang & Rakhlin (2018) and Belkin et al. (2018) study the generalization of kernel methods. On the optimization side, Soudry & Carmon (2016) explain why over-parametrization can remove bad local minima. Safran & Shamir (2016) show that over-parametrization can improve the quality of the random initialization. Haeffele & Vidal (2015), Nguyen & Hein (2017), and Venturi et al. (2018) show that for sufficiently overparametrized networks, all local minima are global, but do not show how to find these minima via gradient descent. Du & Lee (2018) show that for two-layer networks with quadratic activations, all second-order stationary points are global minimizers. Arora et al. (2018a) interpret over-parametrization as a means of implicit acceleration during optimization. Mei et al. (2018), Chizat & Bach (2018), and Sirignano & Spiliopoulos (2018) take a distributional view of over-parametrized networks. Chizat & Bach (2018) show that Wasserstein gradient flow converges to global optimizers under structural assumptions. We extend this to a polynomial-time result. 1.2 NOTATION Let R denote the set of real numbers. We will use ‖·‖ to indicate a general norm, with ‖·‖1, ‖·‖2, ‖·‖∞ denoting the `1, `2, `∞ norms on finite dimensional vectors, respectively, and ‖ · ‖F denoting the Frobenius norm on a matrix. In general, we use ¯ on top of a symbol to denote a unit vector: when applicable, ū , u/‖u‖, where the norm ‖ · ‖ will be clear from context. Let Sd−1 , {ū ∈ Rd : ‖ū‖2 = 1} be the unit sphere in d dimensions. Let Lp(Sd−1) be the space of functions on Sd−1 for which the p-th power of the absolute value is Lebesgue integrable. For α ∈ Lp(Sd−1), we overload notation and write ‖α‖p , (∫ Sd−1 |α(ū)| pdū )1/p . Additionally, for α1 ∈ L1(Sd−1) and α2 ∈ L∞(Sd−1) or α1, α2 ∈ L2(Sd−1), we can define 〈α1, α2〉 , ∫ Sd−1 α1(ū)α2(ū)dū < ∞. Furthermore, we will use Vol(Sd−1) , ∫ Sd−1 1dū. Throughout this paper, we reserve the symbol X = [x1, . . . , xn] to denote the collection of datapoints (as a matrix), and Y = [y1, . . . , yn] to denote labels. We use d to denote the dimension of our data. We often use Θ to denote the parameters of a prediction function f , and f(Θ;x) to denote the prediction of f on datapoint x. 
We will use the notation .,& to mean less than or greater than up to a universal constant, respectively. Unless stated otherwise, O(·),Ω(·) denote some universal constant in upper and lower bounds, respectively. The notation poly denotes a universal constant-degree polynomial in the arguments. 2 WEAK REGULARIZER GUARANTEES MAX MARGIN SOLUTIONS In this section, we will show that when we add a weak regularizer to cross-entropy loss with a positive-homogeneous prediction function, the normalized margin of the optimum converges to some max-margin solution. As a concrete example, feedforward relu networks are positive-homogeneous. Let l be the number of labels, so the i-th example has label yi ∈ [l]. We work with a family F of prediction functions f(Θ; ·) : Rd → Rl that are a-positive-homogeneous in their parameters for some a > 0: f(cΘ;x) = caf(Θ;x),∀c > 0. We additionally require that f is continuous in Θ. For some general norm ‖ · ‖, we study the λ-regularized cross-entropy loss Lλ, defined as Lλ(Θ) , n∑ i=1 − log exp(fyi(Θ;xi))∑l j=1 exp(fj(Θ;xi)) + λ‖Θ‖r (2.1) for fixed r > 0. Let Θλ ∈ arg minLλ(Θ).1 We define the normalized margin of Θλ as: γλ , min i ( fyi(Θ̄λ;xi)−max j 6=yi fj(Θ̄λ;xi) ) (2.2) Define the ‖ · ‖-max normalized margin as γ? , max ‖Θ‖≤1 [ min i ( fyi(Θ;xi)−max j 6=yi fj(Θ;xi) )] and let Θ? be a parameter achieving this maximum. We show that with sufficiently small regularization level λ, the normalized margin γλ approaches the maximum margin γ?. Our theorem and proof are inspired by the result of Rosset et al. (2004a;b), who analyze the special case when f is a linear predictor. In contrast, our result can be applied to non-linear f as long as f is homogeneous. Theorem 2.1. Assume the training data is separable by a network f(Θ?; ·) ∈ F with an optimal normalized margin γ? > 0. Then, the normalized margin of the global optimum of the weaklyregularized objective (equation 2.1) converges to γ? as the strength of the regularizer goes to zero. Mathematically, let γλ be defined in equation 2.2. Then γλ → γ? as λ→ 0 1We formally show that Lλ has a minimizer in Claim A.1 of Section A. An intuitive explanation for our result is as follows: because of the homogeneity, the loss L(Θλ) roughly satisfies the following (for small λ, and ignoring problem parameters such as n): Lλ(Θλ) ≈ exp(−‖Θλ‖aγλ) + λ‖Θλ‖r Thus, the loss selects parameters with larger margin, while the regularization favors parameters with a smaller norm. The full proof of the theorem is deferred to Section A.1. Theorem 2.1 applies to feedforward relu networks and states that global minimizers of the weaklyregularized loss will obtain a maximum margin among all networks of the given architecture. By considering global minimizers, Theorem 2.1 provides a framework for directly analyzing generalization properties of the solution without considering details of the optimization algorithm. In Section 3 we leverage this framework and existing generalization bounds (Golowich et al., 2017) to provide a clean argument that over-parameterization can improve generalization. We can also provide an analogue of Theorem 2.1 for the binary classification setting. For this setting, our prediction is now a single real output and we train using logistic loss. We provide formal definitions and results in Section A.2. Our study of the generalization properties of the max-margin (see Section 3 and Section 4) is based in this setting. 
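To make the objects in Theorem 2.1 concrete, the sketch below evaluates the λ-regularized cross-entropy loss $L_\lambda$ and the normalized margin for a two-layer relu network, which is 2-positive-homogeneous in its parameters. This is only an illustrative computation, not the paper's code; the layer sizes, the random data, and the choice of the Frobenius norm for $\|\Theta\|$ are assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def two_layer_net(W1, W2, X):
    # f(Theta; x) = W2 relu(W1 x); 2-positive-homogeneous in Theta = (W1, W2)
    return np.maximum(W1 @ X, 0.0).T @ W2.T          # shape (n, l): one score per label

def regularized_cross_entropy(W1, W2, X, y, lam, r=2):
    # L_lambda(Theta) = sum_i -log softmax_{y_i}(f(Theta; x_i)) + lam * ||Theta||^r
    scores = two_layer_net(W1, W2, X)
    scores -= scores.max(axis=1, keepdims=True)      # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(y)), y].sum()
    theta_norm = np.sqrt(np.sum(W1**2) + np.sum(W2**2))   # Frobenius norm of all parameters
    return ce + lam * theta_norm**r

def normalized_margin(W1, W2, X, y, a=2):
    # gamma = min_i [ f_{y_i}(bar-Theta; x_i) - max_{j != y_i} f_j(bar-Theta; x_i) ];
    # by a-homogeneity this equals the unnormalized margin divided by ||Theta||^a
    theta_norm = np.sqrt(np.sum(W1**2) + np.sum(W2**2))
    scores = two_layer_net(W1, W2, X) / theta_norm**a
    true = scores[np.arange(len(y)), y]
    scores[np.arange(len(y)), y] = -np.inf
    return (true - scores.max(axis=1)).min()

W1, W2 = rng.standard_normal((16, 8)), rng.standard_normal((3, 16))
X, y = rng.standard_normal((8, 32)), rng.integers(0, 3, size=32)
print(regularized_cross_entropy(W1, W2, X, y, lam=1e-4))
print(normalized_margin(W1, W2, X, y))
```

Minimizing this objective while driving `lam` toward zero is exactly the regime in which Theorem 2.1 predicts the normalized margin approaches the maximum margin γ?.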
2.1 OPTIMIZATION ACCURACY Since Lλ is typically hard to optimize exactly for neural nets, we study how accurately we need to optimize Lλ to obtain a margin that approximates γ? up to a constant. The following theorem shows that it suffices to find Θ′ achieving a constant factor multiplicative approximation of Lλ(Θλ), where λ is some sufficiently small polynomial in n, l, γ?. Though our theorem is stated for the general multi-class setting, it also applies for binary classification. We provide the proof in Section A.3. Theorem 2.2. In the setting of Theorem 2.1, suppose that we choose λ = exp(−(2r/a − 1)−a/r) (γ ?)r/a nc(l − 1)c for sufficiently large c (that only depends on r/a). For β ≤ 2, let Θ′ denote a β-approximate minimizer of Lλ, so Lλ(Θ′) ≤ βLλ(Θλ). Denote the normalized margin of Θ′ by γ′. Then γ′ ≥ γ ? 10 · βa/r . 3 GENERALIZATION PROPERTIES OF A MAXIMUM MARGIN NEURAL NETWORK In Section 2 we showed that optimizing a weakly-regularized logistic loss leads to the maximum normalized margin. We now study the direct implications of this result on the generalization properties of the solution. Specifically, we use existing Rademacher complexity bounds of Golowich et al. (2017) to present a generalization bound that depends on the network architecture only through the inverse `2-normalized margin and depth of the network (see Proposition 3.1). Next, we combine this bound with Theorem 2.1 to conclude that parameters obtained by optimizing logistic loss with weak `2-regularization will have a generalization bound that scales with the inverse of the maximum possible margin and depth. Finally, we note that the maximum possible margin can only increase as the size of the network grows, which suggests that increasing the size of the network improves the generalization of the solution (see Theorem 3.3). We consider depth-K neural networks with 1-Lipschitz, 1-positive-homogeneous activation φ for K ≥ 2. Suppose that the collection of parameters Θ is given by matrices W1, . . . ,WK . The K-layer network will compute a real-valued score f(Θ;x) ,WKφ(WK−1φ(· · ·φ(W1x) · · · )) (3.1) where we overload notation to let φ(·) denote the element-wise application of the activation φ. Let mi denote the size of the i-th hidden layer, so W1 ∈ Rm1×d,W2 ∈ Rm2×m1 , · · · ,WK ∈ R1×mK−1 . We will letM , (m1, . . . ,mK−1) denote the sequence of hidden layer sizes. We will focus on `2-regularized loss. The weakly-regularized logistic loss of the depth-K architecture with hidden layer sizesM is therefore Lλ,M(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖2F (3.2) We note that f is K-homogeneous in Θ, so the results of Section 2 apply to Lλ,M.2 Following our conventions from Section 2, we denote the optimizer of Lλ,M by Θλ,M, the normalized margin of Θλ,M by γλ,M, the max-margin solution by Θ?,M, and the max-margin by γ?,M. Our notation emphasizes the architecture of the network. Since the classifier f now predicts a single real value, we need to redefine γλ,M , min i yif(Θ̄λ,M;xi) γ?,M , max ‖Θ‖2≤1 min i yif(Θ;xi) When the data is not separable by a neural network with architectureM, we define γ?,M to be zero. Recall that X = [x1, . . . , xn] denotes the matrix with all the data points as columns, and Y = [y1, . . . , yn] denotes the labels. We sample X and Y i.i.d. from the data generating distribution pdata, which is supported on X × {−1,+1}. 
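As a sanity check on the setup above, the following sketch builds the depth-K predictor of equation 3.1, numerically verifies the K-homogeneity f(cΘ;x) = c^K f(Θ;x) that the results of Section 2 rely on, and evaluates the binary normalized margin min_i y_i f(Θ̄;x_i). The depth, widths, and random data are placeholder choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def deep_relu_net(Ws, x):
    # f(Theta; x) = W_K relu(W_{K-1} relu(... relu(W_1 x) ...)), a real-valued score
    h = x
    for W in Ws[:-1]:
        h = np.maximum(W @ h, 0.0)
    return (Ws[-1] @ h).item()

def binary_normalized_margin(Ws, X, y):
    # min_i y_i f(bar-Theta; x_i); by K-homogeneity f(bar-Theta; x) = f(Theta; x) / ||Theta||_F^K
    K = len(Ws)
    norm = np.sqrt(sum(np.sum(W**2) for W in Ws))
    return min(yi * deep_relu_net(Ws, xi) for xi, yi in zip(X, y)) / norm**K

# A random depth-3 network (K = 3) on 5-dimensional inputs.
d, sizes = 5, [4, 4, 1]
Ws, m_prev = [], d
for m in sizes:
    Ws.append(rng.standard_normal((m, m_prev)))
    m_prev = m

x, c = rng.standard_normal(d), 1.7
lhs = deep_relu_net([c * W for W in Ws], x)
rhs = c ** len(Ws) * deep_relu_net(Ws, x)
print(np.isclose(lhs, rhs))                      # True: f(c*Theta; x) = c^K f(Theta; x)

X, y = rng.standard_normal((6, d)), rng.choice([-1, 1], size=6)
print(binary_normalized_margin(Ws, X, y))        # negative for a random, unfitted network
```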
We can define the population 0-1 loss and training 0-1 loss of the network parametrized by Θ by L(Θ) = Pr (x,y)∼pdata [yf(Θ;x) ≤ 0] Let C , supx∈X ‖x‖2 be an upper bound on the norm of a single datapoint. Proposition 3.1 shows that the generalization error only depends on the parameters through the inverse of the margin on the training data. We obtain Proposition 3.1 by applying Theorem 1 of Golowich et al. (2017) with the standard technique of using margin loss to bound classification error. There exist other generalization bounds which depend on the margin and some normalization (Neyshabur et al., 2015b; 2017a; Bartlett et al., 2017; Neyshabur et al., 2018); we choose the bounds of Golowich et al. (2017) because they fit well with `2 normalization. In the two-layer case K = 2, the bound below also follows from Neyshabur et al. (2015b). Proposition 3.1. [Straightforward consequence of Golowich et al. (2017, Theorem 1)] Suppose φ is 1-Lipschitz and 1-positive-homogeneous. For any depth-K network f(Θ; ·) separating the data with normalized margin γ , mini yif(Θ̄;xi) > 0, with probability at least 1− δ over the draw of X,Y , L(Θ) . C γK(K−1)/2 √ n + (γ) (3.3) where (γ) , √ log log2 4C γ n + √ log(1/δ) n . Note that (γ) is typically small, and thus the above bound mainly scales with C γK(K−1)/2 √ n . 3 For completeness, we state the proof in Section C.1. By combining this bound with our Theorem 2.1 we can conclude that optimizing weakly-regularized logistic loss gives us generalization error bounds that depend on the maximum possible margin of a network with the given architecture. Corollary 3.2. In the setting of Proposition 3.1, with probability 1− δ, lim sup λ→0 L(Θλ,M) . C γ?,MK(K−1)/2 √ n + (γ?,M) (3.4) where (γ) is defined as in Proposition 3.1. Above we implicitly assume γ?,M > 0, since otherwise the right hand side of the bound is vacuous. 2Although Theorem 2.1 is written in the language of multi-class prediction where the classifier outputs l ≥ 2 scores, the results translate to single-output binary classification. See Section A.2. 3Although the 1 K(K−1)/2 factor of equation 3.3 decreases with depth K, the margin γ will also tend to decrease as the constraint ‖Θ̄‖F ≤ 1 becomes more stringent. By applying Theorem 2.2 with Proposition 3.1, we can also conclude that optimizing Lλ,M within a constant factor gives a margin, and therefore generalization bound, approximating the best possible. One consequence of Corollary 3.2 is that optimizing weakly-regularized logistic loss results in the best possible generalization bound out of all models with the given architecture. This indicates that the widely used algorithm of optimizing deep networks with `2-regularized logistic loss has an implicit bias towards solutions with good generalization. Next, we observe that the maximum normalized margin is non-decreasing with the size of the architecture. Formally, for two depth-K architecturesM = (m1, . . . ,mK−1) andM′ = (m′1, . . . ,m′K−1), we sayM ≤ M′ if mi ≤ m′i ∀i = 1, . . .K − 1. Theorem 3.3 states that ifM ≤ M′, then the max-margin over networks with architecture M′ is at least the max-margin over networks with architectureM. Theorem 3.3. Recall that γ?,M denotes the maximum normalized margin of a network with architectureM. IfM≤M′, we have γ?,M ≤ γ?,M′ . As a important consequence, the generalization error bound of Corollary 3.2 forM′ is at least as good as that forM. 
This theorem is simple to prove and follows because we can directly implement any network of architectureM using one of architectureM′, ifM ≤M′. This can explain why additional overparameterization has been empirically observed to improve generalization in two-layer networks (Neyshabur et al., 2017b): the margin does not decrease with a larger network size, and therefore Corollary 3.2 gives a better generalization bound. In Section 6, we provide empirical evidence that the test error decreases with larger network size while the margin is non-decreasing. The phenomenon in Theorem 3.3 contrasts with standard `2-normalized linear prediction. In this setting, adding more features increases the norm of the data, and therefore the generalization error bounds could also increase. On the other hand, Theorem 3.3 shows that adding more neurons (which can be viewed as learned features) can only improve the generalization of the max-margin solution. 4 NEURAL NET MAX-MARGIN VS. KERNEL METHODS We will continue our study of the max-margin neural network via comparison against kernel methods, a context in which margins have already been extensively studied. We show that two-layer networks can obtain a larger margin, and therefore better generalization guarantees, than kernel methods. Our comparison between the two methods is motivated by an equivalence between the `2 max-margin of an infinite-width two-layer network and the `1-SVM (Zhu et al., 2004) over the lifted feature space defined by the activation function applied to all possible hidden units (Neyshabur et al., 2014; Rosset et al., 2007; Bengio et al., 2006). The kernel method corresponds to the `2-SVM in this same feature space, and is equivalent to fixing random hidden layer weights and solving an `2-SVM over the top layer. In Theorem 4.3, we construct a distribution for which the generalization upper bounds for the `1-SVM on this feature space are smaller than those for the `2-SVM by a Ω( √ d) factor. Our work provides evidence that optimizing all layers of a network can be beneficial for generalization. There have been works that compare `1 and `2-regularized solutions in the context of feature selection and construct a feature space for which a generalization gap exists (e.g., see Ng (2004)). In contrast, we work in the fixed feature space of relu activations, which makes our construction particularly challenging. We will usem to denote the width of the single hidden layer of the network. Following the convention from Section 3, we will use γ?,m to denote the maximum possible normalized margin of a two-layer network with hidden layer size m (note the emphasis on the size of the single hidden layer). The depth K = 2 case of Corollary 3.2 immediately implies that optimizing weakly-regularized `2 loss over width-m two-layer networks gives parameters whose generalization upper bounds depend on the hidden layer size only through 1/γ?,m. Furthermore, from Theorem 3.3 it immediately follows that γ?,1 ≤ γ?,2 ≤ · · · ≤ γ?,∞ The work of Neyshabur et al. (2014) links γ?,m to the `1 SVM over a lifted space. Formally, we define a lifting function ϕ : Rd → L∞(Sd−1) mapping data to an infinite feature vector: x ∈ Rd → ϕ(x) ∈ L∞(Sd−1) satisfying ϕ(x)[ū] = φ(ū>x) (4.1) where φ is the activation of Section 3. We look at the margin of linear functionals corresponding to α ∈ L1(Sd−1) . 
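In practice the infinite-dimensional lifting ϕ is approximated by sampling a finite set of directions ū uniformly from the sphere, which is also how the kernel (`2) method of this section is implemented: fix random hidden-layer weights and learn only the top layer. Below is a minimal sketch of this approximation with relu as the activation φ; the number of sampled directions and the data are placeholders, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def lifted_relu_features(X, num_directions=2000, rng=rng):
    """Monte-Carlo approximation of phi(x)[u] = relu(u^T x) over u ~ Unif(S^{d-1}).

    Each row of the returned matrix is a finite-dimensional stand-in for the lifted
    feature vector of one datapoint; inner products between rows approximate, up to
    a Vol(S^{d-1}) / num_directions scaling, the kernel induced by the relu lifting.
    """
    d = X.shape[1]
    U = rng.standard_normal((num_directions, d))
    U /= np.linalg.norm(U, axis=1, keepdims=True)   # directions on the unit sphere
    return np.maximum(X @ U.T, 0.0)                 # shape (n, num_directions)

# Example: an l2-regularized linear classifier trained on top of these fixed features
# plays the role of the l2 SVM in equation 4.3, while training both layers of the
# network corresponds to the l1 problem in equation 4.2.
X = rng.standard_normal((8, 3))
print(lifted_relu_features(X).shape)
```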
The 1-norm SVM (Zhu et al., 2004) over the lifted feature ϕ(x) solves for the maximum margin: γ`1 ,max α min i∈[n] yi〈α,ϕ(xi)〉 subject to ‖α‖1 ≤ 1 (4.2) where we rely on the inner product and 1-norm defined in Section 1.2. This formulation is equivalent to a hard-margin optimization on “convex neural networks” (Bengio et al., 2006). Bach (2017) also study optimization and generalization of convex neural networks. Using results from Rosset et al. (2007); Neyshabur et al. (2014); Bengio et al. (2006), our Theorem 2.1 implies that optimizing weaklyregularized logistic loss over two-layer networks is equivalent to solving equation 4.2 when the size of the hidden layer is at least n + 1, where n is the number of training examples. Proposition 4.1 essentially restates this with the minor improvement that this equivalence4 also holds when the size of the hidden layer is n. Proposition 4.1. Let γ`1 be defined in equation 4.2. Then γ`1 2 = γ ?,n = · · · = γ?,∞. For completeness, we prove Proposition 4.1 in Section B, relying on the work of Tibshirani (2013) and Rosset et al. (2004a). Importantly, the `1-max margin on the lifted feature space is obtainable by optimizing a finite neural network. We compare this to the `2 margin attainable via kernel methods. Following the setup of equation 4.2, we define the kernel problem over α ∈ L2(Sd−1): γ`2 ,max α min i∈[n] yi〈α,ϕ(xi)〉 subject to √ κ‖α‖2 ≤ 1 (4.3) where κ , Vol(Sd−1). (We scale ‖α‖2 by √ κ to make the lemma statement below cleaner.) First, γ`2 can be used to obtain a standard upper bound on the generalization error of the kernel SVM. Following the notation of Section 3, we will let L`2-svm denote the 0-1 population classification error for the optimizer of equation 4.3. Lemma 4.2. In the setting of Proposition 3.1, with probability at least 1−δ, the generalization error of the standard kernel SVM with relu feature (defined in equation 4.3) is bounded by L`2-svm . C γ`2 √ dn + `2 (4.4) where `2 , √ log max { log2 C√ dγ`2 ,2 } n + √ log(1/δ) n is typically a lower-order term. The bound above follows from standard techniques (Bartlett & Mendelson, 2002), and we provide a full proof in Section C.2. We construct a data distribution for which this lemma does not give a good bound for kernel methods, but Corollary 3.2 does imply good generalization for two-layer networks. Theorem 4.3. There exists a data distribution pdata such that the `1 SVM with relu features has a good margin: γ`1 & 1 and with probability 1− δ over the choice of i.i.d. samples from pdata, obtains generalization error L`1-svm . √ d log n n + `1 where `1 , √ log(1/δ) n is typically a lower order term. Meanwhile, with high probability the `2 SVM has a small margin: γ`2 . max {√ logn n , 1/d } and therefore the generalization upper bound from 4The factor of 1 2 is due the the relation that every unit-norm parameter Θ corresponds to an α in the lifted space with ‖α‖ = 2. Lemma 4.2 is at least Ω ( min { 1, d √ log n n }) In particular, the `2 bound is larger than the `1 bound by a Ω( √ d) factor. Although Theorem 4.3 compares upper bounds, our construction highlights properties of distributions which result in better neural network generalization than kernel method generalization. Furthermore, in Section 6 we empirically validate the gap in generalization between the two methods. We briefly overview the construction of pdata here. The full proof is in Section D.1. Proof sketch for Theorem 4.3. We base pdata on the distribution D of examples (x, y) described below. 
Here ei is the i-th standard basis vector and we use x>ei to represent the i-coordinate of x (since the subscript is reserved to index training examples).e > 3 x ... e>d x ∼ N (0, Id−2), and y = +1, x>e1 = +1, x >e2 = +1 w/ prob. 1/4 y = +1, x>e1 = −1, x>e2 = −1 w/ prob. 1/4 y = −1, x>e1 = +1, x>e2 = −1 w/ prob. 1/4 y = −1, x>e1 = −1, x>e2 = +1 w/ prob. 1/4 Figure 1 shows samples from D when there are 3 dimensions. From the visualization, it is clear that there is no linear separator for D. As Lemma D.1 shows, a relu network with four neurons can fit this relatively complicated decision boundary. On the other hand, for kernel methods, we prove that the symmetries in D induce cancellation in feature space. As a result, the features are less predictive of the true label and the margin will therefore be small. We formalize this argument in Section D.1. Gap in regression setting: We are able to prove an even larger Ω( √ n/d) gap between neural networks and kernel methods in the regression setting where we wish to interpolate continuous labels. Analogously to the classification setting, optimizing a regularized squared error loss on neural networks is equivalent to solving a minimum 1-norm regression problem (see Theorem D.5). Furthermore, kernel methods correspond to a minimum 2-norm problem. We construct distributions pdata where the 1-norm solution will have a generalization error bound of O( √ d/n), whereas the 2- norm solution will have a generalization error bound that is Ω(1) and thus vacuous. In Section D.2, we define the 1-norm and 2-norm regression problems. In Theorem D.10 we formalize our construction. 5 PERTURBED WASSERSTEIN GRADIENT FLOW FINDS GLOBAL OPTIMIZERS IN POLYNOMIAL TIME In the prior section, we studied the limiting behavior of the generalization of a two-layer network as its width goes to infinity. In this section, we will now study the limiting behavior of the optimization algorithm, gradient descent. Prior work (Mei et al., 2018; Chizat & Bach, 2018) has shown that as the hidden layer size grows to infinity, gradient descent for a finite neural network approaches the Wasserstein gradient flow over distributions of hidden units (defined in equation 5.1). Chizat & Bach (2018) assume the gradient flow converges, a non-trivial assumption since the space of distributions is infinite-dimensional, and given the assumption prove that Wasserstein gradient flow converges to a global optimizer in this setting, but do not specify a convergence rate. Mei et al. (2018) show global convergence for the infinite-neuron limit of stochastic Langevin dynamics, but also do not provide a convergence rate. We show that a perturbed version of Wasserstein gradient flow converges in polynomial time. The informal take-away of this section is that a perturbed version of gradient descent converges in polynomial time on infinite-size neural networks (for the right notion of infinite-size.) Formally, we optimize the following functional over distributions ρ on Rd+1: L[ρ] , R (∫ Φdρ ) + ∫ V dρ where Φ : Rd+1 → Rk, R : Rk → R, and V : Rd+1 → R. In this work, we consider 2-homogeneous Φ and V . We will additionally require that R is convex and nonnegative and V is positive on the unit sphere. Finally, we need standard regularity assumptions on R,Φ, and V : Assumption 5.1 (Regularity conditions on Φ, R, V ). Φ and V are differentiable as well as upper bounded and Lipschitz on the unit sphere. R is Lipschitz and its Hessian has bounded operator norm. 
We provide more details on the specific parameters (for boundedness, Lipschitzness, etc.) in Section E.1. We note that relu networks satisfy every condition but differentiability of Φ.5 We can fit a neural network under our framework as follows: Example 5.2 (Logistic loss for neural networks). We interpret ρ as a distribution over the parameters of the network. Let k , n and Φi(θ) , wφ(u>xi) for θ = (w, u). In this case, ∫ Φdρ is a distributional neural network that computes an output for each of the n training examples (like a standard neural network, it also computes a weighted sum over hidden units). We can compute the distributional version of the regularized logistic loss in equation 3.2 by setting V (θ) , λ‖θ‖22 and R(a1, . . . , an) , ∑n i=1 log(1 + exp(−yiai)). We will define L′[ρ] : Rd+1 → R with L′[ρ](θ) , 〈R′( ∫ Φdρ),Φ(θ)〉 + V (θ) and v[ρ](θ) , −∇θL′[ρ](θ). Informally, L′[ρ] is the gradient of L with respect to ρ, and v is the induced velocity field. For the standard Wasserstein gradient flow dynamics, ρt evolves according to d dt ρt = −∇ · (v[ρt]ρt) (5.1) where ∇· denotes the divergence of a vector field. For neural networks, these dynamics formally define continuous-time gradient descent when the hidden layer has infinite size (see Theorem 2.6 of Chizat & Bach (2018), for instance). We propose the following modification of the Wasserstein gradient flow dynamics: d dt ρt = −σρt + σUd −∇ · (v[ρt]ρt) (5.2) where Ud is the uniform distribution on Sd. In our perturbed dynamics, we add very small uniform noise over Ud, which ensures that at all time-steps, there is sufficient mass in a descent direction for the algorithm to decrease the objective. For infinite-size neural networks, one can informally interpret this as re-initializing a very small fraction of the neurons at every step of gradient descent. We prove convergence to a global optimizer in time polynomial in 1/ , d, and the regularity parameters. Theorem 5.3 (Theorem E.4 with regularity parameters omitted). Suppose that Φ and V are 2- homogeneous and the regularity conditions of Assumption 5.1 are satisfied. Also assume that from starting distribution ρ0, a solution to the dynamics in equation 5.2 exists. Define L? , infρ L[ρ]. Let > 0 be a desired error threshold and choose σ , exp(−d log(1/ )poly(k, L[ρ0]− L?)) and t , d 2 4 poly(log(1/ ), k, L[ρ0]− L ?), where the regularity parameters for Φ, V , and R are hidden in the poly(·). Then, perturbed Wasserstein gradient flow converges to an -approximate global minimum in t time: min 0≤t≤t L[ρt]− L? ≤ . We provide a theorem statement that includes regularity parameters in Section E.1. We prove the theorem in Section E.2. As a technical detail, Theorem 5.3 requires that a solution to the dynamics exists. We can remove this assumption by analyzing a discrete-time version of equation 5.2: ρt+1 , ρt + η(−σρt + σUd −∇ · (v[ρt]ρt)) and additionally assuming Φ and V have Lipschitz gradients. In this setting, a polynomial time convergence result also holds. We state the result in Section E.3. An implication of our Theorem 5.3 is that for infinite networks, we can optimize the weaklyregularized logistic loss in time polynomial in the problem parameters and λ−1. By Theorem 2.2, we only require λ−1 = poly(n) to approximate the maximum margin within a constant factor. Thus, for infinite networks, we can approximate the max margin within a constant factor in polynomial time. 5The relu activation is non-differentiable at 0 and hence the gradient flow is not well-defined. 
Chizat & Bach (2018) acknowledge this same difficulty with relu. 6 SIMULATIONS We first compare the generalization of neural networks and kernel methods for classification and regression. In Figure 2 we plot the generalization error and predicted generalization upper bounds6 of a trained neural network against a `2 kernel method with relu features as we vary n. Our data comes from a synthetic distribution generated by a neural network with 6 hidden units; we provide a detailed setup in Section F.1. For classification we plot 0-1 error, whereas for regression we plot squared error. The variance in the neural network generalization bound for classification likely occured because we did not tune learning rate and training time, so the optimization failed to find the best margin. The plots show that two-layer networks clearly outperform kernel methods in test error as n grows. However, there seems to be looseness in the bounds: the kernel generalization bound appears to stay constant with n (as predicted by our theory for regression), but the test error decreases. We also plot the dependence of the test error and margin on the hidden layer size in Figure 3 for synthetic data generated from a ground truth network with 10 hidden units and also MNIST. The plots indicate that test error is decreasing in hidden layer size while margin is increasing, as Theorem 3.3 predicts. We provide more details on the experimental setup in Section F.2. In Section F.3, we verify the convergence of a simple neural network to the max-margin solution as regularization decreases. In Section F.4, we train modified WideResNet architectures on CIFAR10 and CIFAR100. Although ResNet is not homogeneous, we still report improvements in generalization from annealing the weight decay during training, versus staying at a fixed decay rate. 7 CONCLUSION We have made the case that maximizing margin is one of the inductive biases of relu networks obtained from optimizing weakly-regularized cross-entropy loss. Our framework allows us to directly analyze generalization properties of the network without considering the optimization algorithm used to obtain it. Using this perspective, we provide a simple explanation for why over-parametrization can improve generalization. It is a fascinating question for future work to characterize other generalization properties of the max-margin solution. On the optimization side, we make progress towards understanding over-parametrized gradient descent by analyzing infinite-size neural networks. A natural direction for future work is to apply our theory to optimize the margin of finite-sized neural networks. 6We compute the leading term that is linear in the norm or inverse margin from the bounds in Proposition 3.1 and Lemmas 4.2, D.8, and D.9. A MISSING PROOFS IN SECTION 2 We first show that Lλ does indeed have a global minimizer. Claim A.1. In the setting of Theorems 2.1 and A.3, arg minΘ Lλ(Θ) exists. Proof. We will argue in the setting of Theorem 2.1 where Lλ is the multi-class cross entropy loss, because the logistic loss case is analogous. We first note that Lλ is continuous in Θ because f is continuous in Θ and the term inside the logarithm is always positive. Next, define b , infΘ Lλ(Θ) > 0. Then we note that for ‖Θ‖ > (b/λ)1/r , M , we must have Lλ(Θ) > b. It follows that inf‖Θ‖≤M Lλ(Θ) = infΘ Lλ(Θ). However, there must be a value Θλ which attains inf‖Θ‖≤M Lλ(Θ), because {Θ : ‖Θ‖ ≤ M} is a compact set and Lλ is continuous. Thus, infΘ Lλ(Θ) is attained by some Θλ. 
A.1 MISSING PROOFS FOR MULTI-CLASS SETTING Towards proving Theorem 2.1, we first show as we decrease λ, the norm of the solution ‖Θλ‖ grows. Lemma A.2. In the setting of Theorem 2.1, as λ→ 0, we have ‖Θλ‖ → ∞. To prove Theorem 2.1, we rely on the exponential scaling of the cross entropy: Lλ can be lower bounded roughly by exp(−‖Θλ‖γλ), but also has an upper bound that scales with exp(−‖Θλ‖γ?). By Lemma A.2, we can take large ‖Θλ‖ so the gap γ?−γλ vanishes. This proof technique is inspired by that of Rosset et al. (2004a). Proof of Theorem 2.1. For any M > 0 and Θ with γΘ , mini ( f(Θ̄;xi)−maxj 6=yi f(Θ̄;xi) ) , Lλ(MΘ) = 1 n n∑ i=1 − log exp(M afyi(Θ;xi))∑l j=1 exp(M afj(Θ;xi)) + λMr‖Θ‖r (by the homogeneity of f ) = 1 n n∑ i=1 − log 1 1 + ∑ j 6=yi exp(M a(fj(Θ;xi)− fyi(Θ;xi))) + λMr‖Θ‖r (A.1) ≤ log(1 + (l − 1) exp(−MaγΘ)) + λMr‖Θ‖r (A.2) We can also apply ∑ j 6=yi exp(M a(fj(Θ;xi) − fyi(Θ;xi))) ≥ max exp(Ma(fj(Θ;xi) − fyi(Θ;xi))) = exp γΘ in order to lower bound equation A.1 and obtain Lλ(MΘ) ≥ 1 n log(1 + exp(−MaγΘ)) + λMr‖Θ‖r (A.3) Applying equation A.2 with M = ‖Θλ‖ and Θ = Θ?, noting that ‖Θ?‖ ≤ 1, we have: Lλ(Θ ?‖Θλ‖) ≤ log(1 + (l − 1) exp(−‖Θλ‖aγ?)) + λ‖Θλ‖r (A.4) Next we lower bound Lλ(Θλ) by applying equation A.3, Lλ(Θλ) ≥ 1 n log(1 + exp(−‖Θλ‖aγλ)) + λ‖Θλ‖r (A.5) Combining equation A.4 and equation A.5 with the fact that Lλ(Θλ) ≤ Lλ(Θ?‖Θλ‖) (by the global optimality of Θλ), we have ∀λ > 0, n log(1 + (l − 1) exp(−‖Θλ‖aγ?)) ≥ log(1 + exp(−‖Θλ‖aγλ)) Recall that by Lemma A.2, as λ → 0, we have ‖Θλ‖ → ∞. Therefore, exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ) → 0. Thus, we can apply Taylor expansion to the equation above with respect to exp(−‖Θλ‖aγ?) and exp(−‖Θλ‖aγλ). If max{exp(−‖Θλ‖aγ?), exp(−‖Θλ‖aγλ)} < 1, then we obtain n(l − 1) exp(−‖Θλ‖aγ?) ≥ exp(−‖Θλ‖aγλ)−O(max{exp(−‖Θλ‖aγ?)2, exp(−‖Θλ‖aγλ)2}) We claim this implies that γ? ≤ lim infλ→0 γλ. If not, we have lim infλ→0 γλ < γ? , which implies that the equation above is violated with sufficiently large ‖Θλ‖ (‖Θλ‖ log(2(`− 1)n)1/a would suffice). By Lemma A.2, ‖Θλ‖ → ∞ as λ→ 0 and therefore we get a contradiction. Finally, we have γλ ≤ γ? by definition of γ?. Hence, limλ→0 γλ exists and equals γ?. Now we fill in the proof of Lemma A.2. Proof of Lemma A.2. For the sake of contradiction, we assume that ∃C > 0 such that for any λ0 > 0, there exists 0 < λ < λ0 with ‖Θλ‖ ≤ C. We will determine the choice of λ0 later and pick λ such that ‖Θλ‖ ≤ C. Then the logits (the prediction fj(Θ, xi) before softmax) are bounded in absolute value by some constant (that depends on C), and therefore the loss function − log exp(fyi (Θ;xi))∑l j=1 exp(fj(Θ;xi)) for every example is bounded from below by some constant D > 0 (depending on C but not λ.) Let M = λ−1/(r+1), we have that 0 < D ≤ Lλ(Θλ) ≤ Lλ(MΘ?) (by the optimality of Θλ) ≤ − log 1 1 + (l − 1) exp(−Maγ?) + λMr (by equation A.2) = log(1 + (l − 1) exp(−λ−a/(r+1)γ?)) + λ1/(r+1) ≤ log(1 + (l − 1) exp(−λ−a/(r+1)0 γ?)) + λ 1/(r+1) 0 Taking a sufficiently small λ0, we obtain a contradiction and complete the proof. A.2 FULL BINARY CLASSIFICATION SETTING For completeness, we state and prove our max-margin results for the setting where we fit binary labels yi ∈ {−1,+1} (as opposed to indices in [l]) and redefining f(Θ; ·) to assign a single real-valued score (as opposed to a score for each label). This lets us work with the simpler λ-regularized logistic loss: Lλ(Θ) , 1 n n∑ i=1 log(1 + exp(−yif(Θ;xi))) + λ‖Θ‖r As before, let Θλ ∈ arg minLλ(Θ), and define the normalized margin γλ by γλ , mini yif(Θ̄λ;xi). 
Define the maximum possible normalized margin γ? , max ‖Θ‖≤1 min i yif(Θ;xi) (A.6) Theorem A.3. Assume γ? > 0 in the binary classification setting with logistic loss. Then as λ→ 0, γλ → γ?. The proof follows via simple reduction to the multi-class case. Proof of Theorem A.3. We prove this theorem via reduction to the multi-class case with l = 2. Construct f̃ : Rd → R2 with f̃1(Θ;xi) = − 12f(Θ;xi) and f̃2(Θ;xi) = 1 2f(Θ;xi). Define new labels ỹi = 1 if yi = −1 and ỹi = 2 if yi = 1. Now note that f̃ỹi(Θ;xi)−f̃j 6=ỹi(Θ;xi) = yif(Θ;xi), so the multi-class margin for Θ under f̃ is the same as binary margin for Θ under f . Furthermore, defining L̃λ(Θ) , 1 n n∑ i=1 − log exp(f̃ỹi(Θ;xi))∑2 j=1 exp(f̃j(Θ;xi)) + λ‖Θ‖r we get that L̃λ(Θ) = Lλ(Θ), and in particular, L̃λ and Lλ have the same set of minimizers. Therefore we can apply Theorem 2.1 for the multi-class setting and conclude γλ → γ? in the binary classification setting. A.3 MISSING PROOF FOR OPTIMIZATION ACCURACY Proof of Theorem 2.2. Choose B , ( 1 γ? log (l−1)(γ?)r/a λ )1/a . We can upper bound Lλ(Θ′) by computing Lλ(Θ ′) ≤ βLλ(Θλ) ≤ βLλ(BΘ?) ≤ β log(1 + (l − 1) exp(−Baγ?)) + βλBr (by equation A.2) ≤ β(l − 1) exp(−Baγ?) + βλBr (using log(1 + x) ≤ x) ≤ β λ (γ?)r/a + βλ ( 1 γ? log (l − 1)(γ?)r/a λ )r/a ≤ β λ (γ?)r/a ( 1 + ( log (l − 1)(γ?)r/a λ )r/a) , L(UB) Furthermore, it holds that ‖Θ′‖r ≤ L (UB) λ . Now we note that Lλ(Θ ′) ≤ L(UB) ≤ 2β λ (γ?)r/a ( log (l − 1)(γ?)r/a λ )r/a ≤ 1 2n for sufficiently large c depending only on a/r. Now using the fact that log(x) ≥ x1+x ∀x ≥ −1, we additionally have the lower bound Lλ(Θ′) ≥ 1n log(1 + exp(−γ ′‖Θ′‖a)) ≥ 1n exp(−γ′‖Θ′‖a) 1+exp(−γ′‖Θ′‖a) . Since L(UB) ≤ 1, we can rearrange to get γ′ ≥ − log nLλ(Θ ′) 1−nLλ(Θ′) ‖Θ′‖a ≥ − log nL (UB) 1−nL(UB) ‖Θ′‖a ≥ − log(2nL (UB)) ‖Θ′‖a The middle inequality followed because x1−x is increasing in x for 0 ≤ x < 1, and the last because L(UB) ≤ 12n . Since − log 2nL (UB) > 0 we can also apply the bound ‖Θ′‖r ≤ L (UB) λ to get γ′ ≥ −λ a/r log 2nL(UB) (L(UB))a/r = − log ( 2nβ λ (γ?)r/a ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)) βa/r γ? ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r (by definition of L(UB)) ≥ γ ? βa/r log( (γ ?)r/a 2βnλ )( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♣ − log ( 1 + ( log (l−1)(γ ?)r/a λ )r/a) ( 1 + ( log (l−1)(γ ?)r/a λ )r/a)a/r ︸ ︷︷ ︸ ♥ We will first bound ♣. First note that log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ = log (γ ?)r/a λ − log 2βn log (γ ?)r/a λ + log(l − 1) ≥ log (γ ?)r/a λ − log 2βn(l − 1) log (γ ?)r/a λ ≥ c− 3 c (A.7) where the last inequality follows from the fact that (γ ?)r/a λ ≥ n c(l − 1)c and β ≤ 2. Next, using the fact that log (γ ?)r/a λ ≥ 1 (2r/a−1)a/r , we note that( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)a/r ≤ ( 1 + ( 1 (2r/a − 1)a/r )−r/a)a/r ≤ 2 (A.8) Combining equation A.7 and equation A.8, we can conclude that ♣ = log( (γ ?)r/a 2βnλ ) log (l−1)(γ ?)r/a λ ( 1 + ( log (l − 1)(γ?)r/a λ )−r/a)−a/r ≥ c− 3 2c Finally, we note that if 1 + ( log (l−1)(γ ?)r/a λ )r/a is a sufficiently large constant that depends only on a/r (which can be achieved by choosing c sufficiently large), it will follow that ♥ ≤ 110 . Thus, if c ≥ 5, we can combine our bounds on ♣ and ♥ to get that γ′ ≥ γ ? 10βa/r B MISSING PROOF OF PROPOSITION 4.1 Proposition 4.1 follows simply from applying Corollary 1 of Neyshabur et al. (2014) to a hard-margin SVM problem. For completeness, we provide another proof here. 
The proof of Proposition 4.1 will consist of two steps: first, show that equation 4.2 has an optimal solution with sparsity n, and second, show that sparse solutions to equation 4.2 can be mapped to a neural network with the same margin, and vice versa. The following lemma and proof are based on Lemma 14 of Tibshirani (2013). Lemma B.1. Let supp(α) , {ū : |α(ū)| > 0}. There exists an optimal solution α? to equation 4.2 with |supp(α?)| ≤ n. For the proof of this lemma, we find it convenient to work with a minimum norm formulation which we show is equivalent to equation 4.2: min α ‖α‖1 subject to yi〈α,ϕ(xi)〉 ≥ 1 ∀i (B.1) Claim B.2. Let S ⊂ L1(Sd−1) be the set of optimizers for equation 4.2, and let S′ ⊂ L1(Sd−1) be the set of optimizers for equation B.1. If equation B.1 is feasible, for any α ∈ S, αγ`1 ∈ S ′, and for any α′ ∈ S′, α ′ ‖α′‖1 ∈ S. Proof. Let opt′ denote the optimal objective for equation B.1. We note that α ′ ‖α′‖1 is feasible for equation 4.2 with objective 1opt′ , and therefore γ`1 ≥ 1 opt′ . Furthermore, 1 2γ`1 yi ∫ ū∈Sd−1 α(ū)φ(ū >xi)dū ≥ 1 ∀i, and so αγ`1 is feasible for equation B.1 with objective 1 γ`1 . Therefore, opt′ ≤ 1γ`1 . As a result, it must hold that opt ′ = 1γ`1 , which means that α ′ ‖α′‖1 is optimal for equation 4.2, and αγ`1 is optimal for equation B.1, as desired. First, note that if equation B.1 is not feasible, then γ`1 = 0 and equation 4.2 has a trivial sparse solution, the all zeros function. Thus, it suffices to show that an optimal solution to equation B.1 exists that is n-sparse, since by Lemma B.2 equation B.1 and equation 4.2 have equivalent solutions up to a scaling. We begin by taking the dual of equation B.1. Claim B.3. The dual of equation B.1 has form max λ∈Rn λ>~1 subject to ∣∣∣∣∣ n∑ i=1 λiyiφ(ū >xi) ∣∣∣∣∣ ≤ 1 ∀ū ∈ Sd−1 λi ≥ 0 For any primal optimal solution α? and dual optimal solution λ?, it must hold that n∑ i=1 λ?i yiφ(ū >xi) = sign(α ?(ū)) ⇐⇒ α?(ū) 6= 0 (B.2) Proof. The dual form can be solved for by computation. By strong duality, equation B.2 must follow from the KKT conditions. Now define the mapping v : Sd−1 → Rn with vi(ū) , yiφ(ū>xi). We will show a general result about linearly dependent v(ū) for ū ∈ supp(α?), after which we can reduce directly to the proof of Tibshirani (2013). Claim B.4. Let α? be any optimal solution. Suppose that there exists S ⊆ supp(α?) such that {v(ū) : ū ∈ S} forms a linearly dependent set, i.e.∑ ū∈S cūv(ū) = ~0 (B.3) for coefficients c. Then ∑ ū∈S cū sign(α ?(ū)) = 0. Proof. Let λ? be any dual optimal solution, then λ?>v(ū) = sign(α?(ū)) ∀ū ∈ supp(α?) by Claim B.3. Thus, we apply λ?> to both sides of equation B.3 to get the desired statement. Proof of Lemma B.1. The rest of the proof follows Lemma 14 in Tibshirani (2013). The lemma argues that if the conclusion of Claim B.4 holds and an optimal solution α? has S ⊆ supp(α?) with {v(ū) : ū ∈ S} linearly dependent, we can construct a new α′ with ‖α′‖1 = ‖α?‖1 and supp(α′) ⊂ supp(α?) (where the inclusion is strict). Thus, if we consider an optimal α? with minimal support, it must follow that {v(ū) : ū ∈ supp(α?)} is a linearly independent set, and therefore |supp(α?)| ≤ n. We can now complete the proof of Proposition 4.1. Proof of Proposition 4.1. For ease of notation, we will parametrize a two-layer network with m units by top layer weights w1, . . . , wm ∈ R and bottom layer weights u1, . . . , um ∈ Rd. 
As before, we use Θ to refer to the collection of parameters, so the network computes the real-valued function f(Θ;x) = m∑ j=1 wjφ(u > j x) Note that we simply renamed the variables from the parametrization of equation 3.1. We first apply Lemma B.1 to conclude that equation 4.2 admits a n-sparse optimal solution α?. Because of sparsity, we can now abuse notation and treat α? as a real-valued function such that∑ ū∈supp(α?) |α?(ū)| ≤ 1. We construct Θ corresponding to a two-layer network with m ≥ n hidden units and normalized margin at least γ`12 . For clarity, we let W correspond to the top layer weights and U correspond to the bottom layer weights. For every ū ∈ supp(α), we let Θ have a corresponding hidden unit j with (wj , uj) = ( sign(α?(ū)) √ |α?(ū)| 2 , √ |α?(ū)| 2 ū ) , and set the remaining hidden units to ~0. This is possible because m ≥ n. Now f(Θ;x) = m∑ j=1 wjφ(u > j x) = 1 2 ∑ ū∈supp(α?) α?(ū)φ(ū>x) Furthermore, ‖Θ‖22 = m∑ j=1 w2j + ‖uj‖22 = ∑ ū∈supp(α) |α?(ū)| 2 + |α?(ū)| 2 ‖ū‖22 = ∑ ū∈supp(α) |α?(ū)| ≤ 1 Thus it follows that Θ has normalized margin at least γ`1/2, so γ ?,m ≥ γ`1/2. To conclude, we show that γ?,m ≤ γ`1/2. Let Θ?,m denote the parameters obtaining optimal m-unit margin γ?,m with hidden units (w?,mj , u ?,m j ) for j ∈ [m]. We can construct α to put a scaled delta mass of 2w?,mj ‖u ?,m j ‖2 on ū ?,m j for j ∈ [m]. It follows that ‖α‖1 = m∑ j=1 2|w?,mj |‖u ?,m j ‖2 ≤ m∑ j=1 w?,mj 2 + ‖u?,mj ‖ 2 2 = ‖Θ?,m‖22 ≤ 1 Furthermore, ∫ Sd−1 α(ū)φ(ū>x) = 2 m∑ j=1 w?,mj ‖u ?,m j ‖2φ((ū ?,m j ) >x) = 2 m∑ j=1 w?,mj φ(u ?,m j > x) = 2f(Θ?,m;x) Thus, α is a feasible solution to equation 4.2 with objective value at least 2γ?,m. Therefore, γ`1 ≥ 2γ?,m, so γ?,m = γ`1/2. C RADEMACHER COMPLEXITY AND GENERALIZATION ERROR We prove the generalization error bounds stated in Proposition 3.1 and Lemma 4.2 via Rademacher complexity and margin theory. Assume that our data X,Y are drawn i.i.d. from ground truth distribution pdata supported on X × Y . For some hypothesis classF of real-valued functions, we define the empirical Rademacher complexity R̂(F) as follows: R̂(F) , 1 n E i [ sup f∈F n∑ i=1 if(xi) ] where i are independent Rademacher random variables. For a classifier f , following the notation of Section 3 we will use L(f) , Pr(x,y)∼pdata(yf(x) ≤ 0) to denote the population 0-1 loss of the classifier f . The following classical theorem (Koltchinskii et al., 2002), (Kakade et al., 2009) bounds generalization error in terms of the Rademacher complexity and margin loss. Theorem C.1 (Theorem 2 of Kakade et al. (2009)). Let (xi, yi)ni=1 be drawn iid from pdata. We work in the binary classification setting, so Y = {−1, 1}. Assume that for all f ∈ F , we have supx∈X f(x) ≤ C. Then with probability at least 1− δ over the random draws of the data, for every γ > 0 and f ∈ F , L(f) ≤ 1 n n∑ i=1 1(yif(xi) < γ) + 4R̂(F) γ + √ log log2 4C γ n + √ log(1/δ) 2n C.1 PROOF OF PROPOSITION 3.1 We will prove Proposition 3.1 by applying the Rademacher complexity bounds of Golowich et al. (2017) with Theorem C.1. First, we show the following lemma bounding the generalization of neural networks whose weight matrices have bounded Frobenius norms. Lemma C.2. Define the hypothesis class FK over depth-K neural networks by FK = { f(Θ; ·) : ‖Wj‖F ≤ 1√ K ∀j } Let C , supx∈X ‖x‖2. Recall that L(Θ) denotes the 0-1 population loss L(f(Θ; ·)). Then for any f(Θ; ·) ∈ FK classifying the training data correctly with unnormalized margin γΘ , mini yif(Θ;xi) > 0, with probability at least 1− δ, L(Θ) . 
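To make the definition of the empirical Rademacher complexity above concrete, the following sketch estimates R̂(F) by Monte Carlo over Rademacher sign vectors for a class where the supremum has a closed form (norm-bounded linear predictors). This is only an illustration of the definition; the function class and data are placeholders, not the network classes analyzed in this appendix.

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_rademacher_linear(X, B, num_draws=2000, rng=rng):
    """Estimate R_hat(F) for F = {x -> w.x : ||w||_2 <= B} on the sample X.

    For this class, sup_{||w|| <= B} sum_i eps_i w.x_i = B * ||sum_i eps_i x_i||_2,
    so the inner supremum is exact for each sign vector; the outer expectation
    over eps is approximated by Monte Carlo.
    """
    n = X.shape[0]
    total = 0.0
    for _ in range(num_draws):
        eps = rng.choice([-1.0, 1.0], size=n)       # Rademacher signs
        total += B * np.linalg.norm(eps @ X)        # closed-form supremum
    return total / (num_draws * n)

X = rng.standard_normal((50, 5))
print(empirical_rademacher_linear(X, B=1.0))
# Compare with the standard upper bound B * sqrt(sum_i ||x_i||^2) / n:
print(1.0 * np.sqrt((X**2).sum()) / X.shape[0])
```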
C γΘK(K−1)/2 √ n + √ log log2 4C γΘ n + √ log(1/δ) n (C.1) Note the dependence on the unnormalized margin rather than the normalized margin. Proof. We first claim that supf(Θ;·)∈FK supx∈X f(Θ;x) ≤ C. To see this, for any f(Θ; ·) ∈ FK , f(Θ;x) = WKφ(· · ·φ(W1x) · · · ) ≤ ‖WK‖F ‖φ(WK−1φ(· · ·φ(W1x) · · · )‖2 ≤ ‖WK‖F ‖WK−1φ(· · ·φ(W1x) · · · )‖2 (since φ is 1-Lipschitz and φ(0) = 0, so φ performs a contraction) < ‖x‖2 ≤ C (repeatedly applying this argument and using ‖Wj‖F < 1) Furthermore, by Theorem 1 of Golowich et al. (2017), R̂(FK) has upper bound R̂(FK) . C K(K−1)/2 √ n Thus, we can apply Theorem C.1 to conclude that for all f(Θ; ·) ∈ FK and all γ > 0, with probability 1− δ, L(Θ) . 1 n n∑ i=1 1(yif(Θ;xi) < γ) + C γK(K−1)/2 √ n + √ log log2 4C γ n + √ log(1/δ) n In particular, by definition choosing γ = γΘ makes the first term on the LHS vanish and gives the statement of the lemma. Proof of Proposition 3.1. Given parameters Θ = (W1, . . . ,WK), we first construct parameters Θ̃ = (W̃1, . . . , W̃K) such that f(Θ̄; ·) and f(Θ̃; ·) compute the same function, and ‖W̃1‖2F = ‖W̃2‖2F = · · · = ‖W̃K‖2F ≤ 1K . To do this, we set W̃j = ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F Wj By construction ‖W̃j‖2F = ( ∏K k=1 ‖Wk‖2F )1/k ‖Θ‖2F = ( ∏K k=1 ‖Wk‖2F )1/k∑K k=1 ‖Wk‖2F ≤ 1 k (by the AM-GM inequality) Furthermore, we also have f(Θ̃;x) = W̃Kφ(· · ·φ(W̃1x) · · · ) = K∏ j=1 ( ∏K k=1 ‖Wk‖F )1/k ‖Wj‖F ‖Θ‖F WKφ(· · ·φ(W1x) · · · ) (by the homogeneity of φ) = 1 ‖Θ‖KF f(Θ;x) = f ( Θ ‖Θ‖F ;x ) (since f is K-homogeneous in Θ) = f(Θ̄;x) Now we note that by construction, L(Θ) = L(Θ̃). Now f(Θ̃; ·) must also classify the training data perfectly, has unnormalized margin γ, and furthermore f(Θ̃; ·) ∈ FK . As a result, Lemma C.2 allows us to conclude the desired statement. To conclude Corollary 3.2, we apply the above on Θλ,M and use Theorem A.3. C.2 PROOF OF KERNEL GENERALIZATION BOUNDS Let F2,φB denote the class of `2-bounded linear functionals in lifted feature space: F 2,φ B , {x 7→ 〈α,ϕ(x)〉 : α ∈ L2(Sd−1), ‖α‖2 ≤ B}. We abuse notation and write α ∈ F2,φB to indicate a linear functional from F2,φB . As before, we will use L(α) to indicate the 0-1 population loss of the classifier x 7→ 〈α,ϕ(x)〉 and let C , supx∈X ‖x‖2 be an upper bound on the norm of the data. We focus on analyzing the Rademacher complexity R̂(F2,φB ), mirroring derivations done in the past (Bartlett & Mendelson, 2002). We include our derivations here for completeness. Lemma C.3. R̂(F2,φB ) ≤ 1 nB √∑n i=1 ‖ϕ(xi)‖22. Proof. We write R̂(F2,φB ) = 1 n E i [ sup α∈F2,φB 〈α, n∑ i=1 iϕ(xi)〉 ] ≤ 1 n E i [ sup α∈F2,φB ‖α‖2 ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B · E i [∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 ] ≤ 1 n B √√√√√E i ∥∥∥∥∥ n∑ i=1 iϕ(xi) ∥∥∥∥∥ 2 2 (via Jensen’s inequality) ≤ 1 n B √√√√√E i n∑ i=1 n∑ j=1 i j〈ϕ(xi), ϕ(xi)〉 ≤ 1 n B √√√√ n∑ i=1 ‖ϕ(xi)‖22 (terms where i 6= j cancel out) As an example, we can apply this bound to relu features: Corollary C.4. Suppose that φ is the relu activation. Let κ , Vol(Sd−1). Then R̂(F2,φB ) . B‖X‖F √ κ n √ d ≤ BC √ κ√ dn . Proof. We first show that ‖ϕ(xi)‖22 = Θ ( κ d‖xi‖ 2 2 ) . We can compute ‖ϕ(xi)‖22 = Vol(Sd−1)Eū∼Sd−1 [relu(ū>xi)2] = κ d Eū∼Sd−1 [relu( √ dū>xi) 2] = κ d 1 M2 Eu∼N (0,Id×d)[relu(u Txi) 2] (M2 is the second moment of N (0, 1)) = Θ (κ d ‖xi‖22 ) (C.2) where the last line uses the computation provided in Lemma A.1 by Du et al. (2017). Now we plug this into Lemma C.3 to get the desired bound. We will now prove Lemma 4.2. Proof of Lemma 4.2. From equation C.2, we first obtain supx∈X ‖ϕ(x)‖2 . C √ κ d . 
Denote the optimizer for equation 4.3 by α`2 . Note that √ κα`2 ∈ F 2,φ 1 , and furthermore L(α`2) = L( √ κα`2). Since √ κα`2 has unnormalized margin √ κγ`2 , we apply Theorem C.1 on margin √ κγ`2 and hypothesis class F2,φ1 to get with probability 1− δ, L`2-svm = L( √ κα`2) ≤ 4R̂(F2,φ1 )√ κγ`2 + √ log log2 4 supx∈X ‖ϕ(x)‖2√ κγ`2 n + √ log(1/δ) 2n . C γ`2 √ dn + √√√√ log max{log2 C√dγ`2 , 2} n + √ log(1/δ) n (applying Corollary C.4) D MISSING PROOFS FOR COMPARISON TO KERNEL METHODS D.1 CLASSIFICATION In this section we will complete a proof of Theorem 4.3. Recall the construction of the distribution D provided in Section 4. We first provide a classifier of this data with small `1 norm. Lemma D.1. In the setting of Theorem 4.3, we have that γ`1 ≥ √ 2 4 . Proof. Consider the network f(x) = 14 ( (x>(e1 +e2)/ √ 2)+ +(x >(−e1−e2)/ √ 2)+− (x>(−e1 + e2)/ √ 2)+ − (x>(e1 − e2)/ √ 2)+ ) . The attained margin γ = √ 2 4 , so γ`1 ≥ √ 2 4 . Now we will upper bound the margin attainable by the `2 SVM. Lemma D.2 (Margin upper bound tool). In the setting of Theorem 4.3, we have γ`2 ≤ 1√ κ · ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 Proof. By the definition of γ`2 , we have that for any α with √ κ‖α‖2 ≤ 1, we have γ`2 ≤ max√ κ‖α‖2≤1 1 n n∑ i=1 〈α, yiϕ(xi)〉 Setting α = 1√ κ 1 n ∑n i=1 ϕ(xi)yi/‖ 1 n ∑n i=1 ϕ(xi)yi‖2 completes the proof. (Attentive readers may realize that this is equivalent to setting the dual variable of the convex program 4.3 to all 1’s function.) Lemma D.3. In the setting of Theorem 4.3, let (xi, yi)ni=1 be n i.i.d samples and corresponding labels from D. Let ϕ be defined in equation 4.1 with φ = relu. With high probability (at least 1− dn−10), we have ∥∥∥∥∥ 1n n∑ i=1 ϕ(xi)yi ∥∥∥∥∥ 2 . √ κ/n log n+ √ κ/d Proof. Let Wi = ϕ(xi)yi. We will bound several quantities regarding Wi’s. In the rest of the proof, we will condition on the event E that ∀i, ‖xi‖22 . d log n. Note that E is a high probability event and conditioned on E, xi’s are still independent. We omit the condition on E in the rest of the proof for simplicity. We first show that assuming the following three inequalities that the conclusion of the Lemma follows. 1. ∀i, ‖Wi‖22 . κ log n . 2. σ2 , Var[ ∑ iWi] , ∑n i=1 E[‖Wi − EWi‖22] . nκ log n 3. ‖E [ ∑ Wi] ‖2 . √ κn/d. By bullets 1, 2, and Bernstein inequality, we have that with probability at least 1− dn−10 over the randomness of the data (X,Y ),∥∥∥∥∥ n∑ i=1 Wi − E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 . √ κ log1.5 n+ √ nκ log2 n . √ nκ log2 n By bullet 3 and equation above, we complete the proof with triangle inequality:∥∥∥∥∥ n∑ i=1 Wi ∥∥∥∥∥ 2 ≤ ∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 + √ nκ log2 n . √ nκ log2 n+ √ κn/d Therefore, it suffices to prove bullets 1, 2 and 3. Note that 2 is a direct corollary of 1 so we will only prove 1 and 3. We start with 3: By the definition of the `2 norm in L2(Sd−1) and the independence of (xi, yi)’s, we can rewrite∥∥∥∥∥E [ n∑ i=1 Wi ]∥∥∥∥∥ 2 2 = κ · n2 E ū∼Sd−1 [ E (x,y)∼D ϕ(x)[ū] · y ]2 (D.1) Let ū = (ū1, . . . , ūd) and ū−2 = (ū3, . . . , ūd) ∈ Rd−2, and define τ
1. What is the main contribution of the paper regarding the relationship between regularized solutions and maximum margin separators?
2. How does the paper extend previous results for linear models to any homogeneous function?
3. What is the significance of Theorem 2.2 in bounding the deviation of margin when the regularization is not driven to zero?
4. How does the paper connect generalization bounds of learned parameters with l2 margins?
5. Is there anything new in the proof of Theorem 3.2 regarding the increase in margin with more hidden units?
6. Can you explain the similarity and difference between the paper's result (Theorem 3.3) and Corollary 1 in Neyshabur et al. (2014)?
Review
Overall, I found that the paper does not clearly compare its results to existing work. There are some new results, but some of the results stated as theorems are immediate consequences of existing work, and a more detailed discussion and comparison is warranted. I will first give detailed comments establishing the relationship to existing work and then summarize my evaluation.

Detailed comments on contributions and relationships to existing work:

A. Theorem 2.1 establishes the limit of the regularized solutions as the maximum-margin separator. This result is a generalization of the analogous results for linear models: Theorem 3 in Rosset et al. (2004), "Boosting as a regularized path to a maximum margin classifier", and Theorem 2.1 in Rosset, Zhu, and Hastie, "Margin maximizing loss functions" (the latter paper is missing from the references; it generalizes the earlier result to the multi-class cross-entropy loss). The main differences from earlier work are: 1. the results for linear models are extended to any homogeneous function; 2. (minor) the previous results by Rosset et al. were stated only for lp norms, but this is a minor generalization since the earlier work did not at any point use the lp-ness of the norm and extends immediately to any norm. Secondly, Theorem 2.2 also gives a bound on the deviation of the margin when the regularization is not driven all the way to 0. I do think this theorem would be better stated by explicitly showing the dependence of the suboptimal margin \gamma' on lambda and on the suboptimality constant of the loss. This way, one could derive Theorem 2.1 as a special case and also reason about what level of suboptimality of the loss can be tolerated.

B. Theorem 3.1 derives generalization bounds for the learned parameters in terms of the l2 margin. This and many similar results connecting generalization to margins have already been studied in the literature (Neyshabur et al. 2015b, for example, covers a larger family of norms than just the l2 norm). In particular, an analogous bound for the l1 margin can also be found in those works, which could be used in the discussions that follow.

C. Theorem 3.2: This result is, to my knowledge, new, but it also follows fairly immediately from the definition of the margin. The proof essentially follows by showing that having more hidden units can only increase the margin, since the margin is maximized over a larger set of parameters.

D. Comparison to kernel machines: Theorem 3.3 seems to be a paraphrasing of Corollary 1 in Neyshabur et al. (2014). However, the authors claim that Theorem 3.3 also holds when "the regularizer is small". I do not understand what the authors are referring to here or how the result is different from existing work. Please clarify.

In summary, Theorems 2.1-2.2, which extend the connection between the regularized solution and the maximum-margin solution to general homogeneous models and to non-asymptotic regimes, are in my opinion the key contribution of the paper and an important result. However, there is not much new technique in terms of proof here.
ICLR
Title
A new accelerated gradient method inspired by continuous-time perspective

Abstract
Nesterov's accelerated method is widely used in machine learning problems, including deep learning. To give more insight into the acceleration phenomenon, an ordinary differential equation was obtained from Nesterov's accelerated method by taking step sizes approaching zero, and the relationship between Nesterov's method and the differential equation is still of research interest. In this work, we give the precise order at which the iterations of Nesterov's accelerated method converge to the solution of the derived differential equation as step sizes go to zero. We then present a new accelerated method of higher order. The new method is more stable than the ordinary method for large step sizes and converges faster. We further apply the new method to the matrix completion problem and show its better performance through numerical experiments.

1 Introduction

Optimization is a core component of statistical and machine learning problems. Recently, gradient-based algorithms have been widely used in such optimization problems due to their simplicity and efficiency in large-scale situations. For solving the convex optimization problem
\[
\min_{x \in \mathbb{R}^d} F(x),
\]
where $F(x)$ is convex and sufficiently smooth, a classical first-order method is gradient descent. We assume that $f(x) = \nabla F(x)$ satisfies an $L$-Lipschitz condition, that is, there exists a constant $L$ such that $\|f(x) - f(y)\| \le L\|x - y\|$ for all $x, y$. Under these conditions, gradient descent achieves a convergence rate of $O(n^{-1})$, i.e., $\|F(x_n) - F(x^*)\|$ decreases to zero at a rate of $O(n^{-1})$, where $x_n$ denotes the $n$th iteration and $x^*$ denotes the minimum point of $F(x)$ in $\mathbb{R}^d$.

Nesterov's accelerated method (Nesterov, 1983) is a more efficient first-order algorithm than gradient descent, of which we will use the following form: starting with $x_0 = x_1$,
\[
y_n = x_n + \frac{n-3}{n}(x_n - x_{n-1}), \qquad x_{n+1} = y_n - s f(y_n) \tag{1.1}
\]
for $n \ge 1$. It is shown that under the abovementioned conditions, Nesterov's accelerated method converges at a rate of $O(n^{-2})$. Accelerated gradient methods have been successful in training deep and recurrent neural networks (Sutskever et al., 2013) and are widely used in problems with a machine learning background to avoid sophisticated second-order methods (Cotter et al., 2011; Hu et al., 2009; Ji & Ye, 2009).

To provide more theoretical understanding, an important research topic for Nesterov's accelerated method is to find an explanation of the acceleration. On this topic, Nesterov's method was studied via a continuous-time perspective (Su et al., 2014). They considered a curve $x(t)$, introduced the ansatz $x_n \approx x(n\sqrt{s})$, and substituted it into (1.1). Letting $s \to 0$, they obtained the following differential equation:
\[
\ddot{x} + \frac{3}{t}\dot{x} + f(x) = 0. \tag{1.2}
\]
The differential equation was used as a tool for analyzing and generalizing Nesterov's scheme. Furthermore, this idea has been studied from different directions. A class of accelerated methods has been generated in continuous time (Wibisono et al., 2016). ODE (1.2) can also be discretized directly using a Runge-Kutta method to achieve acceleration (Zhang et al., 2018). Although many results have been achieved, the process of obtaining the differential equation (1.2) has not been rigorous, and the method is still time-consuming for large-scale problems.
In this work, we give the precise order at which the iterations of Nesterov's accelerated method converge to the solution of the differential equation (1.2) with initial conditions
\[
x(0) = x_0, \qquad \dot{x}(0) = 0 \tag{1.3}
\]
as the step size $s$ goes to zero. Inspired by this perspective, we present a new accelerated method that makes this convergence faster. As expected, iterations of the new method are closer to the solution $x(t)$ of differential equation (1.2) than those of the original Nesterov's method. Moreover, we find the new method is more stable than the original Nesterov's method when the step size is large. Based on the abovementioned observations, we try to take advantage of the new method in more practical problems. We apply the new method to the matrix completion problem. We combine the new method with the proximal operator (Parikh & Boyd, 2014) into a new algorithm, which we call modified FISTA. We find that the new method performs better than FISTA (Beck & Teboulle, 2009) and the accelerated proximal gradient method (Parikh & Boyd, 2014) because it can work with larger step sizes.

This paper is organized as follows. In Section 2, we prove that iterations of Nesterov's accelerated method converge to the solution of the differential equation (1.2). In Section 3, we present a new method that makes the convergence faster and show its better stability through two simple examples. In Section 4, we apply the new method to the matrix completion problem.

2 A strict analysis of the relation between Nesterov's method and its continuous-time limit

We refer to $x(t)$ as the solution of differential equation (1.2) with initial conditions (1.3). Existence and uniqueness of such solutions have been proved (Su et al., 2014). In this section, we give the order at which the iterations of Nesterov's accelerated method converge to $x(t)$ as step sizes go to zero. For convenience, we substitute the first equation in Nesterov's method (1.1) into the second one to get
\[
x_{n+1} = x_n + \frac{n-3}{n}(x_n - x_{n-1}) - s \cdot f\!\left( x_n + \frac{n-3}{n}(x_n - x_{n-1}) \right).
\]
We write $s = h^2$ and rewrite the above recurrence relation as
\[
x_{n+1} = x_n + \frac{n-3}{n}(x_n - x_{n-1}) - h^2 \cdot f\!\left( x_n + \frac{n-3}{n}(x_n - x_{n-1}) \right). \tag{2.1}
\]
Inspired by the ansatz $x_n \approx x(n\sqrt{s})$ (Su et al., 2014), we consider the convergence between $x_n$ and $x(nh)$. More precisely, we show that for fixed time $t$, $x_n$ converges to $x(t)$ as $h$ goes to zero, where $n = \frac{t}{h}$.

2.1 Truncation error

Firstly, we consider the following 'truncation error':
\[
L[x(t); h] = x(t+h) - \frac{2t-3h}{t} x(t) + \frac{t-3h}{t} x(t-h) + h^2 f\!\left( x(t) + \frac{t-3h}{t}\bigl(x(t) - x(t-h)\bigr) \right). \tag{2.2}
\]
Equation (2.2) is obtained from (2.1) by replacing $x_{n+1}, x_n, x_{n-1}$ with $x(t+h), x(t), x(t-h)$ and substituting the relation $n = \frac{t}{h}$. Our first result is the order of the truncation error $L[x(t); h]$.

Theorem 1. Assume $f$ satisfies an $L$-Lipschitz condition, and the solution $x(t)$ of the derived differential equation (1.2) has a continuous third derivative. Then for fixed time $t$, the truncation error (2.2) satisfies $L[x(t); h] = O(h^3)$.

Theorem 1 shows the size of the error caused by a single iteration when the starting point lies exactly on $x(t)$. We then have to add up these errors to prove the convergence property we need.

2.2 Convergence theorem

We now come to the convergence theorem. In this theorem, we give the precise order at which the iterations of Nesterov's method converge to the solution of the derived differential equation.

Theorem 2. Under the conditions of Theorem 1, for fixed time $t$, $x_{t/h}$ converges to $x(t)$ as $h$ goes to zero at a rate of $O(h \ln \frac{1}{h})$, provided $x_0 = x(0)$ and $x_1 = x(h)$.

Theorem 2 coincides with the derivation of ODE (1.2) (Su et al., 2014).
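A small numerical illustration of Theorem 2: for a quadratic objective, run the recurrence (2.1) and compare $x_{t/h}$ with a fine-grid numerical solution of the ODE (1.2)-(1.3) at the same time $t$, for a sequence of decreasing $h$. The objective, the dimensions, and the crude integrator used as a surrogate for the exact ODE solution are assumptions of this sketch, not the paper's experimental setup.

```python
import numpy as np

A = np.diag([1.0, 10.0])
x0 = np.array([1.0, 1.0])

def f(x):                        # gradient of F(x) = 0.5 * x^T A x
    return A @ x

def nesterov(h, num_steps):
    # recurrence (2.1) with s = h^2, started from x_0 = x_1
    prev, cur = x0.copy(), x0.copy()
    for n in range(1, num_steps):
        y = cur + (n - 3) / n * (cur - prev)
        prev, cur = cur, y - h**2 * f(y)
    return cur

def ode_solution(t_end, dt=1e-4):
    # crude integrator for x'' + (3/t) x' + f(x) = 0, x(0) = x0, x'(0) = 0,
    # started at a small t0 > 0 to avoid the 3/t singularity
    t, x, v = dt, x0.copy(), np.zeros_like(x0)
    while t < t_end:
        a = -3.0 / t * v - f(x)
        v += dt * a
        x += dt * v
        t += dt
    return x

t_end = 3.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    err = np.linalg.norm(nesterov(h, int(round(t_end / h))) - ode_solution(t_end))
    print(f"h = {h:7.4f}   |x_(t/h) - x(t)| = {err:.4e}")
# The gap is expected to shrink roughly like O(h log(1/h)) as h decreases.
```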
3 New accelerated method

3.1 Derivation of the new method and analysis of its truncation error
Inspired by the continuous-time perspective and by our proof of the convergence of Nesterov's iterates to their continuous-time limit, we present a new method that makes this convergence faster. Precisely, the new method has a higher truncation order. We need one more step in our scheme than in Nesterov's method to achieve the higher truncation order in the following analysis, so we consider a recurrence relation of the form
$$\sum_{i=1}^{4}\Big(\alpha_i + \frac{\beta_i}{n} + \frac{\gamma_i}{n^2}\Big) x_{n+2-i} = -s\, f\!\Big(x_n + \frac{n-3}{n}(x_n - x_{n-1})\Big), \tag{3.1}$$
where $\{\alpha_i\}$, $\{\beta_i\}$ and $\{\gamma_i\}$ are to be determined. Expanding $x(t-h)$ to first order, a calculation shows that
$$f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big) = -h x^{(3)}(t) - \Big(\frac{3h}{t}+1\Big) x^{(2)}(t) + \Big(\frac{3h}{t^2} - \frac{3}{t}\Big) x^{(1)}(t) + O(h^2).$$
Substituting this expansion into the truncation error
$$L[x(t);h] = \sum_{i=1}^{4}\Big(\alpha_i + \frac{\beta_i h}{t} + \frac{\gamma_i h^2}{t^2}\Big) x\big(t+(2-i)h\big) + h^2 f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big),$$
and choosing the parameters appropriately to eliminate the low-order terms, we obtain the following recurrence relation:
$$x_{n+1} = \frac{10n^2+9n+6}{4n^2+8n} x_n - \frac{4n^2+3}{2n^2+4n} x_{n-1} + \frac{2n-1}{4n+8} x_{n-2} - \frac{n}{2n+4}\, s\, f\!\Big(\frac{2n-3}{n} x_n - \frac{n-3}{n} x_{n-1}\Big). \tag{3.2}$$
We restate this scheme as Algorithm 1.

Algorithm 1: The new method (3.2)
Input: step size $s$. Initial values: $X_2 = X_1 = X_0$.
For the $(k-1)$th iteration ($k \ge 2$), compute
$$Y_k = \frac{10k^2+9k+6}{4k^2+8k} X_k - \frac{4k^2+3}{2k^2+4k} X_{k-1} + \frac{2k-1}{4k+8} X_{k-2}, \qquad Z_k = \frac{2k-3}{k} X_k - \frac{k-3}{k} X_{k-1}, \qquad X_{k+1} = Y_k - \frac{ks}{2k+4} f(Z_k).$$

For the truncation order of this new method, we have the following theorem; the derivation above is presented in detail in Appendix A.4 as the proof of Theorem 3.

Theorem 3. If $f$ has a continuous second derivative, its first and second derivatives are bounded, and $x(t)$ has a continuous fourth derivative, then for fixed $t$ the truncation error of (3.2) satisfies $L[x(t);h] = O(h^4)$.

The convergence of the new method to $x(t)$ can be proved similarly to Theorem 2.

3.2 Advantage of the new method
Since the new method has a truncation error of higher order than the original Nesterov method, its iterates can still converge to the solution of the differential equation (1.2) in regimes where those of the original Nesterov method diverge. In other words, the new method is more stable for large step sizes. We present two numerical results in Figure 1 to confirm this. Quadratic: $F(x) = x^{\mathrm T} A x$ is a strongly convex function, where $x \in \mathbb{R}^2$ and $A$ is a $2 \times 2$ matrix. Linear regression: $F(x) = \sum_{i=1}^{n} (y_i - w_i^{\mathrm T} x)^2$, where $n$ is the number of samples and $(w_i, y_i)$ is the $i$th sample. In these examples, the iterates of the new method converge to the minimum point while those of the original Nesterov method diverge, which confirms that the new method is more stable for large step sizes (a code sketch of this comparison is given below).

3.3 Absolute stability of Nesterov's method and the new method
In this subsection, we explain the better stability of the new method using absolute stability theory. First, recall the scheme of our new method,
$$x_{n+1} = \frac{10n^2+9n+6}{4n^2+8n} x_n - \frac{4n^2+3}{2n^2+4n} x_{n-1} + \frac{2n-1}{4n+8} x_{n-2} - \frac{n}{2n+4}\, s\, f\!\Big(\frac{2n-3}{n} x_n - \frac{n-3}{n} x_{n-1}\Big).$$
We use the linear approximation
$$f\!\Big(x_n + \frac{n-3}{n}(x_n - x_{n-1})\Big) = \nabla F\!\Big(x_n + \frac{n-3}{n}(x_n - x_{n-1})\Big) \approx \nabla^2 F \cdot \Big(x_n + \frac{n-3}{n}(x_n - x_{n-1})\Big),$$
and the characteristic equation of this finite-difference scheme is approximately
$$\lambda^3 - \Big(\frac{10n^2+9n+6}{4n^2+8n} - s\,\nabla^2 F\,\frac{2n^2-3n}{2n^2+4n}\Big)\lambda^2 + \Big(\frac{4n^2+3}{2n^2+4n} - s\,\nabla^2 F\,\frac{n^2-3n}{2n^2+4n}\Big)\lambda - \frac{2n-1}{4n+8} = 0.$$
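The following is a minimal sketch of the Section 3.2 comparison, running Nesterov's method (1.1) and the new scheme (3.2) side by side on a small quadratic with a deliberately large step size; the matrix, step size, and iteration count are illustrative assumptions, since the paper does not list the values used for Figure 1.

```python
import numpy as np

A = np.diag([1.0, 0.25])                 # F(x) = x^T A x, so the Hessian is 2A = diag(2, 0.5)
grad = lambda x: 2.0 * A @ x
x0 = np.array([1.0, 1.0])
s = 1.0                                  # s * lambda_max(Hessian) = 2: large for Nesterov's scheme

def nesterov(num_iters):
    x_prev, x_curr = x0.copy(), x0.copy()
    for n in range(1, num_iters + 1):
        y = x_curr + (n - 3) / n * (x_curr - x_prev)
        x_prev, x_curr = x_curr, y - s * grad(y)
    return x_curr

def new_method(num_iters):
    # Scheme (3.2) / Algorithm 1; it keeps three previous iterates, X_0 = X_1 = X_2.
    X = [x0.copy(), x0.copy(), x0.copy()]
    for k in range(2, num_iters + 1):
        Xk, Xk1, Xk2 = X[-1], X[-2], X[-3]
        Y = ((10*k**2 + 9*k + 6) / (4*k**2 + 8*k) * Xk
             - (4*k**2 + 3) / (2*k**2 + 4*k) * Xk1
             + (2*k - 1) / (4*k + 8) * Xk2)
        Z = (2*k - 3) / k * Xk - (k - 3) / k * Xk1
        X = [Xk1, Xk, Y - k * s / (2*k + 4) * grad(Z)]
    return X[-1]

print("Nesterov   ||x_n|| =", np.linalg.norm(nesterov(200)))    # expected to grow without bound here
print("new method ||x_n|| =", np.linalg.norm(new_method(200)))  # expected to stay bounded and shrink
```

The contrast is consistent with the stability analysis of Section 3.3: here $s\,\nabla^2 F$ reaches 2, which lies outside Nesterov's absolutely stable region $[0, \frac{4}{3}]$ derived below but inside the new method's region $[0, 4]$.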
For large $n$, we can ignore the higher-order terms, and the characteristic equation becomes
$$\lambda^3 - \Big(\frac{5}{2} - s\,\nabla^2 F\Big)\lambda^2 + \Big(2 - \frac{s}{2}\,\nabla^2 F\Big)\lambda - \frac{1}{2} = 0.$$
According to absolute stability theory, the numerical stability of a finite-difference scheme with respect to accumulated roundoff error is equivalent to the requirement that all roots of its characteristic equation lie in the unit circle (Leader, 2004). Noticing that the left-hand side of the equation can be factorized as
$$\Big(\lambda - \frac{1}{2}\Big)\Big(\lambda^2 - (2 - s\,\nabla^2 F)\lambda + 1\Big),$$
the largest modulus of the roots is 1 when $0 \le s\,\nabla^2 F \le 4$, so the absolutely stable region of the new method is $s\,\nabla^2 F \in [0, 4]$. When $s\,\nabla^2 F$ lies in the absolutely stable region, the theory guarantees that the error introduced at each iteration is not magnified as the iteration number increases. To make the analysis more precise, we should account for the change of the scheme across iterations caused by the dependence on $n$. We define the transfer matrix
$$P_n = \begin{pmatrix} \dfrac{10n^2+9n+6}{4n^2+8n} - s\,\nabla^2 F\,\dfrac{2n^2-3n}{2n^2+4n} & -\Big(\dfrac{4n^2+3}{2n^2+4n} - s\,\nabla^2 F\,\dfrac{n^2-3n}{2n^2+4n}\Big) & \dfrac{2n-1}{4n+8} \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$
and $Q_n = P_n P_{n-1}\cdots P_1$. Error analysis shows that if the largest modulus of the eigenvalues of $Q_n$ goes to zero, then the error introduced by the iterations is damped out as the iteration number increases. Figure 2 presents the largest modulus of the eigenvalues of $Q_n$ for different values of $s\,\nabla^2 F$; the experiment shows that this condition is satisfied. Applying the same analysis to Nesterov's method as discussed in (Su et al., 2014), we conclude that the absolutely stable region of Nesterov's method is $[0, \frac{4}{3}]$. The absolutely stable region of the new method is therefore three times as large as that of Nesterov's method, so the new method is more stable, and we can choose larger step sizes to achieve faster convergence.

4 Application to the matrix completion problem: modified FISTA
Our theory and numerical results show that the new method is more stable than the original Nesterov method, so we can choose a larger step size for the new method, and convergence to the optimal solution can be faster than with the original Nesterov method. In this section we apply the new method to the matrix completion problem. We present a new algorithm that can be viewed as a modification of the well-known fast iterative shrinkage-thresholding algorithm (FISTA) (Beck & Teboulle, 2009); the performance of modified FISTA also confirms the advantage of the new method. In the matrix completion problem there is a 'true' low-rank matrix $M$; we are given some entries of $M$ and asked to fill in the missing ones. Various algorithms have been proposed for this problem (Candès & Recht, 2009; Keshavan et al., 2010). Moreover, matrix completion can be transformed into the following unconstrained optimization problem (Mazumder et al., 2010):
$$\min F(X) = \frac{1}{2}\|X_{\mathrm{obs}} - M_{\mathrm{obs}}\|^2 + \lambda \|X\|_*. \tag{4.1}$$
Since $F(X)$ is composed of a smooth term and a non-smooth term, gradient-based algorithms cannot be used directly. Proximal gradient algorithms (Parikh & Boyd, 2014) are widely used for such composite optimization problems, and the fast iterative shrinkage-thresholding algorithm (FISTA) is a successful example; FISTA has also been extended to the matrix completion setting (Ji & Ye, 2009). For convenience, we set $G(X) = \frac{1}{2}\|X_{\mathrm{obs}} - M_{\mathrm{obs}}\|^2$, $H(X) = \lambda\|X\|_*$, and $g(X) = \nabla G(X)$. The idea of FISTA builds on Nesterov's method.
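The proximal subproblems that all of the following algorithms solve, $\arg\min_X \{\frac{1}{2\tau}\|X - W\|^2 + \lambda\|X\|_*\}$, have a closed-form solution given by singular value soft-thresholding (Cai & Candès, 2010). A minimal sketch (the function names are ours):

```python
import numpy as np

def svt(W, tau):
    """Singular value thresholding:
    argmin_X 0.5 * ||X - W||_F^2 + tau * ||X||_*  (soft-threshold the singular values of W by tau)."""
    U, sigma, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(sigma - tau, 0.0)) @ Vt

def grad_G(X, M_obs, mask):
    """g(X) = gradient of G(X) = 0.5 * ||X_obs - M_obs||_F^2; zero on unobserved entries."""
    return (X - M_obs) * mask
```

For instance, the update of Algorithm 2 below can be written as $X_{k+1} = \mathrm{svt}\big(Y_k - \text{step}\cdot g(Z_k),\ \lambda\cdot\text{step}\big)$ with $\text{step} = \frac{ks}{2k+4}$.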
We also apply the accelerated proximal gradient method (Parikh & Boyd, 2014) in our numerical experiments; it is composed of Nesterov's method and proximal gradient descent. These two algorithms are presented in Appendix A.5, and we find that their performances are similar in our experiments. Our contribution is the third method (Algorithm 2), the new method (3.2) combined with the proximal operator, which we call modified FISTA.

Algorithm 2: Modified FISTA
Input: step size $s$. Initial values: $X_2 = X_1 = X_0 \in \mathcal{M}_{100}$.
For the $(k-1)$th iteration ($k \ge 2$), compute
$$Y_k = \frac{10k^2+9k+6}{4k^2+8k} X_k - \frac{4k^2+3}{2k^2+4k} X_{k-1} + \frac{2k-1}{4k+8} X_{k-2}, \qquad Z_k = \frac{2k-3}{k} X_k - \frac{k-3}{k} X_{k-1},$$
$$X_{k+1} = \arg\min_X \Big\{ \frac{1}{2}\cdot\frac{2k+4}{ks}\Big\|X - \Big(Y_k - \frac{ks}{2k+4} g(Z_k)\Big)\Big\|^2 + \lambda\|X\|_* \Big\}.$$

Notice that the minimization subproblems in the iterations of the above three algorithms can be solved directly via singular value decomposition (Cai & Candès, 2010). We run experiments on a simulated data set. We first use fixed step sizes in the three algorithms; the performances are presented in Figure 3. We find empirically that, for all methods, convergence is faster when the step size is larger, so we choose the largest step sizes for all methods to compare their fastest convergence speeds. Through experiments, we find that the largest step size for which modified FISTA converges is 4.1 (accurate to one decimal place), while those for the first two algorithms are both 1.3. We also compare the performance of the three methods with step sizes reduced from the largest in equal proportion. When the step sizes are chosen to be the largest, or reduced from the largest in equal proportion (80%, 50%, 10%), the convergence of modified FISTA is faster than that of the other two methods. We also combine the three methods with backtracking (Beck & Teboulle, 2009) to choose step sizes automatically. We present modified FISTA with backtracking as Algorithm 3; the other two algorithms are modified similarly. The performance of the three algorithms with backtracking on the above data set is presented in Figure 4: the convergence of modified FISTA is faster than that of the other two methods, and the final step size of modified FISTA is larger.

5 Discussion
In this paper we prove that the iterates of Nesterov's accelerated method converge to the solution of the derived differential equation as the step size goes to zero. We present a new accelerated method that makes this convergence faster. We use numerical results to show that the new method is more stable, especially for large step sizes, and explain this using the order of the truncation error. We then apply the new method to the matrix completion problem and present a new algorithm which we call modified FISTA. Numerical experiments show that modified FISTA performs better than existing algorithms based on Nesterov's acceleration because it can work with larger step sizes. In future work, we will also combine our new method with stochastic gradient-based algorithms and apply it to deep networks.

Algorithm 3: Modified FISTA with backtracking
Input: some $\beta < 1$. Initial values: $X_2 = X_1 = X_0 \in \mathcal{M}_{100}$, step size $s_2$.
For the $(k-1)$th iteration ($k \ge 2$), compute
$$Y_k = \frac{10k^2+9k+6}{4k^2+8k} X_k - \frac{4k^2+3}{2k^2+4k} X_{k-1} + \frac{2k-1}{4k+8} X_{k-2}, \qquad Z_k = \frac{2k-3}{k} X_k - \frac{k-3}{k} X_{k-1}.$$
Find the smallest positive integer $i_{k+1}$ such that, with $s = \beta^{i_{k+1}} s_k$,
$$F(\tilde{X}) < F(Y_k) + \big\langle \tilde{X} - Y_k,\ g(Z_k) \big\rangle + \frac{1}{2}\cdot\frac{2k+4}{ks}\|\tilde{X} - Y_k\|^2,$$
where
$$\tilde{X} = \arg\min_X \Big\{ \frac{1}{2}\cdot\frac{2k+4}{ks}\Big\|X - \Big(Y_k - \frac{ks}{2k+4} g(Z_k)\Big)\Big\|^2 + \lambda\|X\|_* \Big\}.$$
Set $s_{k+1} = \beta^{i_{k+1}} s_k$ and compute $X_{k+1} = \tilde{X}$.
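A minimal Python sketch of Algorithm 2 (modified FISTA with a fixed step size) follows; for brevity it omits the backtracking wrapper of Algorithm 3, and the helper names and the way the data are passed in are our own choices rather than the paper's.

```python
import numpy as np

def svt(W, tau):
    # Singular value soft-thresholding: closed-form solution of the proximal subproblem.
    U, sig, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(sig - tau, 0.0)) @ Vt

def modified_fista(M_obs, mask, lam, s, num_iters):
    """Algorithm 2 (modified FISTA) with a fixed step size s.
    M_obs holds the observed entries (zeros elsewhere); mask is the 0/1 observation pattern."""
    n_rows, n_cols = M_obs.shape
    X = [np.zeros((n_rows, n_cols)) for _ in range(3)]    # X_0 = X_1 = X_2 = 0
    for k in range(2, num_iters + 1):
        Xk, Xk1, Xk2 = X[-1], X[-2], X[-3]
        Y = ((10*k**2 + 9*k + 6) / (4*k**2 + 8*k) * Xk
             - (4*k**2 + 3) / (2*k**2 + 4*k) * Xk1
             + (2*k - 1) / (4*k + 8) * Xk2)
        Z = (2*k - 3) / k * Xk - (k - 3) / k * Xk1
        g = (Z - M_obs) * mask                             # g(Z) = grad G(Z)
        step = k * s / (2*k + 4)
        X.append(svt(Y - step * g, lam * step))            # prox step = SVT with threshold lam * step
        X.pop(0)
    return X[-1]
```

With simulated data generated as in Appendix A.6, this can be called as `modified_fista(M_obs, mask, lam=0.005, s=4.0, num_iters=500)`; the admissible step size should be verified empirically, as the paper does.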
Our work shows that, for an accelerated gradient method, the rate at which it converges to the derived differential equation is possibly related to its properties as an optimization algorithm. We believe this suggests that more consideration should be given to the corresponding differential equations when studying optimization algorithms.

A Appendix

A.1 Proof of Theorem 1

Theorem 1. Assume $f$ satisfies the $L$-Lipschitz condition and the solution $x(t)$ of the derived differential equation (1.2) has a continuous third derivative. Then, for fixed time $t$, the truncation error (2.2) satisfies
$$L[x(t);h] = O(h^3). \tag{A.1}$$

Proof. Notice that $x(t-h) = x(t) + O(h)$. Substituting this into the last term of $L[x(t);h]$ gives
$$f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big) = f\!\Big(x(t) + \frac{t-3h}{t}\cdot O(h)\Big).$$
Since $f$ satisfies the $L$-Lipschitz condition, we have
$$f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big) = f(x(t)) + O(h) = -\ddot{x}(t) - \frac{3}{t}\dot{x}(t) + O(h),$$
where the second equality follows by substituting the differential equation (1.2). We then expand the first and third terms of $L[x(t);h]$ to third order:
$$x(t+h) = x(t) + h x^{(1)}(t) + \frac{h^2}{2} x^{(2)}(t) + O(h^3), \qquad x(t-h) = x(t) - h x^{(1)}(t) + \frac{h^2}{2} x^{(2)}(t) + O(h^3).$$
Substituting these three equations into (2.2) yields $L[x(t);h] = O(h^3)$.

Remark 1. (A.1) can also be written as $|L[x(t);h]| \le M_1 h^3$, where $M_1$ depends on $\sup_{s\le t}|x^{(1)}(s)|$ and $\sup_{s\le t}|x^{(3)}(s)|$.

Remark 2. Theorem 1 deals with the problem for a fixed time $t$. To complete the proof of convergence, we have to consider the situation $t_n = nh$, where $n \ge 1$ is fixed. We set a fixed time $t_0$ and assume that $t_n = nh < t_0$. Since $x(t)$ has a continuous third derivative, $x(t)$ and its first to third derivatives are bounded on $[0, t_0]$. We replace the time $t$ in the above proof by $t_n$ and expand the terms of (2.2). Now the term $-\frac{3h^3}{2t_n} x^{(2)}(t_n)$ obtained from the expansion of $x(t_{n-1})$ cannot be viewed as $O(h^3)$, but there exists $M_2 > 0$ such that
$$\Big| -\frac{3h^3}{2t_n} x^{(2)}(t_n) \Big| \le \frac{M_2 h^2}{n}.$$
As a consequence, we have
$$|L[x(t_n);h]| \le M_1 h^3 + M_2 \frac{h^2}{n}, \tag{A.2}$$
where $M_1$ and $M_2$ depend on $t_0$.

A.2 Two lemmas for Theorem 2

For the proof of Theorem 2, we need the following two lemmas.

Lemma 1 (Holte, 2009). For constants $\alpha, \beta > 0$ and a positive sequence $\{\eta_n\}_{n\ge 0}$ satisfying
$$\eta_n \le \beta + \alpha \sum_{i=0}^{n-1} \eta_i, \quad \forall n > 0,$$
the following inequality holds:
$$\eta_n \le e^{\alpha n}(\beta + \alpha \eta_0).$$
This lemma is a classical result and is referred to as the discrete Gronwall inequality.

Lemma 2. Define matrices $C_n$ and $D_{n,l}$ as
$$C_n = \begin{pmatrix} \frac{2n-1}{n+1} & -\frac{n-2}{n+1} \\ 1 & 0 \end{pmatrix}, \qquad D_{n,l} = C_n C_{n-1} \cdots C_{n-l+1},$$
where $n \ge 0$ and $0 < l \le n+1$; in addition, set $D_{n,0} = I_2$. Then there exist positive constants $M$, $M_3$ such that for all $n$ the following two inequalities hold, where the matrix norm is the 2-norm:
$$\sup_{0 \le l \le n+1} \|D_{n,l}\| \le M n, \qquad \|D_{n,n+1}\| \le M_3. \tag{A.3}$$

Proof. Since $C_2 = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}$, we notice that for $n \ge 2$,
$$D_{n,n-1} = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}, \qquad D_{n,n} = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix}, \qquad D_{n,n+1} = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix},$$
independently of the value of $n$. So there clearly exists an $M_3$ for which (A.3) holds, and an $M_4 > 0$ such that for all $n < 2$, or $n \ge 2$ with $l > n-2$ or $l = 0$,
$$\|D_{n,l}\| \le M_4 n. \tag{A.4}$$
We then consider the case $n \ge 2$, $0 < l \le n-2$. Notice that
$$C_n = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & \frac{n-2}{n+1} \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{-1}.$$
For convenience, write $P = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$. Assume we have already obtained
$$D_{n,l} = P \begin{pmatrix} 1 & a_{n,l} \\ 0 & b_{n,l} \end{pmatrix} P^{-1} \quad \text{with } 0 < a_{n,l} \le l,\ 0 < b_{n,l} \le 1.$$
Then, since $D_{n,l+1} = D_{n,l} C_{n-l}$ and $0 \le \frac{n-l-2}{n-l+1} < 1$, $D_{n,l+1}$ has the same form,
$$D_{n,l+1} = P \begin{pmatrix} 1 & a_{n,l+1} \\ 0 & b_{n,l+1} \end{pmatrix} P^{-1} \quad \text{with } 0 < a_{n,l+1} \le l+1,\ 0 < b_{n,l+1} \le 1.$$
Then, for fixed $n$, inducting from $l = 1$, we get
$$D_{n,l} = P \tilde{D}_{n,l} P^{-1} \triangleq P \begin{pmatrix} 1 & a_{n,l} \\ 0 & b_{n,l} \end{pmatrix} P^{-1}, \quad \text{with } 0 < a_{n,l} \le l \le n,\ 0 < b_{n,l} \le 1, \tag{A.5}$$
for all $n \ge 2$, $0 < l \le n-2$. We can then estimate $\|D_{n,l}\|$. Notice that
$$\tilde{D}_{n,l}\tilde{D}_{n,l}^{\mathrm T} = \begin{pmatrix} 1 + a_{n,l}^2 & a_{n,l} b_{n,l} \\ a_{n,l} b_{n,l} & b_{n,l}^2 \end{pmatrix}.$$
The eigenvalues of this matrix are
$$\lambda_{1,2} = \frac{1 + a_{n,l}^2 + b_{n,l}^2 \pm \sqrt{(1 + a_{n,l}^2 + b_{n,l}^2)^2 - 4 b_{n,l}^2}}{2}.$$
Combining this representation with (A.5), we get the estimate
$$\|\tilde{D}_{n,l}\| = \sqrt{\max\{|\lambda_1|, |\lambda_2|\}} \le \sqrt{1 + a_{n,l}^2 + b_{n,l}^2} \le n + 2.$$
So there exists $M_5 > 0$ such that for all $n \ge 2$, $0 < l \le n-2$, the inequality
$$\|D_{n,l}\| \le M_5 n \tag{A.6}$$
holds. Combining (A.4) with (A.6) finishes the proof.

A.3 Proof of Theorem 2

Theorem 2. Under the conditions of Theorem 1, for fixed time $t$, $x_{t/h}$ converges to $x(t)$ as $h$ goes to zero at a rate of $O(h \ln\frac{1}{h})$, provided $x_0 = x(0)$ and $x_1 = x(h)$.

Proof. In this proof, we first compute the error caused by a single iteration, which can be divided into an accumulation term and a truncation term. Then we use the estimate given by Theorem 1 and apply the discrete Gronwall inequality to prove convergence. Recall the recurrence relation
$$x_{n+1} = x_n + \frac{n-3}{n}(x_n - x_{n-1}) - h^2 f\!\Big(x_n + \frac{n-3}{n}(x_n - x_{n-1})\Big)$$
and the definition of the truncation error,
$$x(t_{n+1}) = x(t_n) + \frac{n-3}{n}\big(x(t_n) - x(t_{n-1})\big) - h^2 f\!\Big(x(t_n) + \frac{n-3}{n}\big(x(t_n) - x(t_{n-1})\big)\Big) + L[x(t_n);h],$$
where $t_n = nh$. Subtracting the two equations and introducing the overall error $e_n = x(t_n) - x_n$, we have
$$e_{n+1} = \frac{2n-3}{n} e_n - \frac{n-3}{n} e_{n-1} - h^2 b_{n-1} + L[x(t_n);h],$$
which can also be written as
$$e_{n+2} - \frac{2n-1}{n+1} e_{n+1} + \frac{n-2}{n+1} e_n = -h^2 b_n + L[x(t_{n+1});h], \tag{A.7}$$
where
$$b_n = f\!\Big(\frac{2n-1}{n+1} x_{n+1} - \frac{n-2}{n+1} x_n\Big) - f\!\Big(\frac{2n-1}{n+1} x(t_{n+1}) - \frac{n-2}{n+1} x(t_n)\Big). \tag{A.8}$$
We will also use the notation
$$b_n^* = -\frac{e_{n+2} - \frac{2n-1}{n+1} e_{n+1} + \frac{n-2}{n+1} e_n}{h^2}.$$
We then rewrite (A.7) in a form convenient for recursion. Set
$$E_n = \begin{pmatrix} e_{n+1} \\ e_n \end{pmatrix}, \qquad C_n = \begin{pmatrix} \frac{2n-1}{n+1} & -\frac{n-2}{n+1} \\ 1 & 0 \end{pmatrix}, \qquad B_n = \begin{pmatrix} -h^2 b_n^* \\ 0 \end{pmatrix},$$
so that (A.7) can be written as $E_{n+1} = C_n E_n + B_n$. By recursion, we have
$$E_n = C_{n-1}\cdots C_0 E_0 + \sum_{l=1}^{n} C_{n-1}\cdots C_{n-l+1} B_{n-l}.$$
With the notation introduced in Lemma 2, this equation can be written as
$$E_n = D_{n-1,n} E_0 + \sum_{l=1}^{n} D_{n-1,l-1} B_{n-l}. \tag{A.9}$$
We now estimate $\|B_n\|$. Since $f$ satisfies the $L$-Lipschitz condition, from (A.8) we have
$$|b_n| \le L\Big(\frac{2n-1}{n+1}|e_{n+1}| + \frac{n-2}{n+1}|e_n|\Big) \le L\big(2|e_{n+1}| + |e_n|\big) \le 3L\|E_n\|,$$
and hence
$$\|B_n\| \le 3h^2 L \|E_n\| + \big|L[x(t_{n+1});h]\big|. \tag{A.10}$$
Taking norms on both sides of (A.9) and substituting (A.10) and the conclusion of Lemma 2, we obtain the estimate
$$\|E_n\| \le M_3\|E_0\| + M(n-1)\sum_{l=0}^{n-1}\Big(3h^2 L\|E_l\| + \big|L[x(t_{l+1});h]\big|\Big) \le M_3\|E_0\| + 3Mnh^2 L\sum_{l=0}^{n-1}\|E_l\| + Mn\sum_{l=0}^{n-1}\big|L[x(t_{l+1});h]\big|. \tag{A.11}$$
We now deal with the truncation errors. Recall (A.2) from the remark after Theorem 1,
$$|L[x(t_l);h]| \le M_1 h^3 + M_2\frac{h^2}{l}.$$
Summing, we obtain
$$\sum_{l=0}^{n-1}\big|L[x(t_{l+1});h]\big| \le n M_1 h^3 + M_2 h^2 \sum_{l=0}^{n-1}\frac{1}{l+1}. \tag{A.12}$$
Using the classical inequality $\sum_{i=1}^{n}\frac{1}{i} \le \ln n + M_e$, where $M_e$ is a positive constant, and substituting into (A.12), we have
$$\sum_{l=0}^{n-1}\big|L[x(t_{l+1});h]\big| \le n M_1 h^3 + M_2 h^2(\ln n + M_e).$$
Substituting this inequality into (A.11), we get the following control of $\|E_n\|$:
$$\|E_n\| \le M_3\|E_0\| + 3Mnh^2 L\sum_{l=0}^{n-1}\|E_l\| + M M_1 n^2 h^3 + M M_2 M_e n h^2 + M M_2 n h^2 \ln n.$$
Using the discrete Gronwall inequality, we have
$$\|E_n\| \le e^{3Mn^2 h^2 L}\Big(M_3\|E_0\| + M M_1 n^2 h^3 + M M_2 M_e n h^2 + M M_2 n h^2 \ln n + 3Mnh^2 L\|E_0\|\Big).$$
Then, for fixed $t$, we choose $n = t/h$ to get
$$\|E_{t/h}\| \le e^{3Mt^2 L}\Big((M_3 + 3MthL)\|E_0\| + (M M_1 t^2 + M M_2 M_e t)\, h + M M_2 t\, h\ln\frac{t}{h}\Big).$$
Since $\lim_{h\to 0} h\ln\frac{t}{h} = 0$, if $E_0 = 0$ then the vector form of the overall error, $E_{t/h}$, satisfies $\lim_{h\to 0}\|E_{t/h}\| = 0$.

A.4 Proof of Theorem 3

Theorem 3.
If $f$ has a continuous second derivative, its first and second derivatives are bounded, and $x(t)$ has a continuous fourth derivative, then for fixed $t$ the truncation error of (3.2) satisfies $L[x(t);h] = O(h^4)$.

Proof. Recall the proof of Theorem 1. We now expand $x(t-h)$ to first order,
$$x(t-h) = x(t) - h x^{(1)}(t) + O(h^2).$$
Then we have
$$f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big) = f\!\Big(x(t) + \Big(1 - \frac{3h}{t}\Big)\big(h x^{(1)}(t) + O(h^2)\big)\Big) = f\big(x(t) + h x^{(1)}(t) + O(h^2)\big) = f\big(x(t) + h x^{(1)}(t)\big) + O(h^2).$$
We now expand $f$:
$$f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big) = f(x(t)) + h x^{(1)}(t) f^{(1)}(x(t)) + O(h^2).$$
For this we need $f$ to have a continuous and bounded second derivative. Taking the derivative on both sides of the differential equation $\ddot{x} + \frac{3}{t}\dot{x} + f(x) = 0$, we have
$$\big(f(x(t))\big)' = -x^{(3)}(t) - \frac{3}{t} x^{(2)}(t) + \frac{3}{t^2} x^{(1)}(t).$$
So
$$f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big) = -h x^{(3)}(t) - \Big(\frac{3h}{t}+1\Big) x^{(2)}(t) + \Big(\frac{3h}{t^2} - \frac{3}{t}\Big) x^{(1)}(t) + O(h^2). \tag{A.13}$$
Expanding $x(t+h)$, $x(t-h)$, $x(t-2h)$ to third order, we have
$$\Big(\alpha_1 + \frac{\beta_1 h}{t} + \frac{\gamma_1 h^2}{t^2}\Big) x(t+h) = \Big(\alpha_1 + \frac{\beta_1 h}{t} + \frac{\gamma_1 h^2}{t^2}\Big)\Big[x(t) + h x^{(1)}(t) + \frac{h^2}{2}x^{(2)}(t) + \frac{h^3}{6}x^{(3)}(t) + O(h^4)\Big],$$
$$\Big(\alpha_3 + \frac{\beta_3 h}{t} + \frac{\gamma_3 h^2}{t^2}\Big) x(t-h) = \Big(\alpha_3 + \frac{\beta_3 h}{t} + \frac{\gamma_3 h^2}{t^2}\Big)\Big[x(t) - h x^{(1)}(t) + \frac{h^2}{2}x^{(2)}(t) - \frac{h^3}{6}x^{(3)}(t) + O(h^4)\Big],$$
$$\Big(\alpha_4 + \frac{\beta_4 h}{t} + \frac{\gamma_4 h^2}{t^2}\Big) x(t-2h) = \Big(\alpha_4 + \frac{\beta_4 h}{t} + \frac{\gamma_4 h^2}{t^2}\Big)\Big[x(t) - 2h x^{(1)}(t) + 2h^2 x^{(2)}(t) - \frac{4h^3}{3}x^{(3)}(t) + O(h^4)\Big].$$
Substituting these three equations and (A.13) into the truncation error of the recurrence relation (3.1),
$$L[x(t);h] = \sum_{i=1}^{4}\Big(\alpha_i + \frac{\beta_i h}{t} + \frac{\gamma_i h^2}{t^2}\Big) x\big(t+(2-i)h\big) + h^2 f\!\Big(x(t) + \frac{t-3h}{t}\big(x(t)-x(t-h)\big)\Big),$$
a straightforward calculation shows that all terms of order lower than four are eliminated if the coefficients are chosen according to
$$\alpha_1 = 2,\quad \alpha_2 = -5,\quad \alpha_3 = 4,\quad \alpha_4 = -1; \qquad \beta_1 = \tfrac{9}{2} - k,\quad \beta_2 = -6 + 3k,\quad \beta_3 = \tfrac{3}{2} - 3k,\quad \beta_4 = k;$$
$$\gamma_1 = m_1,\quad \gamma_2 = -\frac{3m_1 + m_2 + 3}{2},\quad \gamma_3 = m_2,\quad \gamma_4 = \frac{m_1 - m_2 + 3}{2},$$
where $k$, $m_1$, $m_2$ can be chosen arbitrarily. Notice that the coefficients of the recurrence relation (3.2) satisfy the above equations.

A.5 Algorithms

Algorithm 4: FISTA
Input: step size $s$. Initial values: $Y_1 = X_0 \in \mathcal{M}_{100}$, $t_1 = 1$.
For the $k$th iteration ($k \ge 1$), compute
$$X_k = \arg\min_X \Big\{ \frac{1}{2s}\|X - (Y_k - s\, g(Y_k))\|^2 + \lambda\|X\|_* \Big\}, \qquad t_{k+1} = \frac{1 + \sqrt{1 + 4t_k^2}}{2}, \qquad Y_{k+1} = X_k + \frac{t_k - 1}{t_{k+1}}(X_k - X_{k-1}).$$

Algorithm 5: Accelerated proximal gradient method
Input: step size $s$. Initial values: $X_1 = X_0 \in \mathcal{M}_{100}$.
For the $k$th iteration ($k \ge 1$), compute
$$Y_k = X_{k-1} + \frac{k-3}{k}(X_{k-1} - X_{k-2}), \qquad X_{k+1} = \arg\min_X \Big\{ \frac{1}{2s}\|X - (Y_k - s\, g(Y_k))\|^2 + \lambda\|X\|_* \Big\}.$$

A.6 Details of the numerical experiments in Section 4

Here we provide some details of our numerical experiments in Section 4. Our experiments are carried out on a simulated data set. First, we generate the 'true' low-rank matrix $M$. To do this, we generate a random matrix $M_0$ whose entries are independent and uniformly distributed on $(0, 20)$, compute its singular value decomposition $M_0 = U\Sigma V^{\mathrm T}$, and set $M = U\Sigma_0 V^{\mathrm T}$, where $\Sigma_0$ is a diagonal matrix with only three nonzero diagonal elements. It is not difficult to verify that $M$ has rank 3. Second, we generate the observation set: for every row of $M$, we randomly choose ten entries to be observed, so 10% of the entries are observed in total. After the data generation step, we apply the algorithms described above (the accelerated proximal gradient method, FISTA, and our modified FISTA), with fixed step sizes and with backtracking, to this data set. The parameter of the loss function (4.1) is $\lambda = 0.005$. As the initial point, we simply choose the zero matrix (every entry equal to zero). For backtracking, we set the initial step size to 10 and the decay factor to $\beta = 0.1$.
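A sketch of the data generation procedure described above (matrix size 100 x 100, rank 3, ten observed entries per row); the random seed is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-3 ground truth M, built as in Appendix A.6: random M0 with entries ~ U(0, 20),
# then keep only the three largest singular values.
M0 = rng.uniform(0.0, 20.0, size=(100, 100))
U, sigma, Vt = np.linalg.svd(M0, full_matrices=False)
sigma[3:] = 0.0
M = (U * sigma) @ Vt                      # equivalent to U @ diag(sigma) @ Vt, so rank(M) = 3

# Observation pattern: ten randomly chosen entries per row (10% observed overall).
mask = np.zeros((100, 100))
for i in range(100):
    mask[i, rng.choice(100, size=10, replace=False)] = 1.0
M_obs = M * mask

# M_obs and mask can now be passed to the modified FISTA sketch given in Section 4,
# with lam = 0.005 and the zero matrix as the initial point, as described above.
```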
1. What is the focus of the paper regarding ODE discretization and acceleration optimization?
2. What are the strengths of the paper, particularly in terms of its novel truncation error analysis?
3. What are the weaknesses of the paper, especially regarding its claims of improved convergence rate and stability?
4. Do you have any concerns about the numerical evidence provided for the matrix completion problem?
5. How could the paper be improved, both theoretically and practically, to strengthen its contributions?
Review
Review: This paper refines the truncation error analysis for discretizing the ODE used to obtain an accelerated optimization method; the truncation results include a higher-order term. Building on this analysis, the authors propose a new method that is claimed to be more stable for large step sizes and to converge faster. Numerical evidence on a matrix completion problem is provided.

Pros: The truncation error analysis is new. Overall, the paper is clearly written.

Cons: The biggest concern I have with the paper is that it is unclear to me whether the convergence rate is really improved. From my understanding, the truncation error is different from the convergence rate. What does Theorem 3 really imply here? It seems to me that Theorem 3 does not guarantee an improvement in the convergence rate. A rigorous quantification of the convergence rate needs to be provided to justify the claim that "the proposed method converges faster." Even the claim on "stability" is not well justified: two simple examples do not provide much evidence here. The paper also does not provide enough details for the numerical results to be easily reproduced.

Suggestions for improvement: It would significantly strengthen the paper if the authors provided more theoretical justification for the claim that their proposed method is faster and more stable. It is also important to clarify the true implications of the truncation error analysis for the algorithm's performance.
1. What is the focus of the paper regarding accelerated methods for ordinary differential equations?
2. What are the strengths and weaknesses of the proposed approach compared to prior works like Nesterov's method?
3. Do you have any concerns or questions about the numerical experiments presented in the paper?
4. How does the reviewer assess the significance and novelty of the paper's contribution?
5. Are there any minor comments or suggestions for improvement that the reviewer would like to mention?
Review
Review summary: This paper proposes an accelerated method whose truncation error with respect to the ordinary differential equation $\ddot{x} + \frac{3}{t}\dot{x} + f(x) = 0$, obtained from Nesterov's accelerated method by (Su et al., 2014), is of order $O(h^4)$, while Nesterov's method has $O(h^3)$ error. This implies that the iterates of the proposed method converge to the trajectory of the differential equation faster than those of Nesterov's method. Two toy numerical experiments illustrate this phenomenon for a certain large step size. A matrix completion experiment is further included.

Strong point: Finding a method that has a higher-order truncation error seems new and interesting, and the numerical experiments suggest that such a method performs better.

Weak points: There is no theoretical guarantee on the convergence (rate) to a solution of an optimization problem. The reason why we care about "large" step sizes seems insufficiently motivated, given that Nesterov's method is stable for normal step sizes (e.g., $1/L$). It is not clear when a step size is considered large, other than by exhaustive search. Unlike for Nesterov's method, the interval of step sizes that guarantees convergence to a solution is not known for the proposed method. The numerical experiments are limited.

Minor comments: page 1: $\|F(x_n) - F(x^*)\|$ should be $F(x_n) - F(x^*)$; what is the Lipschitz constant for the experiment on page 6? A figure of the two-dimensional toy example could help better illustrate the effect of the truncation error.
Title A new accelerated gradient method inspired by continuous-time perspective Abstract Nesterov’s accelerated method are widely used in problems with machine learning background including deep learning. To give more insight about the acceleration phenomenon, an ordinary differential equation was obtained from Nesterov’s accelerated method by taking step sizes approaching zero, and the relationship between Nesterov’s method and the differential equation is still of research interest. In this work, we give the precise order of the iterations of Nesterov’s accelerated method converging to the solution of derived differential equation as step sizes go to zero. We then present a new accelerated method with higher order. The new method is more stable than ordinary method for large step size and converges faster. We further apply the new method to matrix completion problem and show its better performance through numerical experiments. 1 Introduction Optimization is a core component of statistic and machine learning problems. Recently, gradientbased algorithms are widely used in such optimization problems due to its simplicity and efficiency for large-scale situations. For solving convex optimization problem min x∈Rd F (x), where F (x) is convex and sufficiently smooth, a classical first-order method is gradient descent. We assume that f(x) = ∇F (x) satisfies L-Lipschitz condition, that is, there exists constant L such that ∥f(x)− f(y)∥ ≤ L∥x− y∥, ∀x, y. Under these conditions, gradient descent achieves a convergence rate of O(n−1), i.e., ∥F (xn) − F (x∗)∥ decreases to zero at a rate of O(n−1), where xn denotes the nth iteration and x∗ denotes the minimum point of F (x) in Rd. Nesterov’s accelerated method (Nesterov, 1983) is a more efficient first-order algorithm than gradient descent, of which we will use the following form: starting with x0 = x1, yn = xn + n− 3 n (xn − xn−1), xn+1 = yn − sf(yn) (1.1) for n ≥ 1. It is shown that under abovementioned conditions, Nesterov’s accelerated method converges at a rate of O(n−2). Accelerated gradient method has been successful in training deep and recurrent neural networks (Sutskever et al., 2013) and is widely used in problems with machine learning background to avoid sophisticated second-order methods (Cotter et al., 2011; Hu et al., 2009; Ji & Ye, 2009). To provide more theorical understanding, an important research topic of Nesterov’s accelerated method is to find an explanation of the acceleration. On this topic, Nesterov’s method was studied via a continuous-time perspective (Su et al., 2014). They considered a curve x(t), introduced the ansatz xn ≈ x(n √ s) and substituted it to (1.1). Letting s → 0, they obtained the following differential equation. ẍ+ 3 t ẋ+ f(x) = 0. (1.2) The differential equation was used as a tool for analyzing and generalizing Nesterov’s scheme. Furthermore, this idea has been studied from different directions. A class of accelerated methods have been generated in continuous-time (Wibisono et al., 2016). ODE (1.2) can also be discretized directly using Runge-Kutta method to achieve acceleration (Zhang et al., 2018). Although many results have been achieved, the process of obtaining the differential equation (1.2) has not been rigorous, and the method is still time-consuming for large-scale problems. 
In this work, we give the precise order of the iterations of Nesterov’s accelerated method converging to solution of the differential equation (1.2) with initial conditions x(0) = x0, ẋ(0) = 0 (1.3) as step size s goes to zero. Inspired from this perspective, we present a new accelerated method to make this convergence faster. As we expected, iterations of the new method are closer to the solution x(t) of differential equation (1.2) than original Nesterov’s method. Moreover, we find the new method is more stable than original Nesterov’s method when step size is large. Based on abovementioned observations, we try to take advantage of the new method in more practical problems. We apply the new method to matrix completion problem. We combine the new method with proximal operator (Parikh & Boyd, 2014) into a new algorithm, which we call modified FISTA. We find that the new method performs better than FISTA (Beck & Teboulle, 2009) and acclerated proximal gradient method (Parikh & Boyd, 2014) because it can work with larger step sizes. This paper is organized as follows. In section 2, we prove that iterations of Nesterov’s accelerated method converge to solution of the differential equation (1.2). In section 3, we present a new method to make the convergence faster and show its better stablity through two simple examples. In section 4, we apply the new method to matrix completion problem. 2 A strict analysis of the relation between Nesterov’s method and its continuous-time limit We refer to x(t) as the solution of differential equation (1.2) with initial conditions (1.3). Existance and uniqueness of such solutions have been proved (Su et al., 2014). In this section, We give the order of the iterations of Nesterov’s accelerated method converging to x(t) as step sizes go to zero. For convenience, we substitute the first equation in Nesterov’s method (1.1) to the second one to get xn+1 = xn + n− 3 n (xn − xn−1)− s · f ( xn + n− 3 n (xn − xn−1) ) . We write s = h2 and rewrite the above recurrence relation as xn+1 = xn + n− 3 n (xn − xn−1)− h2 · f ( xn + n− 3 n (xn − xn−1) ) . (2.1) Inspired by the ansatz xn ≈ x(n √ s) (Su et al., 2014), we consider the convergence between xn and x(nh). More precisely, we show that for fixed time t, xn converges to x(t) as h goes to zero, where n = th . 2.1 Truncation error Firstly, we consider the following ‘truncation error’: L[x(t);h] =x(t+ h)− 2t− 3h t x(t) + t− 3h t x(t− h)+ h2f ( x(t) + t− 3h t (x(t)− x(t− h)) ) . (2.2) (2.2) is obtained from (2.1) by replacing xn+1, xn, xn−1 with x(t+h), x(t), x(t−h) and substituting the relation n = th . Our first result is the order of truncation error L[x(t);h]. Theorem 1. Assume f satisfies L-Lipschitz condition, and solution x(t) of the derived differential equation (1.2) has a continuous third derivative. For fixed time t, the truncation error (2.2) satisfies L[x(t);h] = O(h3). Theorem 1 shows the size of error caused by a single iteration when the starting point is just on x(t). Then we have to add up these errors to prove the convergence proporty we need. 2.2 Convergence theorem We now come to the convergence theorem. In this theorem, we give the precise order of the iterations of Nesterov’s method converging to solution of the derived differential equation. Theorem 2. Under conditions in Theorem 1, for fixed time t, xt/h converges to x(t) as h goes to zero at a rate of O(h ln 1h ) if x0 = x(0) and x1 = x(h). Theorem 2 coincides with derivation of ODE (1.2) (Su et al., 2014). 
3 New accelerated method 3.1 Derivation of the new method and analysis of truncation error Inspired from the continuous-time perspective and our proof of the convergence from iterations of Nesterov’s method to its continuous-time limit, we present a new method to make this convergence faster. Precisely, the new method has a higher truncation order. We need one more step in our scheme than in Nesterov’s method to achieve higher truncation order in the following analysis, so we consider a recurrence relation with form 4∑ i=1 ( αi + βi n + γi n2 ) xn+2−i = −sf ( xn + n− 3 n (xn − xn−1) ) , (3.1) where {αi}, {βi} and {γi} are to be determined. Now we expand x(t− h) to first order. Calculation shows that f ( x(t) + t− 3h t (x(t)− x(t− h)) ) =− hx(3)(t)− ( 3h t + 1 ) x(2)(t) + ( 3h t2 − 3 t ) x(1)(t) +O(h2). Substitute this expansion to truncation error L[x(t);h] = 4∑ i=1 ( αi + βih t + γih 2 t2 ) x(t+ (2− i)h) + h2f ( x(t) + t− 3h t (x(t)− x(t− h)) ) , and choose parameters appropriately to eliminate low-order terms, we get the following recurrence relation xn+1 = 10n2 + 9n+ 6 4n2 + 8n xn − 4n2 + 3 2n2 + 4n xn−1 + 2n− 1 4n+ 8 xn−2 − n 2n+ 4 sf ( 2n− 3 n xn − n− 3 n xn−1 ) . (3.2) Here we rewrite this scheme as Algorithm 1. Algorithm 1 The new method (3.2) Input: step size s Initial value: X2 = X1 = X0. (k− 1)th iteration (k ≥ 2). Compute Yk = 10k2 + 9k + 6 4k2 + 8k Xk − 4k2 + 3 2k2 + 4k Xk−1 + 2k − 1 4k + 8 Xk−2, Zk = 2k − 3 k Xk − k − 3 k Xk−1, Xk+1 = X − ( Yk − ks 2k + 4 f(Zk) ) . For truncation order of this new method, we have the following theorem. The abovementioned procedure is presented in Appendix A.4 detailedly, as proof of Theorem 3. Theorem 3. If f has continuous second order derivative, the first and second derivative are bounded, and x(t) has continuous fourth derivative, then for fixed t, truncation error of (3.2) satisfies L[x(tn);h] = O(h4). The convergence of the new method and x(t) can be proved similar to Theorem 2. 3.2 Advantage of the new method Since the new method has a truncation error of higher order than original Neaterov’s method, the iterations of the new method converge to the differential equation (1.2) when those of original Nesterov’s method diverge. In another word, the new method is more stable for large step size. We present two numerical results in Figure 1 to confirm it. Quadratic. F (x) = xTAx is a strongly convex function, in which x ∈ R2 and A is a 2× 2 matrix. Linear regression. F (x) = n∑ i=1 (yi −wTi x)2, where n is the number of samples and (wi, yi) is the ith sample. In these examples, iterations of the new method converge to the minimum point, while those of original Nesterov’s method diverge, which confirms that the new method is more stable for large step size. 3.3 Absolute stability of Nesterov’s method and the new method In this subsection, we explain the better stability of the new method with absolute stability theory. Firstly, recall the scheme of our new method xn+1 = 10n2 + 9n+ 6 4n2 + 8n xn − 4n2 + 3 2n2 + 4n xn−1 + 2n− 1 4n+ 8 xn−2 − n 2n+ 4 sf ( 2n− 3 n xn − n− 3 n xn−1 ) . We use linear approximation f ( xn + n− 3 n (xn − xn−1) ) = ∇F ( xn + n− 3 n (xn − xn−1) ) ≈ ∇2F · ( xn + n− 3 n (xn − xn−1) ) , and the characteristic equation of this finite scheme is approximately λ3− ( 10n2 + 9n+ 6 4n2 + 8n − s · ∇2F · 2n 2 − 3n 2n2 + 4n ) λ2+ ( 4n2 + 3 2n2 + 4n − s · ∇2F · n 2 − 3n 2n2 + 4n ) λ− 2n− 1 4n+ 8 = 0. 
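As a concrete companion to Algorithm 1 and the examples of Section 3.2, here is a minimal sketch of recurrence (3.2) next to Nesterov's method (1.1). The garbled update line of Algorithm 1 is read here as X_{k+1} = Y_k − (ks/(2k+4)) f(Z_k), which is what (3.2) implies; the quadratic test function and step size are illustrative choices.

```python
import numpy as np

def new_method(f, x0, s, n_iter):
    """Recurrence (3.2) / Algorithm 1, with the update read as
    X_{k+1} = Y_k - (k*s/(2k+4)) * f(Z_k)."""
    x_init = np.array(x0, dtype=float)
    X = [x_init.copy(), x_init.copy(), x_init.copy()]   # X_0 = X_1 = X_2
    for k in range(2, n_iter):
        Xk, Xk1, Xk2 = X[-1], X[-2], X[-3]              # X_k, X_{k-1}, X_{k-2}
        Y = ((10*k**2 + 9*k + 6) / (4*k**2 + 8*k) * Xk
             - (4*k**2 + 3) / (2*k**2 + 4*k) * Xk1
             + (2*k - 1) / (4*k + 8) * Xk2)
        Z = (2*k - 3) / k * Xk - (k - 3) / k * Xk1
        X.append(Y - k * s / (2*k + 4) * f(Z))
    return np.array(X)

def nesterov(f, x0, s, n_iter):
    x_prev = np.array(x0, dtype=float); x = x_prev.copy()
    out = [x.copy()]
    for n in range(1, n_iter):
        y = x + (n - 3) / n * (x - x_prev)
        x_prev, x = x, y - s * f(y)
        out.append(x.copy())
    return np.array(out)

if __name__ == "__main__":
    # Quadratic F(x) = x^T A x with f(x) = 2 A x, as in the first example of Section 3.2.
    A = np.array([[2.0, 0.0], [0.0, 1.0]])              # illustrative positive-definite A
    f = lambda x: 2 * A @ x
    s = 0.8   # here s * grad^2 F reaches 3.2: outside [0, 4/3] but inside [0, 4]
    for name, xs in [("Nesterov  ", nesterov(f, [1.0, 1.0], s, 200)),
                     ("new method", new_method(f, [1.0, 1.0], s, 200))]:
        print(name, "final ||x|| =", np.linalg.norm(xs[-1]))
    # With this step size Nesterov's iterates blow up while the new method's
    # iterates approach the minimizer at the origin.
```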
For large n, we can ignore the high order terms and the characteristic equation becomes λ3 − ( 5 2 − s · ∇2F · ) λ2 + ( 2− s 2 · ∇2F ) λ− 1 2 = 0. According to the absolute stability theory, the numerical stability of Nesterov’s scheme with respect to accumulated roundoff error is equivalent to this: all the roots of the characteristic equation lie in the unit circle (Leader, 2004). Noticing that the left hand of the equation can be factorized to( λ− 1 2 )( λ2 − (2− s · ∇2F )λ+ 1 ) , the largest modulu of the roots is 1 when 0 ≤ s · ∇2F ≤ 4, and the absolutely stable region of the new method is s · ∇2F ∈ [0, 4]. When s·∇2F lies in the absoletely stable region, the related theory guarantees that the error caused by every iteration will not be magnified as the iteration number increases. To make the analysis more precise, we should consider the difference of the scheme between iterations caused by different n. We define the transfer matrix Pn = ( 10n2+9n+6 4n2+8n − s · ∇ 2F · 2n 2−3n 2n2+4n ) − ( 4n2+3 2n2+4n − s · ∇ 2F · n 2−3n 2n2+4n ) 2n−1 4n+8 1 0 0 0 1 0 and Qn = PnPn−1 · · ·P1. Error analysis shows that if the largest modulu of eigenvalues of Qn goes to zero, then error caused by iterations will be eliminated as the iteration number increases. Figure 2 presents the largest module of eigenvalues of Qn for different values of s · ∇2F . From the experiment we can see that the above condition is satisfied. We then apply the same method to Nesterov’s method discussed in (Su et al., 2014) and conclude that the absolutely stable region of Nesterov’s method is [0, 43 ]. According to the above analysis, the absolutely stable region of the new method is four times as large as Nesterov’s method, so the new method is more stable, and we can choose larger step sizes to achieve faster convergence. 4 Application to matrix completion problem: modified FISTA Our theory and numerical results show that the new method is more stable than original Nestrov’s method. So we can choose larger step size for new method and convergence to the optimal solution can be faster, compared with original Nesterov’s method. In this section we apply the new method to matrix completion problem. We present a new algorithm which can be viewed as a modification of the well-konwn fast iterative shrinkage-thresholding algorithm (FISTA) (Beck & Teboulle, 2009). The performance of modified FISTA can also confirm the advantage of the new method. For matrix completion problem there exists a ‘true’ low rank matrix M . We are given some entries of M and asked to fill missing entries. There have been various algorithms to solve such problem (Candès & Recht, 2009; Keshavan et al., 2010). Besides, it is proposed that matrix completion can be transformed to the following unconstrained optimization problem (Mazumder et al., 2010) minF (X) = 1 2 ∥Xobs −Mobs∥2 + λ∥X∥∗. (4.1) Notice that F (X) is composed of a smooth term and a non-smooth term, so gradient-based algorithms cannot be used directly. Proximal gradient algorithms (Parikh & Boyd, 2014) are widely used in such composite optimization problems, and fast iterative shrinkage-thresholding algorithm (FISTA) is a successful algorithm. Moreover, FISTA has been extended to matrix completion case (Ji & Ye, 2009). For convenience, we set G(X) = 12∥Xobs − Mobs∥2, H(X) = λ∥X∥∗, and g(X) = ∇G(X). The idea of FISTA builds on Nesterov’s method. 
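Returning briefly to the stability analysis above, the absolutely stable regions can be checked numerically from the two limiting characteristic polynomials. In the sketch below, the grid resolution is an illustrative choice, and the quadratic polynomial used for Nesterov's scheme is our reading of the large-n linearization analyzed in Su et al. (2014).

```python
import numpy as np

def max_root_modulus(coeffs):
    # Coefficients are given highest degree first, as expected by numpy.roots.
    return max(abs(r) for r in np.roots(coeffs))

def stable_region(char_poly, mus, tol=1e-8):
    stable = [mu for mu in mus if max_root_modulus(char_poly(mu)) <= 1 + tol]
    return (min(stable), max(stable)) if stable else None

mus = np.linspace(0.0, 6.0, 1201)     # grid over mu = s * grad^2 F

# Large-n characteristic polynomial of the new method (3.2):
#   lambda^3 - (5/2 - mu) lambda^2 + (2 - mu/2) lambda - 1/2 = 0
new_poly = lambda mu: [1.0, -(2.5 - mu), 2.0 - mu / 2.0, -0.5]

# Large-n linearization of Nesterov's scheme (1.1): x_{n+1} ~ (1 - mu)(2 x_n - x_{n-1}),
#   lambda^2 - 2(1 - mu) lambda + (1 - mu) = 0
nesterov_poly = lambda mu: [1.0, -2.0 * (1.0 - mu), 1.0 - mu]

print("new method stable region :", stable_region(new_poly, mus))       # about (0, 4)
print("Nesterov   stable region :", stable_region(nesterov_poly, mus))  # about (0, 4/3)
```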
We also apply acclerated proximal gradient method (Parikh & Boyd, 2014) for our numerical experiment, which is composed of Nesterov’s method and proximal gradient descent. These two algorithms are presented in Appendix A.5. We find the performances of them are similar in our experiments. Our contribution is the third method (Algorithm 2), the new method (3.2) combined with proximal operator, which we call modified FISTA. Algorithm 2 Modified FISTA Input: step size s Initial value: X2 = X1 = X0 ∈ M100. (k− 1)th iteration (k ≥ 2). Compute Yk = 10k2 + 9k + 6 4k2 + 8k Xk − 4k2 + 3 2k2 + 4k Xk−1 + 2k − 1 4k + 8 Xk−2, Zk = 2k − 3 k Xk − k − 3 k Xk−1, Xk+1 = argmin X { 1 2 · 2k + 4 ks ∥∥∥∥X − (Yk − ks2k + 4g(Zk) )∥∥∥∥2 + λ∥X∥∗ } . Notice that the minimizing problems in interations of above three algorithms can be solved directly by singular value decomposition (Cai & Candès, 2010). We take experiments on a simulated data set. We use fixed step sizes in the above three algorithms, and the performances are presented in Figure 3. We find empirically that for all methods, convergence is faster when step size is larger, so we choose the largest step sizes for all methods to compare their fastest convergence speed. Through experiments, we find the largest step size that makes modified FISTA convergent is 4.1 (accurate to one decimal place), while those for the first two algorithms are both 1.3. We also compare performances of the three methods with step sizes reduced from the largest in equal proportion. We find that when step sizes are chosen to be the largest or reduced from the largest in equal proportion (80%, 50%, 10%), convergence of modified FISTA is faster than the other two methods. We also combine the three methods with backtracking (Beck & Teboulle, 2009) to choose step sizes automatically. We present modified FISTA with backtracking as Algorithm 3, and the other two algorithms are similar. Performances of the three algorithms with backtracking on abovementioned data set are presented in Figure 4. Convergence of modified FISTA is faster than the other two methods. Moreover, we find that the final step size of modified FISTA is larger. 5 Discussion In this paper we prove that iterations of Nesterov’s accelerated method converge to solution of the derived differential equation as step sizes go to zero. We present a new accelerated method to make this convergence faster. We use numerical results to show that the new method is more stable, especially for large step sizes, and explan it using the order of truncation error. We then apply the new method to matrix completion problem and present a new algorithm which we call modified FISTA. Numerical experiments show that modified FISTA performs better than existing algorithms based on Nesterov’s acceleration because it can work with larger step sizes. We will also combine our new method with stochastic gradient-based algorithms and apply the new method to deep networks in the future. Algorithm 3 Modified FISTA with backtracking Input: some β < 1 Initial value. X2 = X1 = X0 ∈ M100, step size s2. (k− 1)th iteration (k ≥ 2). Yk = 10k2 + 9k + 6 4k2 + 8k Xk − 4k2 + 3 2k2 + 4k Xk−1 + 2k − 1 4k + 8 Xk−2, Zk = 2k − 3 k Xk − k − 3 k Xk−1. Find the smallest positive integer ik+1 such that with s = βik+1sk F (X̃) < F (Yk) + ⟨ X̃ − Yk, g(Zk) ⟩ + 1 2 · 2k + 4 ks ∥X̃ − Yk∥2, where X̃ = argmin X { 1 2 · 2k + 4 ks ∥∥∥∥X − (Yk − ks2k + 4g(Zk) )∥∥∥∥2 + λ∥X∥∗ } . Set sk+1 = βik+1sk and compute Xk+1 = X̃. 
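A compact sketch of Algorithm 2 (modified FISTA with a fixed step size) follows, using singular-value thresholding for the nuclear-norm proximal step. The problem size, observation fraction, λ, step size, and random data below are illustrative and differ from the settings of Appendix A.6.

```python
import numpy as np

def svt(B, tau):
    """Singular value thresholding: the proximal operator of tau * ||X||_* at B."""
    U, sig, Vt = np.linalg.svd(B, full_matrices=False)
    return U @ np.diag(np.maximum(sig - tau, 0.0)) @ Vt

def modified_fista(M_obs, mask, lam, s, n_iter):
    """Algorithm 2: scheme (3.2) combined with the nuclear-norm proximal operator,
    applied to objective (4.1): 0.5 * ||X_obs - M_obs||^2 + lam * ||X||_*."""
    g = lambda X: (X - M_obs) * mask                    # gradient of the smooth part G
    X = [np.zeros_like(M_obs) for _ in range(3)]        # X_0 = X_1 = X_2 = 0
    for k in range(2, n_iter):
        Xk, Xk1, Xk2 = X[-1], X[-2], X[-3]
        Y = ((10*k**2 + 9*k + 6) / (4*k**2 + 8*k) * Xk
             - (4*k**2 + 3) / (2*k**2 + 4*k) * Xk1
             + (2*k - 1) / (4*k + 8) * Xk2)
        Z = (2*k - 3) / k * Xk - (k - 3) / k * Xk1
        tau = k * s / (2*k + 4)                         # effective step in iteration k
        X.append(svt(Y - tau * g(Z), tau * lam))        # prox step of Algorithm 2
    return X[-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Small illustrative instance: a rank-3 50x50 matrix with 30% observed entries.
    M = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))
    mask = rng.random((50, 50)) < 0.3
    X_hat = modified_fista(M * mask, mask, lam=0.005, s=2.0, n_iter=300)
    print("relative recovery error:", np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```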
Our work shows that for an accelerated gradient method, the rate at which it converges to the derived differential equation is possibly related to its property as an optimization algorithm. We think this work suggests that more consideration should be given to the corresponding differential equations when studying optimization algorithms. A Appendix A.1 Proof of Theorem 1 Theorem 1. Assume f satisfies L-Lipschitz condition, and solution x(t) of the derived differential equation (1.2) has a continuous third derivative. For fixed time t, the truncation error (2.2) satisfies L[x(t);h] = O(h3). (A.1) Proof. Notice that x(t− h) = x(t) +O(h). Substitute this equation to the last term of L[x(t);h] to get f ( x(t) + t− 3h t (x(t)− x(t− h)) ) = f ( x(t) + t− 3h t · O(h) ) . Since f satisfies L-Lipschitz condition, we know f ( x(t) + t− 3h t (x(t)− x(t− h)) ) =f(x(t)) +O(h) =− ẍ(t)− 3 t ẋ(t) +O(h). To get the second equality, we substitute the differential equation (1.2). Then we expend the first and third terms of L[x(t);h] to third order to get x(t+ h) = x(t) + hx(1)(t) + h2 2 x(2)(t) +O(h3), x(t− h) = x(t)− hx(1)(t) + h 2 2 x(2)(t) +O(h3). Substitute these three equations to (2.2), we have L[x(t);h] = O(h3). Remark 1. (A.1) can also be written as |L[x(t);h]| ≤ M1h3, where M1 depends on sups≤t |x(1)(s)| and sups≤t |x(3)(s)|. Remark 2. Theorem 1 deals with the problem for fixed time t. To finish the proof of the convergence, we have to consider the situation that tn = nh, where n ≥ 1 is fixed. We set a fixed time t0 and assume that tn = nh < t0. Since x(t) has a continuous third derivative, x(t) and its first to third derivative are bounded in [0, t0]. We replace time t in the above proof by tn and expend the terms of (2.2). Now the term −3h 3 2tn x(2)(tn) obtained from the expansion of x(tn−1) cannot be viewed as O(h3), but there exists M2 > 0 such that ∣∣∣∣−3h32tn x(2)(tn) ∣∣∣∣ ≤ M2h2n . As a consequence, we have |L[x(tn);h]| ≤ M1h3 +M2 h2 n , (A.2) where M1 and M2 rely on t0. A.2 Two lemmas for Theorem 2 For the proof of Theorem 2, we need the following two lemmas. Lemma 1. (Holte, 2009) For constant α, β > 0 and positive sequence {ηn}n≥0 satisfying ηn ≤ β + α n−1∑ i=0 ηi, ∀n > 0, the following inequality holds ηn ≤ eαn(β + αη0). The above lemma is a classic result and refered to as discrete Gronwall inequality. Lemma 2. We define matrices Cn and Dn,l as Cn = ( 2n−1 n+1 − n−2 n+1 1 0 ) , Dn,l = CnCn−1 · · ·Cn−l+1, where n ≥ 0 and 0 < l ≤ n+ 1. In addition, we set Dn,0 = I2. Then there exist positive constants M, M3 such that for all n, the following two inequalities hold, where the matrix norm is 2-norm. sup 0≤l≤n+1 ∥Dn,l∥ ≤ Mn, Dn,n+1 ≤ M3. (A.3) Proof. Since C2 = ( 1 0 1 0 ) , we notice that when n ≥ 2, Dn,n−1 = ( 1 0 1 0 ) , Dn,n = ( 1 2 1 2 1 2 1 2 ) , Dn,n+1 = ( 0 1 0 1 ) , having nothing to do with the value of n. So it is obvious that there exists M3 to make (A.3) holds and M4 > 0 such that for all n < 2 or n ≥ 2, l > n− 2 or l = 0, ∥Dn,l∥ ≤ M4n. (A.4) Then we consider the condition when n ≥ 2, 0 < l ≤ n− 2. Notice that Cn = ( 1 1 1 0 )( 1 1 0 n−2n+1 )( 1 1 1 0 )−1 . For convenience, we write P = ( 1 1 1 0 ) . Assume we have alreagy got Dn,l = P ( 1 an,l 0 bn,l ) P−1 satisfying 0 < an,l ≤ l, 0 < bn,l ≤ 1, then since Dn,l+1 = Dn,lCn−l, and 0 ≤ n−l−2n−l+1 < 1, Dn,l+1 has the same form Dn,l+1 = P ( 1 an,l+1 0 bn,l+1 ) P−1, satisfying 0 < an,l+1 ≤ l + 1, 0 < bn,l ≤ 1. 
Then for fixed n, induce from l = 1, we get Dn,l = PD̃n,lP −1 ≜ P ( 1 an,l 0 bn,l ) P−1, satisfying 0 < an,l ≤ l ≤ n, 0 < bn,l ≤ 1, (A.5) for all n ≥ 2, 0 < l ≤ n− 2. Then we can estimate ∥Dn,l∥. Notice that D̃n,lD̃ T n,l = ( 1 + a2n,l an,lbn,l an,lbn,l a 2 n,l ) . The eigenvalues of this matrix are λ1,2 = 1 + a2n,l + b 2 n,l ± √ (1 + a2n,l + b 2 n,l) 2 − 4b4 2 . Combining this representation with (A.5), we get the estimation ∥D̃n,l∥ = √ max{|λ1|, |λ2|} ≤ √ 1 + a2n,l + b 2 n,l ≤ n+ 2. So there exists M5 > 0, such that for all n ≥ 2, 0 < l ≤ n− 2, inequality ∥Dn,l∥ ≤ M5n (A.6) holds. Combining (A.4) with (A.6), we finish the proof. A.3 Proof of Theorem 2 Theorem 2. Under conditions in Theorem 1, for fixed time t, xt/h converges to x(t) as h goes to zero at a rate of O(h ln 1h ) if x0 = x(0) and x1 = x(h). Proof. In this proof, we first calculate the error caused by a single iteration, which can be divided into an accumulation term and a truncation term. Then we use the estimation given by Theorem 1 and apply discrete Gronwall inequality to prove the convergence. Recall the recurrence relation xn+1 = xn + n− 3 n (xn − xn−1)− h2 · f ( xn + n− 3 n (xn − xn−1) ) and the definition of truncation error x(tn+1) = x(tn) + n− 3 n (x(tn)− x(tn−1))− h2f ( x(tn) + n− 3 n (x(tn)− x(tn−1)) ) + L[x(tn);h], where tn = nh. Subtract the above two equations, and introduce overall error en = x(tn)− xn, we have en+1 = 2n− 3 n en − n− 3 n en−1 − h2bn−1 + L[x(tn);h], which can also be written as en+2 − 2n− 1 n+ 1 en+1 + n− 2 n+ 1 en = −h2bn + L[x(tn+1);h], (A.7) where bn = f ( 2n− 1 n+ 1 xn+1 − n− 2 n+ 1 xn ) − f ( 2n− 1 n+ 1 x(tn+1)− n− 2 n+ 1 x(tn) ) . (A.8) We will also use the notation b∗n = − en+2 − 2n−1n+1 en+1 + n−2 n+1en h2 . Then we rewrite (A.7) into a form that is convenient for recurrence. We set En = ( en+1 en ) , Cn = ( 2n−1 n+1 − n−2 n+1 1 0 ) , Bn = ( −h2b∗n 0 ) . Then (A.7) can be written as En+1 = CnEn +Bn. By recursive method, we have En = Cn−1 · · ·C0E0 + n∑ l=1 Cn−1 · · ·Cn−l+1Bn−l. With the notations introduced in Lemma 2, this equation can be written as En = Dn−1,nE0 + n∑ l=1 Dn−1,l−1Bn−l. (A.9) Now we need to estimate ∥Bn∥. Since f satisfies L-Lipschitz condition, from (A.8) we have |bn| ≤ L ( 2n− 1 n+ 1 |en+1|+ n− 2 n+ 1 |en| ) ≤ L (2|en+1|+ |en|) ≤ 3L∥En∥. and ∥Bn∥ ≤ 3h2L∥En∥+ L[x(tn+1);h]. (A.10) Take norm on both sides of (A.9) and substitute (A.10) and conclusion of Lemma 2, we have the following estimation ∥En∥ ≤ M3∥E0∥+M(n− 1) n−1∑ l=0 ( 3h2L∥El∥+ L[x(tl+1);h] ) ≤ M3∥E0∥+ 3Mnh2L n−1∑ l=0 ∥El∥+Mn n−1∑ l=0 L[x(tl+1);h]. (A.11) Now we deal with truncation errors. Recall (A.2) in remark of Theorem 1 |L[x(tl);h]| ≤ M1h3 +M2 h2 l . Take sum to obtain n−1∑ l=0 |L[x(tl+1);h]| ≤ nM1h3 +M2h2 n−1∑ l=0 1 l + 1 . (A.12) Notice the classic inequality n∑ i=1 1 i ≤ lnn+Me, where Me refers to a positive constant. Substitute it to (A.12), we have n−1∑ l=0 |L[x(tl+1);h]| ≤ nM1h3 +M2h2(lnn+Me). Substitute this inequality to (A.11), we get a control of ∥En∥ ∥En∥ ≤ M3∥E0∥+ 3Mnh2L n−1∑ l=0 ∥El∥+MM1n2h3 +MM2Menh2 +MM2nh2 lnn Using discrete Gronwall inequality, we have ∥En∥ ≤ e3Mn 2h2L ( M3∥E0∥+MM1n2h3 +MM2Menh2 +MM2nh2 lnn+ 3Mnh2L∥E0∥ ) . Then for fixed t, we choose n = th to get ∥Et/h∥ ≤ e3Mt 2L ( (M3 + 3MthL)∥E0∥+ (MM1t2 +MM2Met)h+MM2th ln t h ) . Notice that lim h→0 h ln t h = 0, so if E0 = 0, then the vector form of overall error Et/h satisfies lim h→0 ∥Et/h∥ = 0. A.4 Proof of Theorem 3 Theorem 3. 
If f has continuous second order derivative, the first and second derivative are bounded, and x(t) has continuous fourth derivative, then for fixed t, truncation error of (3.2) satisfies L[x(t);h] = O(h4). Proof. Recall the proof of Throrem 1. Now we expand x(t− h) to first order x(t− h) = x(t) + hx(1)(t) +O(h2). Then we have f ( x(t) + t− 3h t (x(t)− x(t− h)) ) =f ( x(t) + ( 1− 3h t ) (hx(1)(t) +O(h2)) ) =f ( x(t) + hx(1)(t) +O(h2) ) =f ( x(t) + hx(1)(t) ) +O(h2). We now expand f : f ( x(t) + t− 3h t (x(t)− x(t− h)) ) = f(x(t)) + hx(1)(t)f (1)(x(t)) +O(h2). To do this, we need f has continuous second derivative and the second derivative is bounded. Take derivetive on both sides of differential equation ẍ+ 3 t ẋ+ f(x) = 0, we have (f(x(t))) ′ = −x(3)(t)− 3 t x(2)(t) + 3 t2 x(1)(t). So f ( x(t) + t− 3h t (x(t)− x(t− h)) ) =− hx(3)(t)− ( 3h t + 1 ) x(2)(t) + ( 3h t2 − 3 t ) x(1)(t) +O(h2). (A.13) Expand x(t+ h), x(t− h), x(t− 2h) to the third order, we have( α1 + β1h t + γ1h 2 t2 ) x(t+ h) = ( α1 + β1h t + γ1h 2 t2 ) [ x(t) + hx(1)(t) + h2 2 x(2)(t) + h3 6 x(3)(t) +O(h4) ] ,( α3 + β3h t + γ3h 2 t2 ) x(t− h) = ( α3 + β3h t + γ3h 2 t2 ) [ x(t)− hx(1)(t) + h 2 2 x(2)(t)− h 3 6 x(3)(t) +O(h4) ] ,( α4 + β4h t + γ4h 2 t2 ) x(t− 2h) = ( α4 + β4h t + γ4h 2 t2 ) [ x(t)− 2hx(1)(t) + 2h2x(2)(t)− 4h 3 3 x(3)(t) +O(h4) ] . Substitute these three equations and (A.13) to truncation error of recurrence relation (3.1) L[x(t);h] = 4∑ i=1 ( αi + βih t + γih 2 t2 ) x(t+ (2− i)h) + h2f ( x(t) + t− 3h t (x(t)− x(t− h)) ) , then simple calculation shows that terms with order less than four will be eliminated if we choose coefficients according to the following equations α1 = 2 α2 = −5 α3 = 4 α4 = −1 , β1 = 9 2 − k β2 = −6 + 3k β3 = 3 2 − 3k β4 = k , γ1 = m1 γ2 = − 3m1 +m2 + 3 2 γ3 = m2 γ4 = m1 −m2 + 3 2 , where k, m1, m2 can be chosen randomly. Notice that coefficients of recurrence relation (3.2) satisfy above equations. A.5 Algorithms Algorithm 4 FISTA Input: step size s Initial value: Y1 = X0 ∈ M100, t1 = 1. kth iteration (k ≥ 1). Compute Xk = argmin X { 1 2s ∥X − (Yk − sg(Yk))∥2 + λ∥X∥∗ } , tk+1 = 1 + √ 1 + 4t2k 2 , Yk+1 = Xk + tk − 1 tk+1 (Xk −Xk−1). Algorithm 5 Accelerated proximal gradient method Input: step size s Initial value: X1 = X0 ∈ M100. kth iteration (k ≥ 1). Compute Yk = Xk−1 + k − 3 k (Xk−1 −Xk−2), Xk+1 = argmin X { 1 2s ∥X − (Yk − sg(Yk))∥2 + λ∥X∥∗ } . A.6 Details about Numerical Experiments in Section 4 Here we produce some details for our numerical experiments in Section 4. Our experiments are taken on a simulated data set. Firstly, we generated the ‘true’ low rank matrix M . To do this, we generate a random matrix M0. Entries of M0 are independent and uniformly distributed on (0, 20). Then we compute the singular value decomposition of M0, that is, M0 = UΣV T. After that, we set M = UΣ0V T, where Σ0 is a diagonal matrix with only three nonzero diagonal elements. It is not difficult to prove that M has rank 3. Secondly, we generate the observation set. For every row of M , we choose randomly ten entrys to be observed. As a consequence, 10% entries are observed in total. After data generation step, we apply the abovementioned algorithms (accelerated proximal gradient method, FISTA and our modified FISTA) with fixed step sizes and backtracking to this data set. The parameter of the loss function (4.1) is λ = 0.005. For initial point, we simply choose the zero matrix (every entry equals to zero). For backtracking, we set the initial step size as 10 and the decay factor β = 0.1.
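The data-generation procedure of Appendix A.6 can be written down directly. A sketch follows; the random seed is arbitrary, and keeping the three largest singular values of M0 is our reading of "only three nonzero diagonal elements," since A.6 does not specify which values are kept.

```python
import numpy as np

def make_dataset(n=100, rank=3, obs_per_row=10, low=0.0, high=20.0, seed=0):
    """Simulated matrix completion instance as in Appendix A.6: a rank-3 'true'
    matrix built from the SVD of a uniform random matrix, with ten randomly
    observed entries per row (10% observed overall)."""
    rng = np.random.default_rng(seed)
    M0 = rng.uniform(low, high, size=(n, n))
    U, sig, Vt = np.linalg.svd(M0)
    sig0 = np.zeros_like(sig)
    sig0[:rank] = sig[:rank]          # keep only three nonzero singular values
    M = (U * sig0) @ Vt               # equivalent to U @ diag(sig0) @ Vt
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, rng.choice(n, size=obs_per_row, replace=False)] = True
    return M, mask

M, mask = make_dataset()
print("rank(M) =", np.linalg.matrix_rank(M), " fraction observed =", mask.mean())
```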
1. What is the main contribution of the paper regarding Nesterov's accelerated gradient method? 2. What are the strengths of the proposed modified FISTA algorithm? 3. Do you have any concerns regarding the paper's presentation and flow? 4. How does the reviewer assess the convergence rate improvement of the new accelerated method? 5. Can you explain the numerical results and the reason behind the erratic patterns in the plot? 6. Is there a possibility to provide a more detailed motivation for the derivations in Section 3? 7. How does the reviewer evaluate the applicability of the proposed method in other problems beyond matrix completion? 8. Are there any suggestions to improve the readability and understandability of the paper?
Review
Review UPDATE: After reading through all other reviews and responses by the authors, I share the concern that the theoretical justification of the paper is lacking as the connection between the truncation error and the improved algorithm performance is not rigorously proven. Therefore, I have reduced my score.

Summary: The paper studies the well-known Nesterov's accelerated gradient method and shows the rate of convergence to the solution of an ordinary differential equation recently proposed by [Su et al, 2014]. Motivated by the proof, the authors then derive a new accelerated method with a faster rate of convergence than the original Nesterov's method, which is shown to be more stable than the original Nesterov's method when the step size is large. The method is combined with the proximal operator into a new algorithm referred to as modified FISTA, which is then applied to the matrix completion problem.

Strengths: The paper proves the convergence rate of Nesterov's method to the ODE proposed by [Su et al, 2014]. This proof then motivates them to derive a new faster accelerated method where the truncation error has a higher order of O(h^4) compared to O(h^3) in case of Nesterov's method. It is shown in two simple examples that the new method is more stable as it can work with larger step sizes. The method is applied to a matrix completion problem, where it is shown to have faster convergence than standard FISTA and Nesterov's gradient method.

Concerns: In Section 2.2., the purpose of Lemma 1 and Lemma 2 is not clear without looking into the proof of Theorem 2 in the supplementary material. The flow of the paper could be improved if an intuition was given of which role they play in the proof of Theorem 2. Similarly, to understand the motivation for the derivation of the new accelerated method in Section 3, one is required to look at the proof of the convergence in the supplement. Also here it would help to provide a detailed motivation for the derivations already in Section 3. In the numerical results in Figure 1, the gap |F(x_n) − F(x*)| (y-axis) does not seem to monotonically decrease but jump up and down erratically. Also there are periodic wave-like patterns visible in the plot. Why do we see those patterns? The resulting accelerated numerical method is never explicitly written down, only the specific version derived for the matrix completion problems. The paper would be better understandable if the general numerical scheme (accelerated method) was written down in form of an algorithm after Section 3. In the end of Section 4, a reference to Algorithm 2 is missing. Moreover, Figure 2 and Figure 3 are never referenced in the text. Page 2, after (1.2) -> "achive" -> "achieve"

Conclusion: The proposed method provides a theoretical contribution to the understanding of Nesterov's accelerated gradient method. Moreover, a novel algorithm is proposed which is shown to have a faster convergence to the underlying ODE. In the paper this is shown only for a matrix completion problem but I feel that this new algorithm could be adopted by the community if further experiments prove its worth. On the other hand, the flow and presentation of the paper could be improved. Overall this is a borderline paper but its merits may outweigh its flaws.
ICLR
1. What are the authors' main contributions to accelerated gradient method? 2. How does the proposed algorithm improve upon previous schemes? 3. Are there any theoretical or empirical evidence supporting the authors' claims? 4. Do you find the provided experiments convincing? 5. What are your suggestions for improving the paper?
Review
Review In this paper the authors study a version of accelerated gradient method. Inspired by the ODE analysis of Nesterov accelerated gradient method by Su et al., the authors propose a different discretization of the ODE by Su et al. The truncation order of this scheme is of a higher order, thus the authors claim that the proposed algorithm is more stable and, therefore, will converge with larger steps. Unfortunately, I found these statements to be vague. Apart from the above-mentioned truncation error, the only evidence we have is some simple 2-dimensional experiment. I believe it is not sufficient. Second, for a new scheme convergence of iterates (x_n) to a solution and the convergence rate F(x_n) − F(x*) should be proven explicitly, they do not follow automatically. Ideally, we need both sound theory and good experiments to claim that one method is better than another. I am afraid both are missing in this work. The same was done for the modified version of FISTA, where the authors add regularizer without any discussion about convergence of the scheme. Based on this, I cannot recommend this paper. I suggest the authors to address the above-mentioned concerns in their revision. I think it would be great if one can show directly the connection between the discretization truncation error and better algorithm performance. Note that, however, already Nesterov's methods have optimal performance. Probably, a significant experimental evidence will help here.
ICLR
Title On orthogonality and learning recurrent networks with long term dependencies Abstract It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. 1 INTRODUCTION The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly. Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n × n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation. The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent. 
To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of a canonical inner product and Cayley transformations. Their experimental work indicates that full-capacity unitary RNN models can solve the copy memory problem whereas both LSTM networks and restricted-capacity unitary RNN models of similar complexity appear unable to solve the task for a longer sequence length (T = 2000). In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model's representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it. 1.1 VANISHING AND EXPLODING GRADIENTS The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network. A neural network with n hidden layers has pre-activations
$$a_i(h_{i-1}) = W_i h_{i-1} + b_i, \quad i \in \{2, \cdots, n\} \quad (1)$$
For notational convenience, we combine parameters $W_i$ and $b_i$ to form an affine matrix $\theta$. We can see that for some loss function $L$ at layer $n$, the derivative with respect to parameters $\theta_i$ is:
$$\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{n+1}}{\partial \theta_i} \frac{\partial L}{\partial a_{n+1}} \quad (2)$$
The partial derivatives for the pre-activations can be decomposed as follows:
$$\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \frac{\partial h_i}{\partial a_i} \frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i} D_i W_{i+1} \;\rightarrow\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1}, \quad (3)$$
where $D_i$ is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer $i+1$ with respect to the pre-activation inputs. Typically, $D$ is diagonal. Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products:
$$\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \prod_{j=i}^{n} (D_j W_{j+1}) \frac{\partial L}{\partial a_{n+1}} \quad (4)$$
In (Pascanu et al., 2013), it is shown that the 2-norm of $\frac{\partial a_{i+1}}{\partial a_i}$ is bounded by the product of the norms of the non-linearity's Jacobian and transition matrix at time $t$ (layer $i$), as follows:
$$\left\lVert \frac{\partial a_{t+1}}{\partial a_t} \right\rVert \le \lVert D_t \rVert \, \lVert W_t \rVert \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \qquad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R}, \quad (5)$$
where $\lambda_{D_t}$ and $\lambda_{W_t}$ are the largest singular values of the non-linearity's Jacobian $D_t$ and the transition matrix $W_t$. In RNNs, $W_t$ is shared across time and can be simply denoted as $W$. Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer's linear transformation $W$ and the gain of the Jacobian $D$. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where $W$ is shared across time steps and a non-unitary gain in $W$ is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have $\eta_t \le 1$ at each time $t$ to enable the possibility of vanishing gradients, typically for some large number of time steps $T$. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data.
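To make the bound in (5) concrete, the following is a minimal numpy sketch, not taken from the paper, that estimates the per-step gain $\eta_t$ from the largest singular values of the transition matrix and the activation Jacobian, and shows how that gain compounds over T steps. All function and variable names here are our own.

```python
import numpy as np

def per_step_gain(W, pre_activations, nonlinearity="tanh"):
    """Estimate eta_t = lambda_D * lambda_W, the bound in equation (5).

    W: hidden-to-hidden transition matrix (n x n).
    pre_activations: pre-activation vector a_t at one time step (n,).
    """
    # Largest singular value (spectral norm) of the transition matrix.
    lambda_W = np.linalg.norm(W, ord=2)
    # The Jacobian of an elementwise nonlinearity is diagonal; its largest
    # singular value is the largest absolute derivative entry.
    if nonlinearity == "tanh":
        deriv = 1.0 - np.tanh(pre_activations) ** 2
    else:  # identity / linear transition
        deriv = np.ones_like(pre_activations)
    lambda_D = np.max(np.abs(deriv))
    return lambda_D * lambda_W

# Toy illustration: a random Gaussian W versus an orthogonal W.
rng = np.random.default_rng(0)
n, T = 128, 200
a_t = rng.normal(size=n)

W_gauss = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))
W_orth, _ = np.linalg.qr(rng.normal(size=(n, n)))

for name, W in [("gaussian", W_gauss), ("orthogonal", W_orth)]:
    eta = per_step_gain(W, a_t)
    # The bound compounds multiplicatively over T steps, so eta**T indicates
    # how quickly the gradient norm bound can shrink or grow.
    print(f"{name}: eta_t = {eta:.3f}, bound after T steps = {eta ** T:.3e}")
```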
The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving, to expansive, and applies the same gain on the forward signal as on the back-propagated gradient signal. 2 OUR APPROACH Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm, which is given by
$$\lVert W \rVert_2 = \max_{x} \frac{\lVert Wx \rVert}{\lVert x \rVert}. \quad (6)$$
By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form:
$$\lambda \sum_i \lVert W_i^T W_i - I \rVert^2. \quad (7)$$
However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values, which are real and positive by definition. We have
$$W = U S V^T. \quad (8)$$
Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value. We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy $U^T U = I$ and $V^T V = I$ respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values. During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian G with respect to the objective function, an update is performed as follows:
$$A = G M^T - M G^T, \qquad M_{new} = \left(I + \frac{\eta}{2} A\right)^{-1} \left(I - \frac{\eta}{2} A\right) M, \quad (9)$$
where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform and η is the learning rate. While the update rule in (9) allows us to maintain an orthogonal hidden to hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix W in factorized form, as a singular value decomposition with orthogonal bases U and V updated by geodesic gradient descent using the Cayley transform approach above. If W is an orthogonal matrix, the singular values in the diagonal matrix S are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation.
Specifically, we define a margin m around 1 within which the singular values must lie. This is achieved with the parameterization
$$s_i = 2m\,(\sigma(p_i) - 0.5) + 1, \qquad s_i \in \{\mathrm{diag}(S)\}, \quad m \in [0, 1]. \quad (10)$$
The singular values are thus restricted to the range $[1-m, 1+m]$ and the underlying parameters $p_i$ are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values – they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value's progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering (10), the gradient backpropagation of some loss L toward parameters $p_i$ is found as
$$\frac{dL}{dp_i} = \frac{ds_i}{dp_i}\frac{dL}{ds_i} = 2m\,\frac{d\sigma(p_i)}{dp_i}\frac{dL}{ds_i}. \quad (11)$$
From (11), it can be seen that the magnitude of the update step for $p_i$ is scaled by the margin hyperparameter m. This means for example that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by 2m. This margin formulation both guarantees singular values lie within a well defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of U and V and impose a regularization term corresponding to a mean one Gaussian prior on these singular values. This encourages the weight matrix W to be norm preserving with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below. 3 EXPERIMENTS In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden to hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993). The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence.
We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol $a_i \in \{a_1, ..., a_p\}$ out of p = 8 possible symbols. This sub-sequence is followed by T − 1 elements of the blank category $a_0$, which is terminated at step T by a delimiter symbol $a_{p+1}$ and 10 more elements of the blank category. The network must learn to remember the initial 10 element sequence for T time steps and output it after receiving the delimiter symbol. The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T. The sequence is composed of T values sampled from a uniform distribution in the range [0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range $[0, \frac{T}{2} - 1]$ and the second in the range $[\frac{T}{2}, T - 1]$, where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum. In the sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task uses a simple flattening of the image matrices; the harder variant includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer distance dependencies between pixels that must be interpreted by the classification model. The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. We run experiments on two subsets of the data: in the first, we use 23% of the data, with strings of up to 75 characters; in the second, we include over 99% of the dataset, picking strings of up to 300 characters. 3.1 LOOSENING HARD ORTHOGONALITY CONSTRAINTS In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden to hidden transition matrix. In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50 and, for generated data (the copy and adding tasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments, although it may not be required, and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden to hidden matrix factorization as in (8) using geodesic gradient descent on the bases (learning rate $10^{-6}$) and RMSprop on the other parameters (learning rate 0.0001), using a tanh transition nonlinearity and clipping gradients at magnitude 100.
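As a concrete reference for the factorization in (8)–(10) and the geodesic update in (9), the following is a minimal numpy sketch of one training step for the margin-parameterized transition matrix. It is not the authors' Theano implementation; all function and variable names are our own, and in practice the gradients would come from an autodiff framework rather than the random stand-ins used here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def compose_transition(U, p, V, m):
    """W = U S V^T with singular values s_i = 2m(sigmoid(p_i) - 0.5) + 1,
    so every s_i lies in [1 - m, 1 + m] as in equation (10)."""
    s = 2.0 * m * (sigmoid(p) - 0.5) + 1.0
    return U @ np.diag(s) @ V.T, s

def cayley_step(M, G, lr):
    """Geodesic update of equation (9): move M along the Stiefel manifold
    given the Euclidean gradient G of the loss with respect to M."""
    n = M.shape[0]
    A = G @ M.T - M @ G.T                  # skew-symmetric
    I = np.eye(n)
    return np.linalg.solve(I + 0.5 * lr * A, I - 0.5 * lr * A) @ M

def soft_orthogonality_penalty(W, lam):
    """Soft constraint of equation (7), used again in Section 3.2."""
    n = W.shape[0]
    return lam * np.sum((W.T @ W - np.eye(n)) ** 2)

# One illustrative update with random "gradients" standing in for backprop.
rng = np.random.default_rng(0)
n, margin = 128, 0.1
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
p = np.zeros(n)                            # sigmoid(0) = 0.5, so all s_i start at 1

W, s = compose_transition(U, p, V, margin)
grad_U, grad_V = rng.normal(size=(n, n)), rng.normal(size=(n, n))
grad_p = rng.normal(size=n)

U = cayley_step(U, grad_U, lr=1e-6)        # geodesic steps keep the bases orthogonal
V = cayley_step(V, grad_V, lr=1e-6)
p -= (1e-4 / (2.0 * margin)) * grad_p      # Euclidean step, renormalized by 2m (eq. 11)

W, s = compose_transition(U, p, V, margin)
print("singular values stay within", [s.min(), s.max()], "for margin", margin)
print("orthogonality error of U:", np.max(np.abs(U.T @ U - np.eye(n))))
```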
The neural network code was built on the Theano framework (Theano Development Team, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy. 3.1.1 CONVERGENCE ON SYNTHETIC MEMORY TASKS For different sequence lengths T of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins m. For the copy task, we used Elman networks without a transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix. As shown in Figure 1, we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths (T = 200, T = 500, T = 1000, T = 10000); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length T = 10000, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization but loosening orthogonality constraints can reduce the stability of signal propagation through the network. For the adding task, we trained a factorized RNN on T = 1000 length sequences, using a ReLU activation function on the hidden to hidden transition matrix. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m = 0, m = 1, m = 10, m = 100, and no margin, we find that the models with the purely orthogonal (m = 0) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs. 3.1.2 PERFORMANCE ON REAL DATA Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients at magnitude 1). We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate $10^{-6}$) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity and clipping gradients at magnitude 30.
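Since the PTB results below are reported in bits per character, here is a small sketch, not from the paper, of how bpc is typically computed from a model's per-character predictive distribution; the variable names are our own.

```python
import numpy as np

def bits_per_character(probs, targets):
    """Average negative log2-likelihood of the correct next character.

    probs:   array of shape (num_chars, vocab_size) with the model's
             predicted distribution at each step.
    targets: integer array of shape (num_chars,) with the true next characters.
    """
    eps = 1e-12                                  # guard against log(0)
    correct = probs[np.arange(len(targets)), targets]
    return float(np.mean(-np.log2(correct + eps)))

# Toy example with a 49-character vocabulary, as used for PTB here.
rng = np.random.default_rng(0)
vocab, steps = 49, 1000
logits = rng.normal(size=(steps, vocab))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
targets = rng.integers(vocab, size=steps)
print("uniform-predictor bpc = log2(49) =", np.log2(vocab))
print("random-softmax bpc:", bits_per_character(probs, targets))
```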
Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on both the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden to hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task, which presents longer distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model's representational ability. Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST. It is not surprising that orthogonality is useful for the MNIST tasks since they depend on long distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short distance signal propagation is possible. Thus it is possible that the RNN is first learning very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer distance dependencies. 3.1.3 SPECTRAL AND GRADIENT EVOLUTION It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.
Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden to hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction, which may help explain why enforcing an orthogonality constraint can be helpful for this task, when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum). We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments, a tanh transition nonlinearity was used which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3, with Glorot normal initialized transition matrices, begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is to be expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well for the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.
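The spectral analysis above (Figures 2 and 3) amounts to tracking the singular values of the composed transition matrix and the gradient norms propagated back through time. A minimal sketch of such monitoring follows; it is illustrative only, uses our own names, and assumes W and the per-step pre-activations are available from the training loop.

```python
import numpy as np

def spectrum_stats(W):
    """Spread of the singular values of the composed transition matrix W."""
    s = np.linalg.svd(W, compute_uv=False)
    return {"min": float(s.min()), "mean": float(s.mean()), "max": float(s.max())}

def gradient_norms_through_time(W, pre_activations, grad_last):
    """Norm of dL/dh_t propagated back from the last step through
    h_{t+1} = tanh(W h_t + ...), i.e. one application of D_t W from (3) per step."""
    norms = [np.linalg.norm(grad_last)]
    g = grad_last
    for a_t in reversed(pre_activations):
        D = np.diag(1.0 - np.tanh(a_t) ** 2)   # Jacobian of tanh at this step
        g = (D @ W).T @ g                       # back-propagate one time step
        norms.append(np.linalg.norm(g))
    return norms[::-1]

# Toy check: an orthogonal W preserves the gradient norm up to the contraction
# of tanh; an upward drift of the singular values (toward ~1.05) would partially
# compensate for that contraction.
rng = np.random.default_rng(0)
n, T = 64, 50
W, _ = np.linalg.qr(rng.normal(size=(n, n)))
pre_acts = [rng.normal(scale=0.1, size=n) for _ in range(T)]
norms = gradient_norms_through_time(W, pre_acts, grad_last=rng.normal(size=n))
print(spectrum_stats(W), "grad norm first/last:", norms[0], norms[-1])
```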
3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden to hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form $\lambda \lVert W^T W - I \rVert_2^2$. This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to $10^{-5}$. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed. The second approach we explore replaces the sigmoidal margin parameterization with a mean one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate $10^{-6}$) to keep U and V orthogonal and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a $10^{-5}$ learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training. 4 CONCLUSIONS We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization. ACKNOWLEDGMENTS We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research. 5 APPENDIX 5.1 ADDITIONAL FIGURES 5.2 COPY TASK NONLINEARITY We found that nonlinearities such as a rectified linear unit (ReLU) (Nair & Hinton, 2010) or hyperbolic tangent (tanh) made the copy task far more difficult to solve. Using tanh, a short sequence length (T = 100) copy task required both a soft constraint that encourages orthogonality and thousands of epochs for training. It is worth noting that in the unitary evolution recurrent neural network of Arjovsky et al.
(2015), the non-linearity (referred to as the "modReLU") is actually initialized as an identity operation that is free to deviate from identity during training. Furthermore, Henaff et al. (2016) derive a solution mechanism for the copy task that drops the non-linearity from an RNN. To explore this further, we experimented with a parametric leaky ReLU activation function (PReLU) which introduces a trainable slope α for negative-valued inputs x, producing $f(x) = \max(x, 0) + \alpha \min(x, 0)$ (He et al., 2015). Setting the slope α to one would make the PReLU equivalent to an identity function. We experimented with clamping α to 0.5, 0.7 or 1 in a factorized RNN with a spectral margin of 0.3 and found that only the model with α = 1 solved the T = 1000 length copy task. We also experimented with a trainable slope α, initialized to 0.7, and found that it converges to 0.96, further suggesting the optimal solution for the copy task is without a transition nonlinearity. Since the copy task is purely a memory task, one may imagine that a transition nonlinearity such as a tanh or ReLU may be detrimental to the task as it can lose information. Thus, we also tried a recent activation function that preserves information, called an orthogonal permutation linear unit (OPLU) (Chernodub & Nowicki, 2016). The OPLU preserves norm, making a fully norm-preserving RNN possible. Interestingly, this activation function allowed us to recover identical results on the copy task to those without a nonlinearity for different spectral margins. 5.3 METHOD RUNNING TIME Although the method proposed in section 2 relies on a matrix inversion, an operation with $O(n^3)$ complexity for an n × n matrix, the running time of an RNN factorized in such a way actually remains reasonable. This running time is summarized in Table 5 and includes all computations in the graph, together with the matrix inversion. As this method is meant to be used only for the analysis in this work, we find the running times acceptable for that purpose. Models were run on an Nvidia GTX-770 GPU and were run against the T=100 length copy task.
1. What is the main contribution of the paper regarding the impact of orthogonal weight matrices on learning dynamics in RNNs?
2. What are the strengths of the paper, particularly in the experimental results and insights gained?
3. Are there any concerns or suggestions for improving the experimental results, such as optimizing the learning rate for each regularization strength?
4. How does the paper address the question of whether orthogonality is useful as an initialization or a regularizer?
5. Can the authors provide further discussion on the initialization vs regularization dimension in the text?
Review
Review This paper investigates the impact of orthogonal weight matrices on learning dynamics in RNNs. The paper proposes a variety of interesting optimization formulations that enforce orthogonality in the recurrent weight matrix to varying degrees. The experimental results demonstrate several conclusions: enforcing exact orthogonality does not help learning, while enforcing soft orthogonality or initializing to orthogonal weights can substantially improve learning. While some of the optimization methods proposed currently require matrix inversion and are therefore slow in wall clock time, orthogonal initialization and some of the soft orthogonality constraints are relatively inexpensive and may find their way into practical use. The experiments are generally done to a high standard and yield a variety of useful insights, and the writing is clear. The experimental results are based on using a fixed learning rate for the different regularization strengths. Learning speed might be highly dependent on this, and different strengths may admit different maximal stable learning rates. It would be instructive to optimize the learning rate for each margin separately (maybe on one of the shorter sequence lengths) to see how soft orthogonality impacts the stability of the learning process. Fig. 5, for instance, shows that a sigmoid improves stability—but perhaps slightly reducing the learning rate for the non-sigmoid Gaussian prior RNN would make the learning well-behaved again for weightings less than 1. Fig. 4 shows singular values converging around 1.05 rather than 1. Does initializing to orthogonal matrices multiplied by 1.05 confer any noticeable advantage over standard orthogonal matrices? Especially on the T=10K copy task? “Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal suggesting that evolution away from orthogonality is not a serious problem on this task.” This is consistent with the analysis given in Saxe et al. 2013, where for deep linear nets, if a singular value is initialized to 1 but dies away during training, this is because it must be zero to implement the desired input-output map. More broadly, an open question has been whether orthogonality is useful as an initialization, as proposed by Saxe et al., where its role is mainly as a preconditioner which makes optimization proceed quickly but doesn’t fundamentally change the optimization problem; or whether it is useful as a regularizer, as proposed by Arjovsky et al. 2015 and Henaff et al. 2015, that is, as an additional constraint in the optimization problem (minimize loss subject to weights being orthogonal). These experiments seem to show that mere initialization to orthogonal weights is enough to reap an optimization speed advantage, and that too much regularization begins to hurt performance—i.e., substantially changing the optimization problem is undesirable. This point is also apparent in Fig. 2: In terms of the training loss on MNIST (Fig. 2), no margin does almost indistinguishably from a margin of 1 or .1. However in terms of accuracy, a margin of .1 is best. This shows that large or nonexistent margins (i.e., orthogonal initializations) enable fast optimization of the training loss, but among models that attain similar training loss, the more nearly orthogonal weights perform better. 
This starts to separate out the optimization speed advantage conferred by orthogonality from the regularization advantage it confers. It may be useful to more explicitly discuss the initialization vs regularization dimension in the text. Overall, this paper contributes a variety of techniques and intuitions which are likely to be useful in training RNNs.
ICLR
Title On orthogonality and learning recurrent networks with long term dependencies Abstract It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. 1 INTRODUCTION The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly. Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n × n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation. The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent. 
To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of a canonical inner products and Cayley transformations. Their experimental work indicates that full capacity unitary RNN models can solve the copy memory problem whereas both LSTM networks and restricted capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length (T = 2000). In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model’s representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it. 1.1 VANISHING AND EXPLODING GRADIENTS The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network. A neural network with n hidden layers has pre-activations ai(hi−1) = Wi hi−1 + bi, i ∈ {2, · · · , n} (1) For notational convenience, we combine parameters Wi and bi to form an affine matrix θ. We can see that for some loss function L at layer n , the derivative with respect to parameters θi is: ∂L ∂θi = ∂an+1 ∂θi ∂L ∂an+1 (2) The partial derivatives for the pre-activations can be decomposed as follows: ∂ai+1 ∂θi = ∂ai ∂θi ∂hi ∂ai ∂ai+1 ∂hi = ∂ai ∂θi DiWi+1 → ∂ai+1 ∂ai = DiWi+1, (3) where Di is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer i + 1 with respect to the pre-activation inputs. Typically, D is diagonal. Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products: ∂L ∂θi = ∂ai ∂θi n∏ j=i (DjWj+1) ∂L ∂an+1 (4) In (Pascanu et al., 2013), it is shown that the 2-norm of ∂ai+1 ∂ai is bounded by the product of the norms of the non-linearity’s Jacobian and transition matrix at time t (layer i ), as follows:∣∣∣∣∣∣∣∣∂at+1∂at ∣∣∣∣∣∣∣∣ ≤ ||Dt|| ||Wt|| ≤ λDt λWt = ηt, λDt , λWt ∈ R. (5) where λDt and λWt are the largest singular values of the non-linearity’s Jacobian Dt and the transition matrix Wt . In RNNs, Wt is shared across time and can be simply denoted as W. Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer’s linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have ηt ≤ 1 at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T . The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data. 
The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive, thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving, to expansive and applies the same gain on the forward signal as on the back-propagated gradient signal. 2 OUR APPROACH Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm which is given by ||W||2 = max [ ||Wx|| ||x|| ] . (6) By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a normpreserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form: λ ∑ i ||WTi Wi − I||2. (7) However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values which are real and positive by definition. We have W = USVT . (8) Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value. We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy UTU = I and VTV = I respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values. During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian, G with respect to the objective function, an update is performed as follows: A = GMT −MGT Mnew = M+ (I+ η 2 A)−1(I− η 2 A), (9) where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform and η is the learning rate. While the update rule in (9) allows us to maintain an orthogonal hidden to hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix W in factorized form, as a singular value decomposition with orthogonal bases U and V updated by geodesic gradient descent using the Cayley transform approach above. If W is an orthogonal matrix, the singular values in the diagonal matrix S are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation. 
Specifically, we define a margin m around 1 within which the singular values must lie. This is achieved with the parameterization si = 2m(σ(pi)− 0.5) + 1, si ∈ {diag(S)}, m ∈ [0, 1]. (10) The singular values are thus restricted to the range [1−m, 1 +m] and the underlying parameters pi are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values – they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value’s progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering 10, the gradient backpropagation of some loss L toward parameters pi is found as dL dpi = dsi dpi dL dsi = 2m dσ(pi) dpi dL dsi . (11) From (11), it can be seen that the magnitude of the update step for pi is scaled by the margin hyperparameter m . This means for example that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by 2m . This margin formulation both guarantees singular values lie within a well defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of U and V and impose a regularization term corresponding to a mean one Gaussian prior on these singular values. This encourages the weight matrix W to be norm preserving with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below. 3 EXPERIMENTS In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden to hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993). The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence. 
We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol ai ∈ {a1 , ..., ap} out of p = 8 possible symbols. This sub-sequence is followed by T − 1 elements of the blank category a0 which is terminated at step T by a delimiter symbol ap+1 and 10 more elements of the blank category. The network must learn to remember the initial 10 element sequence for T time steps and output it after receiving the delimiter symbol. The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T . The sequence is composed of T values sampled from a uniform distribution in the range [0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range [0, T2 − 1] and the second in the range [ T 2 , T − 1], where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum. The sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task is with a simple flattening of the image matrices; the harder variant of this task includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer distance dependencies between pixels that must be interpreted by the classification model. The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. In our experiments on two subsets of the data: in the first, we first use 23% of the data with strings with up to 75 characters and in the second we include over 99% of the dataset, picking strings with up to 300 characters. 3.1 LOOSENING HARD ORTHOGONALITY CONSTRAINTS In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden to hidden transition matrix. In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50 and for generated data (the copy and adding tasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments although it may not be required and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden to hidden matrix factorization as in (8) using geodesic gradient descent on the bases (learning rate 10−6) and RMSprop on the other parameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients of 100 magnitude. 
The neural network code was built on the Theano framework (Theano Development Team, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy. 3.1.1 CONVERGENCE ON SYNTHETIC MEMORY TASKS For different sequence lengths T of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins m . For the copy task, we used Elman networks without a transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix. As shown in Figure 1 we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths (T = 200, T = 500, T = 1000, T = 10000); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length T = 10000, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization but loosening orthogonality constraints can reduce the stability of signal propagation through the network. For the adding task, we trained a factorized RNN on T = 1000 length sequences, using a ReLU activation function on the hidden to hidden transition matrix. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m = 0, m = 1, m = 10, m = 100, and no margin, we find that the models with the purely orthogonal (m = 0) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs. 3.1.2 PERFORMANCE ON REAL DATA Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1). We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate 10−6) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity, and clipping gradients of 30 magnitude. 
Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matri- ces (margin of zero). The best results on both the ordered and sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden to hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task which presents longer distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model’s representational ability. Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST. It is not surprising that orthogonality is useful for the MNIST tasks since they depend on long distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short distance signal propagation is possible. Thus it is possible that the RNN is first learning very local dependencies between neighbouring characters and that given enough context, constraining deviation from orthogonality can help force the network to learn longer distance dependencies. 3.1.3 SPECTRAL AND GRADIENT EVOLUTION It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected. 
Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden to hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction which may help explain why enforcing an orthogonality constraint can be helpful for this task, when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum). We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments, a tanh transition nonlinearity was used which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3 with Glorot normal initialized transition matrices begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is to be expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well for the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.
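The spectral monitoring behind Figure 3 amounts to computing the singular values of the transition matrix at training checkpoints; the short NumPy sketch below (our illustration, not the paper's code) shows one way to summarize the spread.
```python
import numpy as np

def spectrum_stats(W):
    # Min / mean / max singular value of the transition matrix W.
    s = np.linalg.svd(W, compute_uv=False)
    return s.min(), s.mean(), s.max()

# e.g., compare an orthogonal matrix with one that has drifted away from orthogonality
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((128, 128)))
drifted = Q + 0.02 * rng.standard_normal((128, 128))   # stand-in for a trained transition matrix
print(spectrum_stats(Q))        # all values ~1 for an orthogonal matrix
print(spectrum_stats(drifted))  # the spread widens as W deviates from orthogonality
```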
3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden to hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form $\lambda \|W^T W - I\|_2^2$. This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to 10^-5. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed. The second approach we explore replaces the sigmoidal margin parameterization with a mean one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate 10^-6) to keep U and V orthogonal and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a 10^-5 learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training. 4 CONCLUSIONS We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization. ACKNOWLEDGMENTS We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research. 5 APPENDIX 5.1 ADDITIONAL FIGURES 5.2 COPY TASK NONLINEARITY We found that nonlinearities such as a rectified linear unit (ReLU) (Nair & Hinton, 2010) or hyperbolic tangent (tanh) made the copy task far more difficult to solve. Using tanh, a short sequence length (T = 100) copy task required both a soft constraint that encourages orthogonality and thousands of epochs for training. It is worth noting that in the unitary evolution recurrent neural network of Arjovsky et al.
(2015), the non-linearity (referred to as the "modReLU") is actually initialized as an identity operation that is free to deviate from identity during training. Furthermore, Henaff et al. (2016) derive a solution mechanism for the copy task that drops the non-linearity from an RNN. To explore this further, we experimented with a parametric leaky ReLU activation function (PReLU) which introduces a trainable slope α for negative valued inputs x, producing f(x) = max(x, 0) + α min(x, 0) (He et al., 2015). Setting the slope α to one would make the PReLU equivalent to an identity function. We experimented with clamping α to 0.5, 0.7 or 1 in a factorized RNN with a spectral margin of 0.3 and found that only the model with α = 1 solved the T = 1000 length copy task. We also experimented with a trainable slope α, initialized to 0.7, and found that it converges to 0.96, further suggesting that the optimal solution for the copy task is without a transition nonlinearity. Since the copy task is purely a memory task, one may imagine that a transition nonlinearity such as a tanh or ReLU may be detrimental to the task as it can lose information. Thus, we also tried a recent activation function that preserves information, called an orthogonal permutation linear unit (OPLU) (Chernodub & Nowicki, 2016). The OPLU preserves norm, making a fully norm-preserving RNN possible. Interestingly, this activation function allowed us to recover identical results on the copy task to those without a nonlinearity for different spectral margins. 5.3 METHOD RUNNING TIME Although the method proposed in Section 2 relies on a matrix inversion, an operation with O(n^3) complexity for an n × n matrix, the running time of an RNN factorized in such a way actually remains reasonable. This running time is summarized in Table 5 and includes all computations in the graph, together with the matrix inversion. As this method is meant to be used only for the analysis in this work, we find the running times acceptable for that purpose. Models were run on an Nvidia GTX-770 GPU against the T = 100 length copy task.
1. What is the main contribution of the paper, and how does it fit into recent research in recurrent neural networks? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding its empirical investigation and theoretical formulation? 3. How does the reviewer assess the paper's experimental side, and what kind of evaluations would they like to see in future studies? 4. Are there any concerns about the practicality of the proposed method, especially when comparing it to other architectures like LSTMs? 5. How does the reviewer perceive the overall value and significance of the paper, considering its limitations and potential applications?
Review
Review The paper is well-motivated, and is part of a line of recent work investigating the use of orthogonal weight matrices within recurrent neural networks. While using orthogonal weights addresses the issue of vanishing/exploding gradients, it is unclear whether anything is lost, either in representational power or in trainability, by enforcing orthogonality. As such, an empirical investigation that examines how these properties are affected by deviation from orthogonality is a useful contribution. The paper is clearly written, and the primary formulation for investigating soft orthogonality constraints (representing the weight matrices in their SVD factorized form, which gives explicit control over the singular values) is clean and natural, albeit not necessarily ideal from a practical computational standpoint (as it requires maintaining multiple orthogonal weight matrices each requiring an expensive update step). I am unaware of this approach being investigated previously. The experimental side, however, is somewhat lacking. The paper evaluates two tasks: a copy task, using an RNN architecture without transition non-linearities, and sequential/permuted sequential MNIST. These are reasonable choices for an initial evaluation, but are both toy problems and don't shed much light on the practical aspects of the proposed approaches. An evaluation in a more realistic setting would be valuable (e.g., a language modeling task). Furthermore, while investigating pure RNN's makes sense for evaluating effects of orthogonality, it feels somewhat academic: LSTMs also provide a mechanism to capture longer-term dependencies, and in the tasks where the proposed approach was compared directly to an LSTM, it was significantly outperformed. It would be very interesting to see the effects of the proposed soft orthogonality constraint in additional architectures (e.g., deep feed-forward architectures, or whether there's any benefit when embedded within an LSTM, although this seems doubtful). Overall, the paper addresses a clear-cut question with a well-motivated approach, and has interesting findings on some toy datasets. As such I think it could provide a valuable contribution. However, the significance of the work is restricted by the limited experimental settings (both datasets and network architectures).
ICLR
Title On orthogonality and learning recurrent networks with long term dependencies Abstract It is well known that it is challenging to train deep neural networks and recurrent neural networks for tasks that exhibit long term dependencies. The vanishing or exploding gradient problem is a well known issue associated with these challenges. One approach to addressing vanishing and exploding gradients is to use either soft or hard constraints on weight matrices so as to encourage or enforce orthogonality. Orthogonal matrices preserve gradient norm during backpropagation and can therefore be a desirable property; however, we find that hard constraints on orthogonality can negatively affect the speed of convergence and model performance. This paper explores the issues of optimization convergence, speed and gradient stability using a variety of different methods for encouraging or enforcing orthogonality. In particular we propose a weight matrix factorization and parameterization strategy through which we can bound matrix norms and therein control the degree of expansivity induced during backpropagation. 1 INTRODUCTION The depth of deep neural networks confers representational power, but also makes model optimization more challenging. Training deep networks with gradient descent based methods is known to be difficult as a consequence of the vanishing and exploding gradient problem (Hochreiter & Schmidhuber, 1997). Typically, exploding gradients are avoided by clipping large gradients (Pascanu et al., 2013) or introducing an L2 or L1 weight norm penalty. The latter has the effect of bounding the spectral radius of the linear transformations, thus limiting the maximal gain across the transformation. Krueger & Memisevic (2015) attempt to stabilize the norm of propagating signals directly by penalizing differences in successive norm pairs in the forward pass and Pascanu et al. (2013) propose to penalize successive gradient norm pairs in the backward pass. These regularizers affect the network parameterization with respect to the data instead of penalizing weights directly. Both expansivity and contractivity of linear transformations can also be limited by more tightly bounding their spectra. By limiting the transformations to be orthogonal, their singular spectra are limited to unitary gain causing the transformations to be norm-preserving. Le et al. (2015) and Henaff et al. (2016) have respectively shown that identity initialization and orthogonal initialization can be beneficial. Arjovsky et al. (2015) have gone beyond initialization, building unitary recurrent neural network (RNN) models with transformations that are unitary by construction which they achieved by composing multiple basic unitary transformations. The resulting transformations, for some n-dimensional input, cover only some subset of possible n × n unitary matrices but appear to perform well on simple tasks and have the benefit of having low complexity in memory and computation. The entire set of possible unitary or orthogonal parameterizations forms the Stiefel manifold. At a much higher computational cost, gradient descent optimization directly along this manifold can be done via geodesic steps (Nishimori, 2005; Tagare, 2011). Recent work (Wisdom et al., 2016) has proposed the optimization of unitary matrices along the Stiefel manifold using geodesic gradient descent. 
To produce a full-capacity parameterization for unitary matrices they use some insights from Tagare (2011), combining the use of canonical inner products and Cayley transformations. Their experimental work indicates that full capacity unitary RNN models can solve the copy memory problem whereas both LSTM networks and restricted capacity unitary RNN models having similar complexity appear unable to solve the task for a longer sequence length (T = 2000). In contrast, here we explore the optimization of real valued matrices within a configurable margin about the Stiefel manifold. We suspect that a strong constraint of orthogonality limits the model’s representational power, hindering its performance, and may make optimization more difficult. We explore this hypothesis empirically by employing a factorization technique that allows us to limit the degree of deviation from the Stiefel manifold. While we use geodesic gradient descent, we simultaneously update the singular spectra of our matrices along Euclidean steps, allowing optimization to step away from the manifold while still curving about it. 1.1 VANISHING AND EXPLODING GRADIENTS The issue of vanishing and exploding gradients as it pertains to the parameterization of neural networks can be illuminated by looking at the gradient back-propagation chain through a network. A neural network with n hidden layers has pre-activations $a_i(h_{i-1}) = W_i h_{i-1} + b_i, \quad i \in \{2, \cdots, n\}$ (1) For notational convenience, we combine parameters $W_i$ and $b_i$ to form an affine matrix $\theta$. We can see that for some loss function L at layer n, the derivative with respect to parameters $\theta_i$ is: $\frac{\partial L}{\partial \theta_i} = \frac{\partial a_{n+1}}{\partial \theta_i} \frac{\partial L}{\partial a_{n+1}}$ (2) The partial derivatives for the pre-activations can be decomposed as follows: $\frac{\partial a_{i+1}}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \frac{\partial h_i}{\partial a_i} \frac{\partial a_{i+1}}{\partial h_i} = \frac{\partial a_i}{\partial \theta_i} D_i W_{i+1} \;\rightarrow\; \frac{\partial a_{i+1}}{\partial a_i} = D_i W_{i+1},$ (3) where $D_i$ is the Jacobian corresponding to the activation function, containing partial derivatives of the hidden units at layer i + 1 with respect to the pre-activation inputs. Typically, D is diagonal. Following the above, the gradient in equation 2 can be fully decomposed into a recursive chain of matrix products: $\frac{\partial L}{\partial \theta_i} = \frac{\partial a_i}{\partial \theta_i} \prod_{j=i}^{n} (D_j W_{j+1}) \frac{\partial L}{\partial a_{n+1}}$ (4) In (Pascanu et al., 2013), it is shown that the 2-norm of $\frac{\partial a_{i+1}}{\partial a_i}$ is bounded by the product of the norms of the non-linearity’s Jacobian and transition matrix at time t (layer i), as follows: $\left\| \frac{\partial a_{t+1}}{\partial a_t} \right\| \le \|D_t\| \, \|W_t\| \le \lambda_{D_t} \lambda_{W_t} = \eta_t, \quad \lambda_{D_t}, \lambda_{W_t} \in \mathbb{R},$ (5) where $\lambda_{D_t}$ and $\lambda_{W_t}$ are the largest singular values of the non-linearity’s Jacobian $D_t$ and the transition matrix $W_t$. In RNNs, $W_t$ is shared across time and can be simply denoted as W. Equation 5 shows that the gradient can grow or shrink at each layer depending on the gain of each layer’s linear transformation W and the gain of the Jacobian D. The gain caused by each layer is magnified across all time steps or layers. It is easy to have extreme amplification in a recurrent neural network where W is shared across time steps and a non-unitary gain in W is amplified exponentially. The phenomena of extreme growth or contraction of the gradient across time steps or layers are known as the exploding and the vanishing gradient problems, respectively. It is sufficient for RNNs to have $\eta_t \le 1$ at each time t to enable the possibility of vanishing gradients, typically for some large number of time steps T. The rate at which a gradient (or forward signal) vanishes depends on both the parameterization of the model and on the input data.
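To make the bound in (5) concrete, the following NumPy sketch (our illustration, not code from the paper) back-propagates a vector through T linear transitions whose spectral norm is fixed; the Jacobian D is treated as the identity, so any growth or decay isolates the effect of the gain of W.
```python
import numpy as np

def scaled_orthogonal(n, gain, seed=0):
    # Random orthogonal matrix rescaled so every singular value (and hence the
    # spectral norm) equals `gain`.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return gain * q

def backprop_norm(W, T, seed=1):
    # Norm of a vector after T multiplications by W^T (a linear RNN with D = I).
    rng = np.random.default_rng(seed)
    g = rng.standard_normal(W.shape[0])
    for _ in range(T):
        g = W.T @ g
    return np.linalg.norm(g)

T = 200
for gain in [0.98, 1.0, 1.02]:
    print(f"spectral norm {gain}: |gradient| after {T} steps = "
          f"{backprop_norm(scaled_orthogonal(128, gain), T):.3e}")
# gain < 1 shrinks the norm as gain**T, gain > 1 grows it, gain = 1 (orthogonal) preserves it.
```
With a spectral norm of exactly one (an orthogonal transition), the norm is preserved for any T, which is the property the orthogonality constraints discussed below aim to exploit.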
The parameterization may be conditioned by placing appropriate constraints on W. It is worth keeping in mind that the Jacobian D is typically contractive (thus tending to be norm-reducing) and is also data-dependent, whereas W can vary from being contractive to norm-preserving, to expansive and applies the same gain on the forward signal as on the back-propagated gradient signal. 2 OUR APPROACH Vanishing and exploding gradients can be controlled to a large extent by controlling the maximum and minimum gain of W. The maximum gain of a matrix W is given by the spectral norm which is given by $\|W\|_2 = \max_x \left[ \frac{\|Wx\|}{\|x\|} \right].$ (6) By keeping our weight matrix W close to orthogonal, one can ensure that it is close to a norm-preserving transformation (where the spectral norm is equal to one, but the minimum gain is also one). One way to achieve this is via a simple soft constraint or regularization term of the form: $\lambda \sum_i \|W_i^T W_i - I\|_2.$ (7) However, it is possible to formulate a more direct parameterization or factorization for W which permits hard bounds on the amount of expansion and contraction induced by W. This can be achieved by simply parameterizing W according to its singular value decomposition, which consists of the composition of orthogonal basis matrices U and V with a diagonal spectral matrix S containing the singular values which are real and positive by definition. We have $W = USV^T.$ (8) Since the spectral norm or maximum gain of a matrix is equal to its largest singular value, this decomposition allows us to control the maximum gain or expansivity of the weight matrix by controlling the magnitude of the largest singular value. Similarly, the minimum gain or contractivity of a matrix can be obtained from the minimum singular value. We can keep the bases U and V orthogonal via geodesic gradient descent along the set of weights that satisfy $U^T U = I$ and $V^T V = I$ respectively. The submanifolds that satisfy these constraints are called Stiefel manifolds. We discuss how this is achieved in more detail below, then discuss our construction for bounding the singular values. During optimization, in order to maintain the orthogonality of an orthogonally-initialized matrix M, i.e. where M = U, M = V or M = W if so desired, we employ a Cayley transformation of the update step onto the Stiefel manifold of (semi-)orthogonal matrices, as in Nishimori (2005) and Tagare (2011). Given an orthogonally-initialized parameter matrix M and its Jacobian G with respect to the objective function, an update is performed as follows: $A = GM^T - MG^T, \quad M_{\mathrm{new}} = \left(I + \tfrac{\eta}{2} A\right)^{-1} \left(I - \tfrac{\eta}{2} A\right) M,$ (9) where A is a skew-symmetric matrix (that depends on the Jacobian and on the parameter matrix) which is mapped to an orthogonal matrix via a Cayley transform and η is the learning rate. While the update rule in (9) allows us to maintain an orthogonal hidden to hidden transition matrix W if desired, we are interested in exploring the effect of stepping away from the Stiefel manifold. As such, we parameterize the transition matrix W in factorized form, as a singular value decomposition with orthogonal bases U and V updated by geodesic gradient descent using the Cayley transform approach above. If W is an orthogonal matrix, the singular values in the diagonal matrix S are all equal to one. However, in our formulation we allow these singular values to deviate from one and employ a sigmoidal parameterization to apply a hard constraint on the maximum and minimum amount of deviation.
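As a small illustration of the geodesic step in (9), the sketch below (ours; the paper's own implementation is in Theano and is not shown here) applies one Cayley-transform update to an orthogonal matrix and checks that orthogonality is preserved. The function name and the random stand-in for the gradient G are illustrative assumptions.
```python
import numpy as np

def cayley_update(M, G, lr):
    # One update of an orthogonal matrix M given a Euclidean gradient G, following (9):
    # A is skew-symmetric, its Cayley transform is orthogonal, and the product of two
    # orthogonal matrices keeps M on the Stiefel manifold (up to numerical error).
    n = M.shape[0]
    I = np.eye(n)
    A = G @ M.T - M @ G.T
    cayley = np.linalg.solve(I + 0.5 * lr * A, I - 0.5 * lr * A)  # (I + eta/2 A)^-1 (I - eta/2 A)
    return cayley @ M

rng = np.random.default_rng(0)
M, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # orthogonal initialization
G = rng.standard_normal((64, 64))                    # stand-in for dL/dM
M_new = cayley_update(M, G, lr=1e-6)
print(np.max(np.abs(M_new.T @ M_new - np.eye(64))))  # ~1e-15: still orthogonal
```
The spectrum S is updated separately with ordinary Euclidean steps, under the sigmoidal margin parameterization described next.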
Specifically, we define a margin m around 1 within which the singular values must lie. This is achieved with the parameterization $s_i = 2m(\sigma(p_i) - 0.5) + 1, \quad s_i \in \{\mathrm{diag}(S)\}, \; m \in [0, 1].$ (10) The singular values are thus restricted to the range [1 − m, 1 + m] and the underlying parameters $p_i$ are updated freely via stochastic gradient descent. Note that this parameterization strategy also has implications on the step sizes that gradient descent based optimization will take when updating the singular values – they tend to be smaller compared to models with no margin constraining their values. Specifically, a singular value’s progression toward a margin is slowed the closer it is to the margin. The sigmoidal parameterization can also impart another effect on the step size along the spectrum which needs to be accounted for. Considering (10), the gradient backpropagation of some loss L toward parameters $p_i$ is found as $\frac{dL}{dp_i} = \frac{ds_i}{dp_i} \frac{dL}{ds_i} = 2m \frac{d\sigma(p_i)}{dp_i} \frac{dL}{ds_i}.$ (11) From (11), it can be seen that the magnitude of the update step for $p_i$ is scaled by the margin hyperparameter m. This means for example that for margins less than one, the effective learning rate for the spectrum is reduced in proportion to the margin. Consequently, we adjust the learning rate along the spectrum to be independent of the margin by renormalizing it by 2m. This margin formulation both guarantees singular values lie within a well defined range and slows deviation from orthogonality. Alternatively, one could enforce the orthogonality of U and V and impose a regularization term corresponding to a mean one Gaussian prior on these singular values. This encourages the weight matrix W to be norm preserving with a controllable strength equivalent to the variance of the Gaussian. We also explore this approach further below. 3 EXPERIMENTS In this section, we explore hard and soft orthogonality constraints on factorized weight matrices for recurrent neural network hidden to hidden transitions. With hard orthogonality constraints on U and V, we investigate the effect of widening the spectral margin or bounds on convergence and performance. Loosening these bounds allows increasingly larger margins within which the transition matrix W can deviate from orthogonality. We confirm that orthogonal initialization is useful as noted in Henaff et al. (2016), and we show that although strict orthogonality guarantees stable gradient norm, loosening orthogonality constraints can increase the rate of gradient descent convergence. We begin our analyses on tasks that are designed to stress memory: a sequence copying task and a basic addition task (Hochreiter & Schmidhuber, 1997). We then move on to tasks on real data that require models to capture long-range dependencies: digit classification based on sequential and permuted MNIST vectors (Le et al., 2015; LeCun et al., 1998). Finally, we look at a basic language modeling task using the Penn Treebank dataset (Marcus et al., 1993). The copy and adding tasks, introduced by Hochreiter & Schmidhuber (1997), are synthetic benchmarks with pathologically hard long distance dependencies that require long-term memory in models. The copy task consists of an input sequence that must be remembered by the network, followed by a series of blank inputs terminated by a delimiter that denotes the point at which the network must begin to output a copy of the initial sequence.
We use an input sequence of T + 20 elements that begins with a sub-sequence of 10 elements to copy, each containing a symbol $a_i \in \{a_1, \dots, a_p\}$ out of p = 8 possible symbols. This sub-sequence is followed by T − 1 elements of the blank category $a_0$ which is terminated at step T by a delimiter symbol $a_{p+1}$ and 10 more elements of the blank category. The network must learn to remember the initial 10 element sequence for T time steps and output it after receiving the delimiter symbol. The goal of the adding task is to add two numbers together after a long delay. Each number is randomly picked at a unique position in a sequence of length T. The sequence is composed of T values sampled from a uniform distribution in the range [0, 1), with each value paired with an indicator value that identifies the value as one of the two numbers to remember (marked 1) or as a value to ignore (marked 0). The two numbers are positioned randomly in the sequence, the first in the range [0, T/2 − 1] and the second in the range [T/2, T − 1], where 0 marks the first element. The network must learn to identify and remember the two numbers and output their sum. In the sequential MNIST task from Le et al. (2015), MNIST digits are flattened into vectors that can be traversed sequentially by a recurrent neural network. The goal is to classify the digit based on the sequential input of pixels. The simple variant of this task is with a simple flattening of the image matrices; the harder variant of this task includes a random permutation of the pixels in the input vector that is determined once for an experiment. The latter formulation introduces longer distance dependencies between pixels that must be interpreted by the classification model. The English Penn Treebank (PTB) dataset from Marcus et al. (1993) is an annotated corpus of English sentences, commonly used for benchmarking language models. We employ a sequential character prediction task: given a sentence, a recurrent neural network must predict the next character at each step, from left to right. We use input sequences of variable length, with each sequence containing one sentence. We model 49 characters including lowercase letters (all strings are in lowercase), numbers, common punctuation, and an unknown character placeholder. We experiment on two subsets of the data: in the first, we use 23% of the data with strings of up to 75 characters, and in the second we include over 99% of the dataset, picking strings with up to 300 characters. 3.1 LOOSENING HARD ORTHOGONALITY CONSTRAINTS In this section, we experimentally explore the effect of loosening hard orthogonality constraints through loosening the spectral margin defined above for the hidden to hidden transition matrix. In all experiments, we employed RMSprop (Tieleman & Hinton, 2012) when not using geodesic gradient descent. We used minibatches of size 50 and for generated data (the copy and adding tasks), we assumed an epoch length of 100 minibatches. We cautiously introduced gradient clipping at magnitude 100 (unless stated otherwise) in all of our RNN experiments although it may not be required, and we consistently applied a small weight decay of 0.0001. Unless otherwise specified, we trained all simple recurrent neural networks with the hidden to hidden matrix factorization as in (8) using geodesic gradient descent on the bases (learning rate 10^-6) and RMSprop on the other parameters (learning rate 0.0001), using a tanh transition nonlinearity, and clipping gradients of magnitude 100.
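As an illustration of the factorization in (8) with the sigmoidal margin of (10), the following NumPy sketch (ours, with illustrative names; the actual experiments used Theano) assembles the hidden to hidden matrix from orthogonal bases and a constrained spectrum. Per (11), the learning rate on the underlying spectrum parameters would additionally be renormalized by 2m during training.
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def build_transition(U, V, p, margin):
    # W = U S V^T as in (8); with a margin, the singular values are confined to
    # [1 - m, 1 + m] via (10). `p` are the free underlying spectrum parameters.
    if margin is None:
        s = p                                   # "no margin": direct spectrum parameterization
    else:
        s = 2.0 * margin * (sigmoid(p) - 0.5) + 1.0
    return (U * s) @ V.T                        # same as U @ np.diag(s) @ V.T

rng = np.random.default_rng(0)
n = 128
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
p = np.zeros(n)                                 # sigmoid(0) = 0.5, so all singular values start at 1
W = build_transition(U, V, p, margin=0.1)
print(np.round(np.linalg.svd(W, compute_uv=False)[:3], 3))  # ~[1. 1. 1.]
```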
The neural network code was built on the Theano framework (Theano Development Team, 2016). When parameterizing a matrix in factorized form, we apply the weight decay on the composite matrix rather than on the factors in order to be consistent across experiments. For MNIST and PTB, test set metrics were computed based on the parameterization that gave the best validation set accuracy. 3.1.1 CONVERGENCE ON SYNTHETIC MEMORY TASKS For different sequence lengths T of the copy and adding tasks, we trained a factorized RNN with 128 hidden units and various spectral margins m. For the copy task, we used Elman networks without a transition non-linearity as in Henaff et al. (2016). We discuss our investigations into the use of a non-linearity on the copy task in the Appendix. As shown in Figure 1, we see an increase in the rate of convergence as we increase the spectral margin. This observation generally holds across the tested sequence lengths (T = 200, T = 500, T = 1000, T = 10000); however, large spectral margins hinder convergence on extremely long sequence lengths. At sequence length T = 10000, parameterizations with spectral margins larger than 0.001 converge slower than when using a margin of 0.001. In addition, the experiment without a margin failed to converge on the longest sequence length. This follows the expected pattern where stepping away from the Stiefel manifold may help with gradient descent optimization but loosening orthogonality constraints can reduce the stability of signal propagation through the network. For the adding task, we trained a factorized RNN on T = 1000 length sequences, using a ReLU activation function on the hidden to hidden transition. The mean squared error (MSE) is shown for different spectral margins in Figure 5 in the Appendix. Testing spectral margins m = 0, m = 1, m = 10, m = 100, and no margin, we find that the models with the purely orthogonal (m = 0) and the unconstrained (no margin) transition matrices failed to begin converging beyond baseline MSE within 2000 epochs. 3.1.2 PERFORMANCE ON REAL DATA Having confirmed that an orthogonality constraint can negatively impact convergence rate, we seek to investigate the effect on model performance for tasks on real data. We show the results of experiments on permuted sequential MNIST in Table 2 and ordered sequential MNIST in Table 1. The loss curves are shown in Figure 6 in the Appendix and reveal an increased convergence rate for larger spectral margins. We trained the factorized RNN models with 128 hidden units for 120 epochs. We also trained an LSTM with 128 hidden units on both tasks for 150 epochs, configured with peephole connections, orthogonally initialized (and forget gate bias initialized to one), and trained with RMSprop (learning rate 0.0001, clipping gradients of magnitude 1). We show the results of experiments on PTB character prediction, in terms of bits per character (bpc) and prediction accuracy, for a subset of short sequences (up to 75 characters; 23% of data) in Table 3 and for a subset of long sequences (up to 300 characters; 99% of data) in Table 4. We trained factorized RNN models with 512 hidden units for 200 epochs with geodesic gradient descent on the bases (learning rate 10^-6) and RMSprop on the other parameters (learning rate 0.001), using a tanh transition nonlinearity, and clipping gradients of magnitude 30.
Interestingly, for both the ordered and permuted sequential MNIST tasks, models with a non-zero margin significantly outperform those that are constrained to have purely orthogonal transition matrices (margin of zero). The best results on both the ordered and permuted sequential MNIST tasks were yielded by models with a spectral margin of 0.1, at 94.10% accuracy and 91.44% accuracy, respectively. An LSTM outperformed the RNNs in both tasks; nevertheless, RNNs with hidden to hidden transitions initialized as orthogonal matrices performed admirably without a memory component and without all of the additional parameters associated with gates. Indeed, orthogonally initialized RNNs performed almost on par with the LSTM in the permuted sequential MNIST task which presents longer distance dependencies than the ordered task. Although the optimal margin appears to be 0.1, RNNs with large margins perform almost identically to an RNN without a margin, as long as the transition matrix is initialized as orthogonal. On these tasks, orthogonal initialization appears to significantly outperform Glorot normal initialization (Glorot & Bengio, 2010) or initializing the matrix as identity. It is interesting to note that for the MNIST tasks, orthogonal initialization appears useful while orthogonality constraints appear mainly detrimental. This suggests that while orthogonality helps early training by stabilizing gradient flow across many time steps, orthogonality constraints may need to be loosened on some tasks so as not to over-constrain the model’s representational ability. Curiously, larger margins and even models without sigmoidal constraints on the spectrum (no margin) performed well as long as they were initialized to be orthogonal, suggesting that evolution away from orthogonality is not a serious problem on MNIST. It is not surprising that orthogonality is useful for the MNIST tasks since they depend on long distance signal propagation with a single output at the end of the input sequence. On the other hand, character prediction with PTB produces an output at every time step. Constraining deviation from orthogonality proved detrimental for short sentences (Table 3) and beneficial when long sentences were included (Table 4). Furthermore, Glorot normal initialization did not perform worse than orthogonal initialization for PTB. Since an output is generated for every character in a sentence, short distance signal propagation is possible. Thus it is possible that the RNN is first learning very local dependencies between neighbouring characters and that, given enough context, constraining deviation from orthogonality can help force the network to learn longer distance dependencies. 3.1.3 SPECTRAL AND GRADIENT EVOLUTION It is interesting to note that even long sequence lengths (T=1000) in the copy task can be solved efficiently with rather large margins on the spectrum. In Figure 2 we look at the gradient propagation of the loss from the last time step in the network with respect to the hidden activations. We can see that for a purely orthogonal parameterization of the transition matrix (when the margin is zero), the gradient norm is preserved across time steps, as expected. We further observe that with increasing margin size, the number of update steps over which this norm preservation survives decreases, though surprisingly not as quickly as expected.
Although the deviation of singular values from one should be slowed by the sigmoidal parameterizations, even parameterizations without a sigmoid (no margin) can be effectively trained for all but the longest sequence lengths. This suggests that the spectrum is not deviating far from orthogonality and that inputs to the hidden to hidden transitions are mostly not aligned along the dimensions of greatest expansion or contraction. We evaluated the spread of the spectrum in all of our experiments and found that indeed, singular values tend to stay well within their prescribed bounds and only reach the margin when using a very large learning rate that does not permit convergence. Furthermore, when transition matrices are initialized as orthogonal, singular values remain near one throughout training even without a sigmoidal margin for tasks that require long term memory (copy, adding, sequential MNIST). On the other hand, singular value distributions tend to drift away from one for PTB character prediction which may help explain why enforcing an orthogonality constraint can be helpful for this task, when modeling long sequences. Interestingly, singular values spread out less for longer sequence lengths (nevertheless, the T=10000 copy task could not be solved with no sigmoid on the spectrum). We visualize the spread of singular values for different model parameterizations on the permuted sequential MNIST task in Figure 3. Curiously, we find that the distribution of singular values tends to shift upward to a mean of approximately 1.05 on both the ordered and permuted sequential MNIST tasks. We note that in those experiments, a tanh transition nonlinearity was used which is contractive in both the forward signal pass and the gradient backward pass. An upward shift in the distribution of singular values of the transition matrix would help compensate for that contraction. Indeed, Saxe et al. (2013) describe this as a possibly good regime for learning in deep neural networks. That the model appears to evolve toward this regime suggests that deviating from it may incur a cost. This is interesting because the cost function cannot take into account numerical issues such as vanishing or exploding gradients (or forward signals); we do not know what could make this deviation costly. That the transition matrix may be compensating for the contraction of the tanh is supported by further experiments: applying a 1.05 pre-activation gain appears to allow a model with a margin of 0 to nearly match the top performance reached on both of the MNIST tasks. Furthermore, when using the OPLU norm-preserving activation function (Chernodub & Nowicki, 2016), we found that orthogonally initialized models performed equally well with all margins, achieving over 90% accuracy on the permuted sequential MNIST task. Unlike orthogonally initialized models, the RNN on the bottom right of Figure 3 with Glorot normal initialized transition matrices begins and ends with a wide singular spectrum. While there is no clear positive shift in the distribution of singular values, the mean value appears to very gradually increase for both the ordered and permuted sequential MNIST tasks. If the model is to be expected to positively shift singular values to compensate for the contractivity of the tanh nonlinearity, it is not doing so well for the Glorot-initialized case; however, this may be due to the inefficiency of training as a result of vanishing gradients, given that initialization.
3.2 EXPLORING SOFT ORTHOGONALITY CONSTRAINTS Having established that it may indeed be useful to step away from orthogonality, here we explore two forms of soft constraints (rather than hard bounds as above) on hidden to hidden transition matrix orthogonality. The first is a simple penalty that directly encourages a transition matrix W to be orthogonal, of the form $\lambda \|W^T W - I\|_2^2$. This is similar to the orthogonality penalty introduced by Henaff et al. (2016). In the first two subfigures on the left of Figure 4, we explore the effect of weakening this form of regularization. We trained both a regular non-factorized RNN on the T = 200 copy task and a factorized RNN with orthogonal bases on the T = 500 copy task. For the regular RNN, we had to reduce the learning rate to 10^-5. Here again we see that weakening the strength of the orthogonality-encouraging penalty can increase convergence speed. The second approach we explore replaces the sigmoidal margin parameterization with a mean one Gaussian prior on the singular values. In the two right subfigures of Figure 4, we visualize the accuracy on the length 200 copy task, using geoSGD (learning rate 10^-6) to keep U and V orthogonal and different strengths of a Gaussian prior with mean one on the singular values. We trained these experiments with regular SGD on the spectrum and other non-orthogonal parameter matrices, using a 10^-5 learning rate. We see that priors which are too strong lead to slow convergence. Loosening the strength of the prior makes the optimization more efficient. Furthermore, we compare a direct parameterization of the spectrum (no sigmoid) in Figure 4 with a sigmoidal parameterization, using a large margin of 1. Without the sigmoidal parameterization, optimization quickly becomes unstable; on the other hand, the optimization also becomes unstable if the prior is removed completely in the sigmoidal formulation (margin 1). These results further motivate the idea that parameterizations that deviate from orthogonality may perform better than purely orthogonal ones, as long as they are sufficiently constrained to avoid instability during training. 4 CONCLUSIONS We have explored a number of methods for controlling the expansivity of gradients during backpropagation based learning in RNNs through manipulating orthogonality constraints and regularization on matrices. Our experiments indicate that while orthogonal initialization may be beneficial, maintaining constraints on orthogonality can be detrimental. Indeed, moving away from hard constraints on matrix orthogonality can help improve optimization convergence rate and model performance. However, we also observe with synthetic tasks that relaxing regularization which encourages the spectral norms of weight matrices to be close to one, or allowing bounds on the spectral norms of weight matrices to be too wide, can reverse these gains and may lead to unstable optimization. ACKNOWLEDGMENTS We thank the Natural Sciences and Engineering Research Council (NSERC) of Canada and Samsung for supporting this research. 5 APPENDIX 5.1 ADDITIONAL FIGURES 5.2 COPY TASK NONLINEARITY We found that nonlinearities such as a rectified linear unit (ReLU) (Nair & Hinton, 2010) or hyperbolic tangent (tanh) made the copy task far more difficult to solve. Using tanh, a short sequence length (T = 100) copy task required both a soft constraint that encourages orthogonality and thousands of epochs for training. It is worth noting that in the unitary evolution recurrent neural network of Arjovsky et al.
(2015), the non-linearity (referred to as the "modReLU") is actually initialized as an identity operation that is free to deviate from identity during training. Furthermore, Henaff et al. (2016) derive a solution mechanism for the copy task that drops the non-linearity from an RNN. To explore this further, we experimented with a parametric leaky ReLU activation function (PReLU) which introduces a trainable slope α for negative valued inputs x, producing f(x) = max(x, 0) + α min(x, 0) (He et al., 2015). Setting the slope α to one would make the PReLU equivalent to an identity function. We experimented with clamping α to 0.5, 0.7 or 1 in a factorized RNN with a spectral margin of 0.3 and found that only the model with α = 1 solved the T = 1000 length copy task. We also experimented with a trainable slope α, initialized to 0.7, and found that it converges to 0.96, further suggesting that the optimal solution for the copy task is without a transition nonlinearity. Since the copy task is purely a memory task, one may imagine that a transition nonlinearity such as a tanh or ReLU may be detrimental to the task as it can lose information. Thus, we also tried a recent activation function that preserves information, called an orthogonal permutation linear unit (OPLU) (Chernodub & Nowicki, 2016). The OPLU preserves norm, making a fully norm-preserving RNN possible. Interestingly, this activation function allowed us to recover identical results on the copy task to those without a nonlinearity for different spectral margins. 5.3 METHOD RUNNING TIME Although the method proposed in Section 2 relies on a matrix inversion, an operation with O(n^3) complexity for an n × n matrix, the running time of an RNN factorized in such a way actually remains reasonable. This running time is summarized in Table 5 and includes all computations in the graph, together with the matrix inversion. As this method is meant to be used only for the analysis in this work, we find the running times acceptable for that purpose. Models were run on an Nvidia GTX-770 GPU against the T = 100 length copy task.
1. What is the focus of the paper regarding RNN optimization? 2. What are the strengths of the proposed approach, particularly in its novelty and significance? 3. What are the weaknesses of the paper, especially regarding the experiment section? 4. Do you have any concerns or questions about the results and their interpretation? 5. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
Review
Review Vanishing and exploding gradients make the optimization of RNNs very challenging. The issue becomes worse on tasks with long term dependencies that require longer RNNs. One of the suggested approaches to improve the optimization is to optimize in a way that the transfer matrix is almost orthogonal. This paper investigates the role of orthogonality on the optimization and learning which is very important. The writing is sound and clear and arguments are easy to follow. The suggested optimization method is very interesting. The main shortcoming of this paper is the experiments which I find very important and I hope authors can update the experiment section significantly. Below I mention some comments on the experiment section: 1- I think the experiments are not enough. At the very least, report the result on the adding problem and language modeling task on Penn Treebank. 2- I understand that the copying task becomes difficult with non-linearity. However, removing non-linearity makes the optimization very different and therefore, it is very hard to conclude anything from the results on the copying task. 3- I was not able to find the number of hidden units used for RNNs in different tasks. 4- Please report the running time of your method in the paper for different numbers of hidden units, compare it with the SGD and mention the NN package you have used. 5- The results on Table 1 and Table 2 might also suggest that the orthogonality is not really helpful since even without a margin, the numbers are very close compared to the case when you find the optimal margin. Am I right? 6- What do we learn from Figure 2? It is left without any discussion.
ICLR
Title Learning to Observe with Reinforcement Learning Abstract We consider a decision making problem where an autonomous agent decides on which actions to take based on the observations it collects from the environment. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity) and the dependence of this on the state of agent (such as at the bottom versus top of a hill). We approach this problem by associating a cost with collecting observations which increases with the accuracy. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. By uncovering the relative usefulness of different types of observations and trade-offs within, these results also provide insights for further design of active data acquisition schemes. 1 INTRODUCTION Autonomous decision making relies on collecting data, i.e. observations, from the environment where the actions are decided based on the observations. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity). 
Revealing this structure is challenging since the usefulness of the information that an observation can bring is a priori unknown and depends on the environment as well as the current knowledge state of the decision-maker, for instance, whether the agent is at the bottom versus the top of a hill and how sure the agent is about its position. Hence, we’re interested in questions such as “Instead of collecting all available observations, is it possible to skip some observations and obtain satisfactory performance?”, “Which observation components (such as the position or the velocity) are the most useful when the object is far away from (or close to) the target state?”. The primary aim of this work is to reveal this information structure of the observation space within a systematic framework. We approach this problem by associating a cost with collecting observations which increases with the accuracy. The agent can choose the accuracy level of its observations. Since cost increases with the accuracy, we expect that the agent will choose to collect only the observations which are most likely to be informative and worth the cost. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. 2 RELATED WORK A related setting is active learning (Settles, 2010; Donmez et al., 2010) where an agent decides which queries to perform, i.e., which samples to take, during training. For instance, in an active learning set-up, an agent learning to classify images can decide which images from a large dataset it would like to have labels for in order to have improved classification performance. In a standard active learning approach (Settles, 2010; Donmez et al., 2010) as well as its extensions in RL (Lopes et al., 2009), the main aim is to reduce the size of the training set, hence the agent tries to determine informative queries during training so that the performance during the test phase is optimal. In the test phase, the agent cannot ask any questions; instead, it will answer questions, for instance, it will be given images to label. In contrast, in our setting the agent continues to perform queries during the test phase, since it still needs to collect observations during the test phase, for instance as in the case of collecting camera images for an autonomous driving application. From this perspective, one of our main aims is to reduce the number of queries the agent performs during this actual operation as opposed to number of queries in its training phase. 
Another related line of work consists of the RL approaches that facilitate efficient exploration of state space, such as curiosity-driven RL and intrinsic motivation (Pathak et al., 2017; Bellemare et al., 2016; Mohamed & Rezende, 2015; Still & Precup, 2012) or active-inference based methods utilizing free-energy (Ueltzhöffer, 2018; Schwöbel et al., 2018); and the works that focus on operation with limited data using a model (Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016). In these works, the focus is either finding informative samples (Pathak et al., 2017) or using a limited number of samples/trials as much as possible by making use of a forward dynamics model (Boedecker et al., 2014; Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016) during the agent’s training. In contrast to these approaches, we would like to decrease the effective size of the data or the number of samples taken during the test phase, i.e. operation of the agent after the training phase is over. Representation learning for control and RL constitutes another line of related work (Watter et al., 2015; Hafner et al., 2019; Banijamali et al., 2018). In these works, the transformation of the observation space to a low-dimensional space is investigated so that action selection can be performed using this low-dimensional space. Similar to these works, our framework can be also interpreted as a transformation of the original observation space where an effectively low-dimensional space is sought after. Instead of allowing a general class of transformations on the observations, here we consider a constrained setting so that only specific operations are allowed, for instance, we allow dropping some of the samples but we do not allow collecting observations and then applying arbitrary transformations on them. Our work associates a cost with obtaining observations. Cost of data acquisition in the context of Markov decision processes (MDPs) has been considered in a number of works, both as a direct cost on the observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002) or as an indirect cost of information sharing in multiple agent settings (Melo & Veloso, 2009; De Hauwere et al., 2010). Another related line of work is performed under the umbrella of configurable MDPs (Metelli et al., 2018; Silva et al., 2019) where the agent can modify the dynamics of the environment. Although in our setting, it is the accuracy of the observations rather than the dynamics of the environment that the agent can modify, in some settings our work can be also interpreted as a configurable MDP. We further discuss this point in Section 4.2. 3 PROPOSED FRAMEWORK AND THE SOLUTION APPROACH 3.1 PRELIMINARIES Consider a Markov decision process given by 〈S,A,P, R, Ps0 , γ〉 where S is the state space, A is the set of actions, P : S ×A× S → R denotes the transition probabilities, R : S ×A → R denotes the bounded reward function, Ps0 : S → R denotes the probability distribution over the initial state and γ ∈ (0, 1] is the discount factor. The agent, i.e. the decision maker, observes the state of the system st at time t and decides on its action at based on its policy π(s, a). The policy mapping of the agent π(s, a) : S × A → [0, 1] is possibly stochastic and gives the probability of taking the action a at the state s. After the agent implements the action at, it receives a reward r(st, at) and the environment moves to the next state st+1 which is governed by P and depends on at and st. 
The aim of the RL agent is to learn an optimal policy mapping $\pi(s, a)$ so that the expected return, i.e. the expected cumulative discounted reward, $J(\pi) = \mathbb{E}_{a_t \sim \pi, s_t \sim \mathcal{P}}\left[\sum_t \gamma^t r(s_t, a_t)\right]$, is maximized. 3.2 PARTIAL OBSERVABILITY Although most RL algorithms are typically expressed in terms of MDPs, in typical real-life applications the states are not directly observable, i.e., the observations only provide partial, possibly inaccurate information. For instance, consider a vehicle which uses the noisy images with limited angle-of-view obtained from cameras mounted on the vehicle for autonomous-driving decisions. In such scenarios, the data used by the agent to make decisions is not a direct representation of the state of the world. Hence, we consider a partially observable Markov decision process (POMDP) where the above MDP is augmented by $\mathcal{O}$ and $P_o$, where $\mathcal{O}$ represents the set of observations and $P_o : \mathcal{S} \to \mathcal{O}$ represents the observation probabilities. Accordingly, the policy mapping is now expressed as $\pi(o, a) : \mathcal{O} \times \mathcal{A} \to [0, 1]$. The observation vector at time $t$ is given by $o_t = [o_t^1; \ldots; o_t^n] \in \mathbb{R}^n$, where $n$ is the dimension of the observation vector. The observations are governed by $o_t \sim p_o(o_t \mid s_t; \beta_t)$ (1), where $p_o(o_t \mid s_t; \beta_t)$ denotes the conditional probability distribution function (pdf) of $o_t$ given $s_t$ and is parametrized by the accuracy vector $\beta_t = [\beta_t^1; \ldots; \beta_t^n] \in \mathbb{R}^n$ (2). The parameter $\beta_t^i \ge 0$ represents the average accuracy of the observation component $i$ at time step $t$, i.e. $o_t^i$. For instance, say we have two observations, position $o^1$ and velocity $o^2$. Then, $\beta_t^1$ denotes the accuracy of the position and $\beta_t^2$ denotes the accuracy of the velocity. As $\beta_t^i$ increases, the accuracy of the observation $o_t^i$ decreases. Given $s_t$ and $\beta_t$, the observations are statistically independent, i.e. we have the factorization $p_o(o_t \mid s_t; \beta_t) = \prod_{i=1,\ldots,n} p_{o^i}(o_t^i \mid s_t; \beta_t^i)$ (3), where $p_{o^i}(o_t^i \mid s_t; \beta_t^i)$ denotes the conditional pdf of $o_t^i$ given $s_t$ and $\beta_t^i$. Note that $\beta_t^i$ determines the average accuracy, i.e. the accuracy in the statistical sense. We provide an example below. Example: Consider the common Gaussian additive noise model with $o_t^i = s_t^i + v_t^i$, $i = 1, \ldots, n$, (4) where $s_t = [s_t^1; \ldots; s_t^n] \in \mathbb{R}^n$ is the state vector and $v_t = [v_t^1; \ldots; v_t^n] \in \mathbb{R}^n$ is the Gaussian noise vector with $\mathcal{N}(0, \mathrm{diag}(\sigma_{v_t^i}^2))$. Here, $v_t$ and $v_{t'}$ are statistically independent (stat. ind.) for all $t \neq t'$, and also $v_t$ and $s_{t'}$ are stat. ind. for all $t, t'$. Under this observation model, a reasonable choice for $\beta_t^i$ is $\beta_t^i = \sigma_{v_t^i}^2$. Hence, we parametrize $p_o^i(\cdot)$ as $p_o^i(o_t^i \mid s_t^i; \beta_t^i) = \mathcal{N}(s_t^i, \beta_t^i = \sigma_{v_t^i}^2)$. Note that the parametrization in terms of $\beta_t^i$ can be done in multiple ways; for instance, one may also adopt $\beta_t^i = \sigma_{v_t^i}$. 3.3 DECISION MAKER CHOOSES THE ACCURACY OF THE OBSERVATIONS The agent can choose $\beta_t^i$, hence $\beta_t^i$ is a decision variable. Observations have a cost which increases with increasing accuracy, i.e. the cost increases with decreasing $\beta_t^i$. • In Scenario A, the agent can vary $\beta_t^i$ on a continuous scale, i.e. $\beta_t^i \in [0, \infty]$. • In Scenario B, the agent chooses between i) collecting all the observations with a fixed level of accuracy or ii) not getting any of them at all. This setting corresponds to the case with $\beta_t = \bar{\beta}_t \mathbf{1}$, $\bar{\beta}_t \in \{\beta_F, \infty\}$, where $\mathbf{1} \in \mathbb{R}^n$ denotes the vector of ones. Here $\beta_F \ge 0$ represents a fixed accuracy level. Note that $\beta_F$ can be zero, corresponding to the case $o_t = s_t$. Remark 3.1 Our proposed setting can be interpreted as a constrained representation learning problem for RL.
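As a concrete illustration of the parametrized observation model in (1)–(4) under the Gaussian example, the following minimal sketch (our own illustrative code, not part of the original environments) samples observations with one accuracy parameter per component; Remark 3.1 is elaborated next.

```python
# Minimal sketch of the observation model (1)-(4) under the Gaussian example:
# o_t^i = s_t^i + v_t^i with v_t^i ~ N(0, beta_t^i), i.e. beta_t^i = sigma^2_{v_t^i}.
# A larger beta_t^i means a less accurate component; beta is chosen by the agent.
import numpy as np

def observe(state, beta, rng=None):
    """Sample o_t ~ p_o(. | s_t; beta_t) for the additive Gaussian model."""
    rng = rng or np.random.default_rng()
    state = np.asarray(state, dtype=float)
    beta = np.asarray(beta, dtype=float)              # per-component noise variances
    return state + rng.normal(0.0, np.sqrt(beta), size=state.shape)

s_t = np.array([-0.5, 0.0])                           # e.g., position and velocity
print(observe(s_t, beta=[0.0, 0.0]))                  # beta = 0: o_t = s_t (perfect observation)
print(observe(s_t, beta=[0.04, 1e-4]))                # Scenario A: per-component accuracy levels
# Scenario B instead restricts beta_t = beta_bar * 1 with beta_bar in {beta_F, infinity}:
# either the whole vector is observed at a fixed accuracy or no observation is taken.
```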
In particular, consider the problem of learning the best mapping $h(\cdot)$ with $z_t = h(\bar{o}_t)$ (5) from the high-dimensional original observations $\bar{o}_t$ to some new, possibly low-dimensional variables $z_t$ so that control can be performed reliably on $z_t$ instead of $\bar{o}_t$. Such settings have been utilized in various influential works; see, for instance, the E2C approach of Watter et al. (2015). The proposed approach can also be formulated in a representation framework. In particular, we interpret the possibly noisy observations $o_t$ as the effectively low-dimensional representation $z_t$ used in (5). Hence, consider the mapping $\bar{h}(\cdot)$ with $o_t = \bar{h}(\bar{o}_t)$, (6) where $o_t$ and $\bar{o}_t$ denote the noisy and the original measurements, respectively. Compared to (5), the family of mappings allowed in (6) is constrained, i.e. one can only adjust the accuracy parameter instead of using arbitrary transformations from $\bar{o}_t$ to $o_t$. Here, $o_t$ is effectively low-dimensional compared to $\bar{o}_t$ because i) noise decreases the dynamic range and allows effectively higher compression rates for the data (Scenario A); or ii) the total number of observations acquired is smaller (Scenario B). Note that not all transformations from $s_t$ to $o_t$ can be written using (6) as an intermediate step. From this perspective, the formulation in (1) can be said to be more general than (6). 3.3.1 MOTIVATION The primary motivation behind the proposed framework is to reveal the inherent nature of the observation space in terms of the usefulness of the information the observations provide with respect to the task at hand. The secondary motivation is to provide an RL framework for solving decision making problems when the observations have a cost associated with them. In regard to the first task, we note the following: to reveal this information structure, we associate an artificial cost with the observations that increases with the accuracy. Hence, only the observation components (or the observation vectors) which are most likely to be informative and worth the cost will be collected. This decision heavily depends on the state that the agent believes itself to be in. For instance, in the case of balancing an object at an unstable state (such as the pendulum in OpenAI Gym (Brockman et al., 2016)), we intuitively expect that the agent does not need accurate measurements when it is far away from the target state. Hence, we are interested in questions such as "Is it possible to skip some observations and obtain satisfactory performance?", "Which observation components (such as the position or the velocity) are most useful when the object is far away from (or close to) the target state?", and "How are these results affected by the possible discrepancy between the true state the agent is in and the one that it believes itself to be in due to noisy or skipped observations?". The proposed framework reveals this information structure within a systematic setting. In regard to the second task, we note that there are many practical problems where there is a cost associated with acquiring observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002); for instance, consider the expensive medical tests (i.e. observations) that have to be performed to diagnose a certain disease (Zubek & Dietterich, 2002), and wireless communications, where there is a cost associated with channel usage (i.e. the right to use a communication channel) and a power cost that increases with the reliability of communications (Goldsmith, 2005; Cover & Thomas, 1991); see also Section A.1.
The proposed framework can be used to find efficient observation strategies in such problems and to quantify the possible performance degradation due to the observation cost. Examples: The proposed Scenarios A and B also correspond to practical data acquisition schemes. We now give some examples. An example for Scenario A is the case where the observations are obtained using different sensors on the device, where the accuracy of each sensor can be individually adjusted. Another example is the case where the sensors are distributed over the environment and the readings of the sensors have to be relayed to a central decision unit using individual compression of each observation type and wireless communications. Here, the compression and the wireless communication introduce an accuracy-cost trade-off, and the agent can choose to operate at different points of this trade-off. Please see Section A.1 for an example illustrating the accuracy-cost trade-off in wireless communications. An example for Scenario B is the remote control of a device, such as a drone, where all sensor readings of the device are compressed together and then sent to a decision unit. Since all readings are compressed and transmitted together, a decision of whether or not to transmit the whole observation vector has to be made, for instance due to the limited power or wireless channel occupancy constraints. 3.4 REWARD SHAPING Reward shaping is a popular approach to direct RL agents towards a desired goal. Here, we want the agent not only to move towards the original goal (which is encouraged by the original reward $r$) but also to learn to control $\beta_t$. Hence, we propose reward shaping in the following form: $\tilde{r}_t = f(r_t, \beta_t)$ (7), where $r_t$ is the original reward, $\tilde{r}_t$ is the new modified reward and $f(r_t, \beta_t)$ is a monotonically non-decreasing function of $r_t$ and of $\beta_t^i$, $\forall i$. Hence, the agent not only tries to maximize the average of the original reward but also tries to maximize the "inaccuracy" of the measurements. This can be equivalently interpreted as minimizing the cost due to accurate measurements. In the case where there is a direct cost function $c^i(\cdot)$ that increases with the accuracy of the observation $o^i$ (see, for instance, the example in Section A.1 where transmission power can be interpreted as the direct cost), the following additive form can be used: $\tilde{r}_t = r_t - \lambda \sum_{i=1}^{n} c^i(\beta_t^i)$, (8) where $c^i(\beta_t^i)$ is a non-increasing function of $\beta_t^i$ and $\lambda \ge 0$ is a weighting parameter. Hence, the agent's aim is to maximize the original reward as well as to minimize the cost of the observations. 4 EXPERIMENTS 4.1 SETTING Observation Models: We consider the following environments from OpenAI Gym (Brockman et al., 2016): MountainCarContinuous-v0, Pendulum-v0, CartPole-v1. In this section, we illustrate how the modified environment with noisy observations is obtained for MountainCarContinuous-v0. The details and the parameter values for the other environments can be found in Appendix A.2. We also consider a version of MountainCarContinuous-v0 with observations of the vertical position, which is presented in Section A.4. We first explain Scenario A, and then Scenario B. The original observations of the mountain car environment are the position $x_t$ and the velocity $\dot{x}_t$. In our framework, the agent has access to noisy versions of these original observations: $\tilde{x}_t = x_t + Q_x \times \Delta x_t(\beta_t^1)$, (9a) $\tilde{\dot{x}}_t = \dot{x}_t + Q_{\dot{x}} \times \Delta \dot{x}_t(\beta_t^2)$, (9b) where $\Delta x_t(\beta_t^1) \sim U(-\beta_t^1, \beta_t^1)$, $\Delta \dot{x}_t(\beta_t^2) \sim U(-\beta_t^2, \beta_t^2)$ and $U(-\beta, \beta)$ denotes the uniform distribution over $[-\beta, \beta]$.
The noise variables are statistically independent; in particular, $\Delta x_t(\beta_t^1)$ and $\Delta \dot{x}_t(\beta_t^2)$ are stat. ind. from each other and also stat. ind. over time. Here, $Q_x$ and $Q_{\dot{x}}$ determine the ranges of the noise levels and are set to 0.1 times the full range of the corresponding observation, i.e., $Q_x = 0.18$ and $Q_{\dot{x}} = 0.014$. Our agent chooses $\beta_t^i \in [0, 1]$ in addition to the original action of the environment, i.e. the force $a_t$ that would be exerted on the car. The original reward of the environment per step is given by $r_t = -0.1 \times a_t^2$. The reward is shaped using an additive model $\tilde{r}_t = r_t + \kappa_A \times \left(\frac{1}{n}\sum_{i=1}^{n} \beta_t^i\right)$, (10) where $n = 2$ and $\kappa_A > 0$ is chosen as $5 \times 10^{-6}$. The original environment also has a termination reward, which the agent receives when the car passes the target position at 0.45; this reward is also provided to our agent upon successful termination. In Scenario B, at each time instant we either have no observation or we obtain the original observation vector, i.e. $\tilde{x}_t = x_t$ and $\tilde{\dot{x}}_t = \dot{x}_t$. These cases correspond to $\bar{\beta}_t = \infty$ and $\bar{\beta}_t = 0$, respectively. The reward function is given as $\tilde{r}_t = r_t + \kappa_B \times g(\bar{\beta}_t)$, where $\kappa_B = 0.5$, and $g(\bar{\beta}_t) = -1$ for $\bar{\beta}_t = 0$, and $0$ otherwise. In the implementation, we have mapped $\infty$ to 1, i.e. the decision variable is $\bar{\beta}_t \in \{0, 1\}$; hence $\bar{\beta}_t = 1$ corresponds to not obtaining a sample in Scenario B. RL algorithm: We adopt a deep RL setting, combining reinforcement learning with deep learning using the policy-based approach Trust Region Policy Optimization (TRPO) (Schulman et al., 2015; Hill et al., 2018). The parameters are kept constant for all experiments and are provided in Appendix A.3. For Scenario A, at each time step, the noisy observations obtained at that time step are fed to the algorithm as the observations. For Scenario B, the last acquired observation is fed to the algorithm as the observation at that time step. Plots: Unless otherwise stated, all results are reported as averages (such as average cumulative rewards and average $\beta_t^i$) over 1000 episodes. For the plots, the observation space is mapped to a grid with uniform intervals. Averages are taken with respect to the number of visits to each given range of the observation state. For example, for Scenario A the average of $\beta_t^i$ when $\tilde{x}_t \in [-0.1, +0.1]$ is shown as one average value at the center 0. For Scenario B, we report the sample skip frequency, i.e. the number of times the agent decided not to acquire a new observation when the last observed state of the agent falls into a given interval; for example, the average sample skip frequency for $\tilde{x} \in [-0.1, +0.1]$ is reported as one value at 0. In all 2-D plots, the color pink indicates that there was no visit to that observation state. 4.2 OVERVIEW We benchmark our results against the performance of the agent that uses the original observations and is trained using the same RL algorithm. The resulting average cumulative rewards in terms of $r_t$ are presented in Table 1. We present the reward corresponding only to the original task so that we can evaluate the success of the agent in this task. These results illustrate that the agent can learn to adjust the accuracy level and still obtain successful performance. For the mountain car environment, all agents have the same average return; for the others, the agents working with the noisy/skipped observations have a slightly weaker performance but still achieve the task of bringing/keeping the pendulum/pole in a vertical position in a reasonable number of time steps.
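For concreteness, the following is a minimal sketch of how the Scenario A modification of MountainCarContinuous-v0 described above could be implemented as a Gym wrapper; the class name and structure are ours, the constants mirror those reported above ($Q_x = 0.18$, $Q_{\dot{x}} = 0.014$, $\kappa_A = 5 \times 10^{-6}$), and the classic OpenAI Gym API is assumed.

```python
# Minimal sketch of the Scenario A modification of MountainCarContinuous-v0
# described above: the action is augmented with (beta^1, beta^2) in [0, 1]^2,
# the observations are corrupted as in (9a)-(9b) and the reward is shaped as in (10).
# Classic Gym API assumed; the wrapper name and structure are illustrative.
import gym
import numpy as np
from gym import spaces

class NoisyObservationWrapper(gym.Wrapper):
    def __init__(self, env, Q=(0.18, 0.014), kappa_A=5e-6):
        super().__init__(env)
        self.Q = np.asarray(Q)          # Q_x, Q_xdot: 0.1 x full range of each observation
        self.kappa_A = kappa_A
        # Original action (force) plus one accuracy decision per observation component.
        low = np.concatenate([env.action_space.low, np.zeros(2)])
        high = np.concatenate([env.action_space.high, np.ones(2)])
        self.action_space = spaces.Box(low=low, high=high, dtype=np.float32)

    def step(self, action):
        force, beta = action[:1], np.clip(action[1:], 0.0, 1.0)
        obs, reward, done, info = self.env.step(force)
        noise = self.Q * np.random.uniform(-beta, beta)        # eq. (9a)-(9b)
        shaped_reward = reward + self.kappa_A * beta.mean()    # eq. (10)
        return obs + noise, shaped_reward, done, info

env = NoisyObservationWrapper(gym.make("MountainCarContinuous-v0"))
obs = env.reset()                                   # reset left unmodified for brevity
obs, r, done, _ = env.step(env.action_space.sample())
```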
At first sight, it may be surprising that the agent can learn to perform these tasks satisfactorily even though we have not injected any memory into our algorithm, for instance when we only use the current noisy observations in Scenario A. On the other hand, note that in these environments the observations are either noisy versions of hidden states which govern the dynamics or are closely related to them. From the point of view of the agent that treats the noisy observations as the state, this can be interpreted as a configurable MDP (Metelli et al., 2018; Silva et al., 2019) where the agent controls the noise of the dynamics. Hence, the task of the agent can be interpreted as adjusting the noise level in the dynamics, which does not necessarily require the use of memory in the decision maker. We now focus on the data collection strategies chosen by the agent for the mountain car and pendulum environments. The results for the other environments are provided in the appendix. 4.3 MOUNTAIN CAR The chosen noise levels and the sample skip frequencies for the mountain car environment are presented in Figures 1-2. Note that in Figure 1c, we present the sample skip frequency with respect to the velocity and the position on the same plot, where the legend also gives the corresponding x-axis label. In the mountain car environment, the car starts randomly around position −0.5 and it has to first go in the reverse direction (corresponding to a negative velocity) to climb the hill located around position −1.25 in order to gain momentum, climb the hill on the right (corresponding to a positive velocity) and reach the target location 0.45, which is at the top of this hill. The results reflect some of the trade-offs in this strategy: Figure 1a shows that the noisiest observations of position and velocity (Scenario A) are preferred around −0.5 (where the car position is initialized), and the most accurate samples are taken when the car is around position −1.2. This is the position where the car has to make sure that it has reached the top of the left hill so that it has enough momentum to climb the right hill. Regarding the dependence of the noise level on the velocity, Figure 1b shows that accurate samples are preferred when the velocity has high positive values. We note that this is not the only viable observation strategy; there are multiple observation strategies that give approximately the same average return in the original task. These can be explored using different Q and κ values in our framework. Figure 1c shows that approximately half of the samples are dropped in Scenario B regardless of the observation state, suggesting a high inherent sampling rate in the environment. This difference in the behaviour with the noisy and skipped observations illustrates the fundamental difference between these frameworks. In the case of noisy observations, the agent has to discover that the observations are uncertain and counteract this uncertainty. On the other hand, when taking perfect observations is possible, as in Scenario B, the agent can internalize the exact environment dynamics (since the mountain car environment has no inherent noise in its observations) and determine its exact state using the previously observed state and its action. Comparing Figures 2a-2b with Figure 2c, we observe that in the case of noisy observations a larger part of the observation space is visited, which is partly due to the fact that the plots are drawn according to the observations acquired by the agent and not the true states.
Note that this does not affect the performance in the original task, as illustrated in Table 1. 4.4 PENDULUM The results for the pendulum are presented in Figures 3-4. Here, the task is to keep the pendulum at a vertical position, corresponding to an angle of 0. Figure 3a and Figure 4a show that observations with low position (i.e. angle) noise (Scenario A) are preferred when the pendulum is close to the vertical position and has relatively small angular velocity. On the other hand, when the samples can be completely skipped (Scenario B), the agent skips a large ratio of the samples in this region, as shown in Figure 3c and Figure 4c. Note that the agent spends most of the episode in this target region around the vertical position. Here, the agent prefers noiseless samples since a noisy sample may cause the control policy to choose a wild movement which might destabilize the pendulum. On the other hand, the agent may safely skip some of the samples at the upright position since the last sample is very close to the current one because the angular velocity is typically low. 5 DISCUSSION AND CONCLUSIONS We have proposed a framework for revealing the information structure of the observation space in a systematic manner. We have adopted a reinforcement learning approach which utilizes a cost function that increases with the accuracy of the observations. Our results uncover the relative usefulness of different types of observations and the trade-offs within, and provide insights for further design of active data acquisition schemes for autonomous decision making. Further discussion of our results and some research directions are as follows: • Our results illustrate that settings with inaccurate observations and with skipped observations should be treated differently, since the type of uncertainty that the agent has to counteract in these settings is inherently different. • Strategies for processing the noisy/skipped observations should be investigated. Questions such as the following arise: "Should all the processing be off-loaded to the RL agent, or should pre-processing of observations be performed, similar to Kalman filtering in the case of linear control under linear state space models (Ljung, 1999)?", "How does the answer to the former question depend on the RL approach, the environment and the observation models?" • Our results suggest that the inherent sampling rate of some of the standard RL environments may be higher than needed (for instance, see the mountain car environment, where on average one can skip one out of every two samples without affecting the performance), indicating yet another reason why some of these environments are seen as unchallenging for most of the state-of-the-art RL algorithms. • We have provided a quantification of the sensitivity of the agent's performance to noisy/skipped observations at different observation regions, illustrating that this sensitivity can be quite different depending on the observation region. Utilizing this information for supporting robust designs as well as for preparing adversarial examples is an interesting line of future research. A APPENDIX A.1 EXAMPLE: WIRELESS COMMUNICATIONS We now provide a motivating example to illustrate how observations can have a cost that increases with the accuracy and how the decision maker can choose this accuracy level.
A standard model for single-terminal wireless communications is the additive white Gaussian noise (AWGN) channel (Goldsmith, 2005; Cover & Thomas, 1991) $y_t = x_t + v_t$ (11), where $x_t$ represents the channel input (i.e. the message at the transmitter) at time $t$, $y_t$ represents the corresponding channel output (i.e. the observation at the receiver) and the white Gaussian random process $v_t$ represents the channel noise. The capacity of this channel, i.e. the maximum number of information bits that can be sent, is determined by the signal-to-noise ratio (SNR), i.e. the average power in $x_t$ divided by the average power in $v_t$. In particular, the capacity is given by (Goldsmith, 2005; Cover & Thomas, 1991) $C = \log_2\left(1 + \frac{P_x}{P_v}\right)$ (12), where $P_x$ and $P_v$ are the average power levels of $x_t$ and $v_t$, respectively. Hence, the capacity increases with $P_x$. On the other hand, one cannot use a very high value of $P_x$ since broadcasting at high power levels is costly. In particular, $P_x$ directly contributes to the actual power required by the transmitter. Note that $P_x$ controls the accuracy of the observations. In particular, by dividing both sides by $\sqrt{P_x}$, (11) can be equivalently represented as $\bar{y}_t = \bar{x}_t + \bar{v}_t$ (13), where $\bar{y}_t \triangleq \frac{1}{\sqrt{P_x}} y_t$, $\bar{x}_t \triangleq \frac{1}{\sqrt{P_x}} x_t$ and $\bar{v}_t \triangleq \frac{1}{\sqrt{P_x}} v_t$. The average power of $\bar{x}_t$ is 1 and the average power of $\bar{v}_t$ is $P_v/P_x$. The SNR, and hence the channel capacity, are the same in (11) and (13); hence these representations are equivalent for all relevant purposes. In particular, determining $P_x$ directly determines the effective noise level. With $v_t$ Gaussian, we have $v_t \sim \mathcal{N}(0, P_v)$. Hence, the conditional distribution of the observations $\bar{y}_t$ is given by $p(\bar{y}_t \mid \bar{x}_t) = \mathcal{N}(\bar{x}_t, P_v/P_x)$, where $P_v/P_x$ can be chosen as $\beta_t$. Hence, as the accuracy of the observations increases ($P_v/P_x$ decreases), the cost of the observations ($P_x$) increases. In this context, several interesting questions that relate to the accuracy of the observations and the power cost can be posed, for instance how to distribute a certain total power budget $P_{\mathrm{total}}$ over channels $y_t^i = x_t^i + v_t^i$ with different intrinsic noise power levels $P_{v^i}$. This example illustrates the basic premise of our problem setting in a practical scenario: a decision maker who can adjust the noise levels of the observations, which have a cost associated with them. It also suggests that the constraints on wireless communications constitute a general, potential hindrance in remote control applications. Consider a device that makes the observations and takes actions but gets its commands (i.e. decisions about which actions to take) from another decision unit, such as the control of a robot or a drone by a remotely run RL algorithm which is controlling a large number of such units. Here, it is beneficial to consider policies that can work with inaccurate observations, since sending accurate measurements is costly from a power perspective; this will be particularly important for a device with a limited battery, such as a drone flying at a remote location. Similarly, if the wireless communication channel cannot be used at all times, for instance due to the limited bandwidth available, RL methods that can utilize the limited communication resources efficiently and optimize performance under such conditions are needed. A.2 ENVIRONMENT PARAMETERS In this section, we provide the parameters for all the environments in the experiments that are used directly from OpenAI Gym. We also consider a vertical-position version of MountainCarContinuous-v0, which is explained in Section A.4.
Consider a generic environment with the observation variables $o_t^i$, where $o_t^i$ denotes the $i$th observation variable at time $t$. The limited-accuracy observations $\tilde{o}_t^i$ are obtained using $\tilde{o}_t^i = o_t^i + Q^i \times \Delta o_t^i(\beta_t^i)$ (14), where $\Delta o_t^i(\beta_t^i) \sim U(-\beta_t^i, \beta_t^i)$. We choose $Q^1 = 0.1$ and $Q^2 = 0.2$ for Pendulum-v0, $Q^i = 0.2$ for CartPole-v1, and $Q^i = 0.1$ for MountainCarContinuous-v0. The ordering of the observations is the same as the one provided in OpenAI Gym (Brockman et al., 2016). For instance, for MountainCarContinuous-v0, position and velocity correspond to $o^1$ and $o^2$, respectively. Note that indices start with $i = 0$ in OpenAI Gym whereas here we start with $i = 1$. The reward function under Scenario A is given by $\tilde{r}_t = r_t + \kappa_A \times \left(\frac{1}{n}\sum_{i=1}^{n} \beta_t^i\right)$, (15) where $r_t$ is the original reward and $\kappa_A > 0$. For Scenario B, it is given by $\tilde{r}_t = r_t + \kappa_B \times g(\bar{\beta}_t)$, where $g(\bar{\beta}_t) = -1$ for $\bar{\beta}_t = 0$, and $0$ otherwise. The associated $\kappa$ values for the different environments are presented in Table 2. The scaling factors $Q$ for the noise levels and the $\kappa$ values for the reward function are determined empirically by first fixing $Q$ (as a percentage of the full range of the associated observation) and then searching for $\kappa$ values that provide satisfactory performance in the original task. Note that the rest of the values are determined by the specifications of the environments in OpenAI Gym. The results depend on the values of $Q$ and $\kappa$. For instance, using a larger $\kappa$ puts a larger weight on the reward due to noise. Hence, the agent prioritizes the reward due to noise instead of the reward from the original environment and, for large enough $\kappa$ values, the agent cannot learn to perform the original task. A.3 TRPO PARAMETERS The same TRPO parameters are used in all experiments. These are provided in Table 3. A.4 MOUNTAIN CAR WITH OBSERVATIONS OF THE VERTICAL POSITION To have a better understanding of the effect of partial observability, we have investigated the following modification of MountainCarContinuous-v0: instead of the horizontal position, the agent uses the vertical position as the observation. Hence, the observations are given by $\tilde{y}_t = y_t + Q_y \times \Delta y_t(\beta_t^1)$, (16a) $\tilde{\dot{x}}_t = \dot{x}_t + Q_{\dot{x}} \times \Delta \dot{x}_t(\beta_t^2)$, (16b) where the vertical position $y_t \in [0.1, 1]$ is given by $y_t = 0.45 \sin(3 x_t) + 0.55$ (Brockman et al., 2016), and $\Delta y_t(\beta_t^1) \sim U(-\beta_t^1, \beta_t^1)$ and $\Delta \dot{x}_t(\beta_t^2) \sim U(-\beta_t^2, \beta_t^2)$. Note that due to the $\sin(\cdot)$ function, for most of the $y_t$ values in the range $[0.1, 1]$ there are two possible horizontal position ($x_t$) values. Hence, this environment constitutes a POMDP even without any observation noise. Similar to our experiments with the original environment, $Q_y$ and $Q_{\dot{x}}$ are set to 0.1 times the full range of the corresponding observation, i.e., $Q_y = 0.09$ and $Q_{\dot{x}} = 0.014$. As before, the reward is calculated with (10) with $\kappa_A = 5 \times 10^{-6}$. The average return due to the original task is 93; hence the agent again learns to perform the original task successfully (see Table 1 for comparison). The chosen noise levels are presented in Figures 5-6. Comparing these results with Figures 1-2, where the agent observes the horizontal position, we observe that the general trend of the velocity noise with respect to the velocity is the same in both settings, i.e. decreasing as the agent moves from negative to positive velocities. Comparing Figure 5 with Figure 1, we observe that lower relative noise levels are preferred for the setting with the vertical position observations.
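For concreteness, a minimal sketch of the vertical-position observation in (16) is given below; the function name is ours and the constants are those reported above.

```python
# Minimal sketch of the vertical-position observation of Section A.4: the agent
# observes y_t = 0.45*sin(3*x_t) + 0.55 plus uniform noise as in (16a), together
# with the noisy velocity as in (16b); the function name is illustrative.
import numpy as np

def vertical_observation(x, x_dot, beta, Q_y=0.09, Q_xdot=0.014, rng=None):
    rng = rng or np.random.default_rng()
    y = 0.45 * np.sin(3.0 * x) + 0.55                              # vertical position in [0.1, 1]
    y_tilde = y + Q_y * rng.uniform(-beta[0], beta[0])             # eq. (16a)
    x_dot_tilde = x_dot + Q_xdot * rng.uniform(-beta[1], beta[1])  # eq. (16b)
    return np.array([y_tilde, x_dot_tilde])

# Two distinct horizontal positions can map to the same vertical position, which is
# why this setting is a POMDP even without observation noise:
print(0.45 * np.sin(3 * (-0.2)) + 0.55)      # approx 0.296
print(0.45 * np.sin(3 * (-0.847)) + 0.55)    # approx 0.296 as well
```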
A.5 ADDITIONAL RESULTS - CART POLE We now provide the results for the cart pole environment in Figures 7-10, which were not included in the main text due to page limitations. For the sake of brevity, the noise levels over observation pairs are only provided for the position noise levels, whereas averages are provided for all observation types. [Figures: (a) Scenario A, noise vs angular velocity; (b) Scenario B, sampling frequency vs position and velocity; (c) Scenario B, sampling frequency vs angle and angular frequency.]
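For completeness, the following is a minimal sketch of the binned averaging described in the "Plots" paragraph of Section 4.1 and used for the noise-level and skip-frequency curves in the figures; the function and the placeholder data are ours.

```python
# Minimal sketch of the binned averaging used for the plots (Section 4.1, "Plots"):
# an observation coordinate is mapped to a uniform grid and the chosen noise level
# (or skip indicator) is averaged over the visits to each bin; names and the
# placeholder data below are illustrative.
import numpy as np

def binned_average(coord, values, low, high, n_bins=20):
    """Average `values` (e.g., beta_t^i or skip decisions) per uniform bin of `coord`."""
    edges = np.linspace(low, high, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(coord, edges) - 1, 0, n_bins - 1)
    avg = np.full(n_bins, np.nan)                 # NaN marks bins with no visits
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            avg[b] = values[mask].mean()
    return centers, avg

# e.g., average position-noise level versus observed position over logged episodes:
positions = np.random.uniform(-1.2, 0.6, size=5000)   # placeholder logged observations
betas = np.random.uniform(0.0, 1.0, size=5000)        # placeholder chosen noise levels
centers, avg_beta = binned_average(positions, betas, low=-1.2, high=0.6)
```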
1. What is the focus of the paper regarding incorporating observation costs into RL control problems? 2. What are the strengths and weaknesses of the proposed approach, particularly in its contribution to the scientific community? 3. Are there any concerns or questions regarding the experimental setup and the use of standard algorithms and test problems? 4. How does the reviewer assess the clarity and quality of the paper's content? 5. Are there any suggestions for improving the paper, such as citing relevant works or providing more detail in certain areas?
Review
Review The paper shows how to incorporate an observation cost into RL control problems to assess the inherent value of information in different domains. I found the paper fun, and well written/edited. However, I don't see much of a scientific contribution here. The paper says its aim is to reveal the information structure in the observation space within a systematic framework. So, it's essentially a kind of "ML for scientific visualization" paper. The ML novelty appears small---standard algorithms and test problems are used. The paper isn't really evaluated from a scientific visualization perspective, so it's not clear that it is over the bar from that perspective. The light shed on some standard test problems ("decisions aren't that impactful when the pole is almost balanced", etc.) is nice, but not really impactful. Detailed comments: Related work: I think it would be appropriate to cite Valentina Bayer's "cost sensitive learning" work. I think there's also a "cost observable MDP" model that is very related. The earlier work isn't able to solve these problems as well as the current paper, but the model is very related. "towards to" -> "towards" "maximize average" -> "maximize the average"? Table 1: Use right justification for easier visual comparison. I'm confused about the state used in the experiments. It's a POMDP, so was there a recurrent network used? Were multiple steps available in the state representation? How were the RL algorithms able to represent and learn the strategy? "For Mountain car environment all" -> "For the mountain car environment, all" "following rise" -> "following arise" "the agents performance" -> "the agents' performance"?
ICLR
Title Learning to Observe with Reinforcement Learning Abstract We consider a decision making problem where an autonomous agent decides on which actions to take based on the observations it collects from the environment. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity) and the dependence of this on the state of agent (such as at the bottom versus top of a hill). We approach this problem by associating a cost with collecting observations which increases with the accuracy. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. By uncovering the relative usefulness of different types of observations and trade-offs within, these results also provide insights for further design of active data acquisition schemes. 1 INTRODUCTION Autonomous decision making relies on collecting data, i.e. observations, from the environment where the actions are decided based on the observations. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity).
Revealing this structure is challenging since the usefulness of the information that an observation can bring is a priori unknown and depends on the environment as well as the current knowledge state of the decision-maker, for instance, whether the agent is at the bottom versus the top of a hill and how sure the agent is about its position. Hence, we’re interested in questions such as “Instead of collecting all available observations, is it possible to skip some observations and obtain satisfactory performance?”, “Which observation components (such as the position or the velocity) are the most useful when the object is far away from (or close to) the target state?”. The primary aim of this work is to reveal this information structure of the observation space within a systematic framework. We approach this problem by associating a cost with collecting observations which increases with the accuracy. The agent can choose the accuracy level of its observations. Since cost increases with the accuracy, we expect that the agent will choose to collect only the observations which are most likely to be informative and worth the cost. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. 2 RELATED WORK A related setting is active learning (Settles, 2010; Donmez et al., 2010) where an agent decides which queries to perform, i.e., which samples to take, during training. For instance, in an active learning set-up, an agent learning to classify images can decide which images from a large dataset it would like to have labels for in order to have improved classification performance. In a standard active learning approach (Settles, 2010; Donmez et al., 2010) as well as its extensions in RL (Lopes et al., 2009), the main aim is to reduce the size of the training set, hence the agent tries to determine informative queries during training so that the performance during the test phase is optimal. In the test phase, the agent cannot ask any questions; instead, it will answer questions, for instance, it will be given images to label. In contrast, in our setting the agent continues to perform queries during the test phase, since it still needs to collect observations during the test phase, for instance as in the case of collecting camera images for an autonomous driving application. From this perspective, one of our main aims is to reduce the number of queries the agent performs during this actual operation as opposed to number of queries in its training phase. 
Another related line of work consists of the RL approaches that facilitate efficient exploration of state space, such as curiosity-driven RL and intrinsic motivation (Pathak et al., 2017; Bellemare et al., 2016; Mohamed & Rezende, 2015; Still & Precup, 2012) or active-inference based methods utilizing free-energy (Ueltzhöffer, 2018; Schwöbel et al., 2018); and the works that focus on operation with limited data using a model (Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016). In these works, the focus is either finding informative samples (Pathak et al., 2017) or using a limited number of samples/trials as much as possible by making use of a forward dynamics model (Boedecker et al., 2014; Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016) during the agent’s training. In contrast to these approaches, we would like to decrease the effective size of the data or the number of samples taken during the test phase, i.e. operation of the agent after the training phase is over. Representation learning for control and RL constitutes another line of related work (Watter et al., 2015; Hafner et al., 2019; Banijamali et al., 2018). In these works, the transformation of the observation space to a low-dimensional space is investigated so that action selection can be performed using this low-dimensional space. Similar to these works, our framework can be also interpreted as a transformation of the original observation space where an effectively low-dimensional space is sought after. Instead of allowing a general class of transformations on the observations, here we consider a constrained setting so that only specific operations are allowed, for instance, we allow dropping some of the samples but we do not allow collecting observations and then applying arbitrary transformations on them. Our work associates a cost with obtaining observations. Cost of data acquisition in the context of Markov decision processes (MDPs) has been considered in a number of works, both as a direct cost on the observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002) or as an indirect cost of information sharing in multiple agent settings (Melo & Veloso, 2009; De Hauwere et al., 2010). Another related line of work is performed under the umbrella of configurable MDPs (Metelli et al., 2018; Silva et al., 2019) where the agent can modify the dynamics of the environment. Although in our setting, it is the accuracy of the observations rather than the dynamics of the environment that the agent can modify, in some settings our work can be also interpreted as a configurable MDP. We further discuss this point in Section 4.2. 3 PROPOSED FRAMEWORK AND THE SOLUTION APPROACH 3.1 PRELIMINARIES Consider a Markov decision process given by 〈S,A,P, R, Ps0 , γ〉 where S is the state space, A is the set of actions, P : S ×A× S → R denotes the transition probabilities, R : S ×A → R denotes the bounded reward function, Ps0 : S → R denotes the probability distribution over the initial state and γ ∈ (0, 1] is the discount factor. The agent, i.e. the decision maker, observes the state of the system st at time t and decides on its action at based on its policy π(s, a). The policy mapping of the agent π(s, a) : S × A → [0, 1] is possibly stochastic and gives the probability of taking the action a at the state s. After the agent implements the action at, it receives a reward r(st, at) and the environment moves to the next state st+1 which is governed by P and depends on at and st. 
The aim of the RL agent is to learn an optimal policy mapping π(s, a) so that the expected return, i.e. expected cumulative discounted reward, J(π) = Eat∼π,st∼P [ ∑ t γ tr(st, at)] is maximized. 3.2 PARTIAL OBSERVABILITY Although most RL algorithms are typically expressed in terms of MDPs, in typical real-life applications the states are not directly observable, i.e., the observations only provide partial, possibly inaccurate information. For instance, consider a vehicle which uses the noisy images with limited angle-of-view obtained from cameras mounted on the vehicle for autonomous-driving decisions. In such scenarios, the data used by the agent to make decisions is not a direct representation of the state of the world. Hence, we consider a partially observable Markov decision process (POMDP) where the above MDP is augmented by O and Po where O represents the set of observations and Po : S → O represents the observation probabilities. Accordingly, the policy mapping is now expressed as π(o, a) : O ×A → [0, 1]. The observation vector at time t is given by ot = [o1t ; . . . ; o n t ] ∈ Rn, where n is the dimension of the observation vector. The observations are governed by ot ∼ po(ot|st;βt) (1) where po(ot|st;βt) denotes the conditional probability distribution function (pdf) of ot given st and is parametrized by the accuracy vector βt = [β 1 t ; . . . ;β n t ] ∈ Rn (2) The parameter βit ≥ 0 represents the average accuracy of the observation component i at time step t, i.e. oit. For instance, say we have two observations, position o 1 and velocity o2. Then, β1t denotes the accuracy of the position and β2t denotes the accuracy of the velocity. As β i t increases, the accuracy of the observation oit decreases. Given st and βt, the observations are statistically independent, i.e. we have the factorization po(ot|st;βt) = ∏ i=1,...,n poi(o i t|st;βit) (3) where poi(oit|st;βit) denotes the conditional pdf of oit given st and βit . Note that βit determines the average accuracy, i.e. the accuracy in the statistical sense. We provide an example below: Example: Consider the common Gaussian additive noise model with oit = s i t + v i t, i = 1, . . . , n, (4) where st = [s1t ; . . . ; s n t ] ∈ Rn is the state vector and vt = [v1t ; . . . ; vnt ] ∈ Rn is the Gaussian noise vector with N (0,diag(σ2 vit )). Here, vt and vt′ are statistically independent (stat. ind.) for all t 6= t′ and also vt and st′ are stat. ind. for all t, t′. Under this observation model, a reasonable choice for βit is β i t = σ 2 vit . Hence, we parametrize pio(.) as p i o(o i t|sit;βit) = N (sit, βit = σ2vit). Note that the parametrization in terms of βit can be done in multiple ways, for instance, one may also adopt βit = σvit . 3.3 DECISION MAKER CHOOSES THE ACCURACY OF THE OBSERVATIONS The agent can choose βit , hence β i t is a decision variable. Observations have a cost which increases with increasing accuracy, i.e. the cost increases with decreasing βit . • In Scenario A, the agent can vary βit on a continuous scale, i.e. βit ∈ [0,∞]. • In Scenario B, the agent chooses between i) collecting all the observations with a fixed level of accuracy or ii) not getting any of them at all. This setting corresponds to the case with βt = β̄t1, β̄t ∈ {βF ,∞}, where 1 ∈ Rn denotes the vector of ones. Here βF ≥ 0 represents a fixed accuracy level. Note that βF can be zero, corresponding to the case ot = st. Remark 3.1 Our proposed setting can be interpreted as a constrained representation learning problem for RL. 
In particular, consider the problem of learning the best mapping h(.) with zt = h(ōt) (5) from the high-dimensional original observations ōt to some new possibly low-dimensional variables zt so that control can be performed reliably on zt instead of ōt. Such settings have been utilized in various influential work, see for instance E2C approach of Watter et al. (2015). The proposed approach can be also formulated in a representation framework. In particular, we interpret the possibly noisy observations ot as the effectively low-dimensional representation zt used in (5). Hence, consider the mapping h̄(.) ot = h̄(ōt), (6) where ot and ōt denote the noisy and the original measurements, respectively. Compared to (5), the family of the mappings allowed in (6) is constrained, i.e. one can only adjust the accuracy parameter instead of using arbitrary transformations from ōt to ot. Here, ot is effectively lowdimensional compared to ōt because i) noise decreases the dynamic range and allows effectively higher compression rates for the data (Scenario A); or ii) the total number of observations acquired is smaller (Scenario B). Note that not all transformations from st to ot can be written using (6) as an intermediate step. From this perspective, the formulation in (1) can be said to be more general than (6). 3.3.1 MOTIVATION The primary motivation behind the proposed framework is to reveal the inherent nature of the observation space in terms of usefulness of information the observations provide with respect to the task at hand. The secondary motivation is to provide a RL framework for solving decision making problems when the observations have a cost associated with them. In regard to the first task, we note the following: To reveal this information structure, we associate an artificial cost with the observations that increase with the accuracy. Hence, only the observation components (or the observation vectors) which are mostly likely to be informative and worth the cost will be collected. This decision heavily depends on the state that the agent believes itself to be in. For instance, in the case of balancing an object at an unstable state (such as pendulum in OpenAi Gym (Brockman et al., 2016)), we intuitively expect that the agent does not need accurate measurements when it is far away from the target state. Hence, we’re interested in questions such as “Is it possible to skip some observations and obtain satisfactory performance?”, “Which observation components (such as the position or the velocity) are most useful when the object is far away from (or close to) the target state?”, “How are these results affected by the possible discrepancy between the true state the agent is in and the one that it believes it to be in due to noisy or skipped observations?”. The proposed framework reveals this information structure within a systematic setting. In regard to the second task, we note that there are many practical problems where there is a cost associated with acquiring observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002), for instance consider the expensive medical tests (i.e. observations) that have to performed to diagnose a certain disease (Zubek & Dietterich, 2002) and wireless communications where there is a cost associated with channel usage (i.e. the right to use a communication channel) and a power cost that increases with the reliability of communications (Goldsmith, 2005; Cover & Thomas, 1991), see also Section A.1. 
The proposed framework can be used to find efficient observation strategies in such problems and to quantify the possible performance degradation due to the observation cost. Examples: The proposed scenarios A and B also correspond to practical data acquisition schemes. We now give some examples: An example for Scenario A is the case where the observations are obtained using different sensors on the device where the accuracy of each sensor can be individually adjusted. Another example is the case where the sensors are distributed over the environment and the readings of the sensors has to be relayed to central decision unit using individual compression of each observation type and wireless communications. Here, the compression and the wireless communication introduces an accuracy-cost trade-off where the agent can choose to operate at different points of. Please see Section A.1 for an example illustrating the accuracy-cost trade-off in wireless communications. An example for Scenario B is the remote control of a device, such as a drone, where all sensor readings of the device are compressed together and then sent to a decision unit. Since all readings are compressed and transmitted together, a decision of whether to transmit the whole observation vector or not has to be made, for instance due the limited power or wireless channel occupancy constraints. 3.4 REWARD SHAPING Reward shaping is a popular approach to direct RL agents towards a desired goal. Here, we want the agent not only move towards the original goal (which is encouraged by the original reward r), we also want it to learn to control βt. Hence, we propose reward shaping in the following form: r̃t = f(rt, βt) (7) where rt is the original reward, r̃t is the new modified reward and f(rt, βt) is a monotonically non-decreasing function of rt and βit , ∀i. Hence, the agent not only tries to maximize the average of the original reward but it also tries to maximize the “inaccuracy” of the measurements. This can be equivalently interpreted as minimizing the cost due to accurate measurements. In the case where there is a direct cost function ci(.) that increases with the accuracy of the observation oi (see, for instance, the example in Section A.1 where transmission power can be interpreted as the direct cost), the following additive form can be used r̃t = rt − λ n∑ i=1 ci(βit), (8) where ci(βit) is a non-increasing function of β i t and λ ≥ 0 is a weighting parameter. Hence, the agent’s aim is to maximize the original reward as well as minimize the cost of the observations. 4 EXPERIMENTS 4.1 SETTING Observation Models: We consider the following environments from the OpenAI Gym (Brockman et al., 2016): MountainCarContinuous-v0, Pendulum-v0, CartPole-v1. In this section, we illustrate how the modified environment with noisy observations is obtained for MountainCarContinuous-v0. The details and the parameter values for the other environments can be found in the Appendix A.2. We also consider a version of MountainCarContinuous-v0 with observations of the vertical position, which is presented in Section A.4. We first explain Scenario A, and then Scenario B. The original observations of the mountain car environment are the position xt and the velocity ẋt. In our framework, the agent has access to noisy versions of these original observations x̃t = xt +Qx ×∆xt(β1t ), (9a) ˜̇xt = ẋt +Qẋ ×∆ẋt(β2t ), (9b) where ∆xt(β1t ) ∼ U(−β1t , β1t ), ∆ẋt(β2t ) ∼ U(−β2t , β2t ) and U(−β, β) denotes the uniform distribution over [−β, β]. 
The noise variables are stat. ind., in particular ∆xt(β1t ) and ∆ẋt(β2t ) are stat. ind. from each other and also stat. ind. over time. Here, Qx and Qẋ determine the ranges of the noise levels and they are set as the 0.1 times of the full range of the corresponding observation, i.e., Qx = 0.18 and Qẋ = 0.014. Our agent chooses βit ∈ [0, 1] in addition to the original action of the environment, i.e. the force at that would be exerted on the car. The original reward of the environment per step is given by rt = −0.1× a2t . The reward is shaped using an additive model r̃t = rt + κA × ( 1 n n∑ i=1 βit ) , (10) where n = 2 and κA > 0 is chosen as 5× 10−6. The original environment has also a termination reward which the agent gets when the car passes the target position at 0.45, which is also provided to our agent upon successful termination. In Scenario B, at each time instant we either have no observation or we obtain the original observation vector, i.e. x̃t = xt and ˜̇xt = ẋt. These cases correspond to β̄t =∞ and β̄t = 0, respectively. The reward function is given as r̃t = rt + κB × g(β̄t) where κB = 0.5; and g(β̄t) = −1 for β̄t = 0, and 0 otherwise. In the implementation, we have mapped∞ to 1, i.e. the decision variable is β̄t ∈ {0, 1}, hence β̄t = 1 corresponds to not obtaining a sample in Scenario B. RL algorithm: We adopt a deep RL setting, combining reinforcement learning with deep learning using the policy-based approach Trust Region Policy Optimization (TRPO) (Schulman et al., 2015; Hill et al., 2018). The parameters are kept constant for all experiments and are provided in Appendix A.3. For Scenario A, at each time step, noisy observations obtained at that time step are fed to the algorithm as the observations. For Scenario B, the last acquired observation is fed to the algorithm as the observation at that time step. Plots: Unless otherwise stated, all results are reported as averages (such as average cumulative rewards and average βit) using 1000 episodes. For the plots, observation space is mapped to a grid with uniform intervals. Averages are taken with respect to the number of visits to each given range of the observation state. For example, for Scenario A the average of βit when x̃t ∈ [−0.1,+0.1] is shown as one average value at the center 0. For Scenario B, we report the sample skip frequency, i.e. the number of times the agent decided not to acquire a new observation when the last observed state of the agent falls into a given interval, such as the average sample skip frequency for x̃ ∈ [−0.1,+0.1] is reported as one value at 0. In all 2-D plots, the color pink indicates there was no visit to that observation state. 4.2 OVERVIEW We benchmark our results against the performance of the agent that use the original observations, and trained using the same RL algorithm. The resulting average cumulative rewards in terms of rt are presented in Table 1. We present the reward corresponding only to the original task so that we can evaluate the success of the agent in this task. These results illustrate that the agent can learn to adjust the accuracy level and still obtain successful performance. For the Mountain car environment, all agents have the same average return and for the others, the agents working with the noisy/skipped observations have a slightly weaker performance but still achieve the task of bringing/keeping the pendulum/pole in a vertical position in a reasonable number of time steps. 
At first sight, it may be surprising that the agent can learn to perform these tasks satisfactorily even if we have not injected any memory to our algorithm, for instance when we only use the current noisy observations for Scenario A. On the other hand, note that in these environments the observations are either noisy versions of hidden states which govern the dynamics or they are closely related to them. From the point of the agent that treats the noisy observations as state this can be interpreted as a configurable MDP (Metelli et al., 2018; Silva et al., 2019) where the agent controls the noise of the dynamics. Hence, the task of the agent can be interpreted as adjusting the noise level in the dynamics which does not necessarily require usage of memory in the decision maker. We now focus on the data collection strategies chosen by the agent for the mountain car and pendulum environments. The results for the other environments are provided in the appendix. 4.3 MOUNTAIN CAR The chosen noise levels and the sample skip frequencies for the mountain car environment are presented in Figure 1-2. Note that in Figure 1c, we present the sample skip frequency with respect to the velocity and the position on the same plot, where the legend also gives the corresponding x-axis label. In the mountain car environment, the car starts randomly around position −0.5 and it has to first go in the reverse direction (corresponding to a negative velocity) to climb the hill located around position −1.25 in order to gain momentum and climb to hill at the right (corresponding to a positive velocity) and reach the target location 0.45 which is at the top of this hill. The results reflect some of the trade-offs in this strategy: Figure 1a shows that most noisy observations in position and velocity (Scenario A) are preferred around −0.5 (where the car position is initialized), and the most accurate samples are taken when the car is around position −1.2. This is the position where the car has to make sure that it has reached to the top of the left hill so that it has enough momentum to climb the right hill. In the case of the dependence of the noise level on the velocity, Figure 1b shows that accurate samples are preferred when the velocity has high positive values. We note that this is not the only viable observation strategy and there are multiple observation strategies that give approximately the same average return in the original task. These can be explored using different Q and κ values in our framework. Figure 1c shows that approximately half of the samples are dropped in Scenario B regardless of the observation state, suggesting a high inherent sampling rate in the environment. This difference in the behaviour with the noisy and skipped observations illustrates the fundamental difference in these frameworks. In the case of noisy observations, the agent has to discover that the observations are uncertain and counteract this uncertainty. On the other hand, when taking perfect observations are possible, as in the case of Scenario B, the agent can internalize the exact environment dynamics (since mountain car environment has no inherent noise in its observations) and determine its exact state using the previous observed state and its action. Comparing Figure 2a-2b with Figure 2c, we observe that in the case of noisy observations a larger part of observation space is visited, which is partly due the fact that the plots are drawn according to the observations acquired by the agent and not the true states. 
Note that this does not affect the performance in the original task, as illustrated in Table 1. 4.4 PENDULUM The results for the pendulum are presented in Figure 3-4. Here, the task is to keep the pendulum at a vertical position, corresponding to an angle of 0. Figure 3a and Figure 4a show that observations with low position (i.e. angle) noise (Scenario A) are preferred when the pendulum is close to the vertical position and has relatively small angular velocity. On the other hand, when the samples can be completely skipped (Scenario B), the agent skips a large ratio of the samples in this region, as shown in Figure 3c and Figure 4c. Note that the agent spends most of the episode in this target region in the vertical position. Here, the agent prefers noiseless samples since a noisy sample may cause the control policy to choose a wild movement which might destabilize the pendulum. On the other hand, the agent may safely skip some of the samples at the upright position as the last sample is very close to current one because the angular velocity is typically low. 5 DISCUSSION AND CONCLUSIONS We have proposed a framework for revealing the information structure of the observation space in a systematic manner. We have adopted a reinforcement learning approach which utilizes a cost function which increases with the accuracy of the observations. Our results uncover the relative usefulness of different types of observations and the trade-offs within; and provide insights for further design of active data acquisition schemes for autonomous decision making. Further discussion of our results and some research directions are as follows: • Our results illustrate that settings with the inaccurate observations and skipped observations should be treated differently since the type of uncertainty that the agent has to counteract in these settings are inherently different. • Strategies for processing of the noisy/skipped observations should be investigated. Questions such as the following arise: “ Should all the processing be off-loaded to the RL agent or should the pre-processing of observations be performed, similar to Kalman filtering in the case of linear control under linear state space models (Ljung, 1999)?”, “How does the answer to the former question depend on the RL approach, the environment and the observation models?” • Our results suggest that inherent sampling rate of some of the standard RL environments may be higher than needed (for instance, see the Mountain Car environment where on average one can skip one out of every two samples without affecting the performance), indicating yet-another reason why some of these environments are seen as unchallenging for most of the state-of-art RL algorithms. • We have provided a quantification of the sensitivity of the agent’s performance to noisy/skipped observations at different observation regions illustrating that this sensitivity can be quite different based on the observation region. Utilizing this information for supporting robust designs as well as preparing adversarial examples is an interesting line of future research. A APPENDIX A.1 EXAMPLE: WIRELESS COMMUNICATIONS We now provide a motivating example to illustrate how observations can have a cost that is increasing with the accuracy and the decision maker can choose this accuracy level. 
A standard model for single terminal wireless communications is the additive white Gaussian noise (AWGN) channel (Goldsmith, 2005; Cover & Thomas, 1991) yt = xt + vt (11) where xt represents the channel input (i.e. message at the transmitter ) at time t, yt represents the corresponding channel output (i.e. the observation at the receiver) and the white Gaussian random process vt represents the channel noise. The capacity of this channel, i.e. the maximum number of information bits that can be sent, is determined by the signal-to-noise ratio (SNR), i.e. the average power in xt divided by the average power in vt. In particular, the capacity is given by (Goldsmith, 2005; Cover & Thomas, 1991) C = log2(1 + Px Pv ) (12) where Px and Pv are the average power levels of xt and vt, respectively. Hence, the capacity increases with Px. On the other hand, one cannot use a very high value of Px since broadcasting at high power levels is costly. In particular, Px directly contributes to the actual power required by the transmitter. Note that Px controls the accuracy of the observations. In particular, by dividing both sides by √ Px, (11) can be equivalently represented as ȳt = x̄t + v̄t (13) where ȳt , 1√Px yt, x̄t , 1√ Px xt and v̄t , 1√Px vt. The average power of x̄t is 1 and average power of v̄t is Pv/Px. The SNR, and hence, the channel capacity are the same in (11) and (13) and hence these representations are equivalent for all relevant purposes. In particular, determining Px directly determines the effective noise level. With vt Gaussian, we have vt ∼ N (0, Pv). Hence, the conditional distribution of the observations ȳt is given by p(ȳt|x̄t) = N (x̄t, Pv/Px) where Pv/Px can be chosen as βt. Hence, as the accuracy of the observations increases (Pv/Px decreases ), the cost of the observations (Px) increases. In this context, several interesting questions that relates to the accuracy of the observations and the power cost can be posed, for instance how to distribute a certain total power budget Ptotal over channels yit = x i t + v i t with different intrinsic power levels Pvi . This example illustrates the basic premise of our problem setting in a practical scenario; a decision maker who can adjust the noise levels of the observations which has a cost associated with them. It also suggests that the constraints on the wireless communications constitute a general and potential hindrance in remote control applications. Consider a device that makes the observations and takes actions but gets its commands (i.e. decisions about which actions to take) from another decision unit, such as the control of a robot or a drone by a remotely run RL algorithm which is controlling a large number of such units. Here, it is beneficial to consider policies that can work with inaccurate observations since sending accurate measurements are costly from a power perspective, which will be particularly important for a device with a limited battery, such as a drone flying at a remote location. Similarly, if the wireless communication channel cannot be used at all times, for instance, due to the limited bandwidth available, RL methods that can utilize the limited communication resources efficiently and optimize performance under such conditions are needed. A.2 ENVIRONMENT PARAMETERS In this section, we provide the parameters for all the environments in the experiments that are used directly from OpenAI Gym. We also consider a vertical position version of MountainCarContinuousv0, which is explained in Section A.4. 
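As a small numerical illustration of the capacity-power relation (12) and the equivalent representation (13) from Section A.1 above, the following sketch (our own addition, not part of the paper) shows that increasing the transmit power P_x raises the capacity while shrinking the effective noise level P_v/P_x, which plays the role of β_t.

```python
import numpy as np

def awgn_capacity(P_x, P_v):
    """Capacity of the AWGN channel in bits per channel use, eq. (12)."""
    return np.log2(1.0 + P_x / P_v)

P_v = 1.0                                  # fixed channel-noise power
for P_x in [0.5, 1.0, 2.0, 4.0, 8.0]:
    beta = P_v / P_x                       # effective noise level in eq. (13)
    print(f"P_x={P_x:4.1f}  capacity={awgn_capacity(P_x, P_v):.3f} bits  "
          f"effective noise (beta)={beta:.3f}")
```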
Consider a generic environment with the observation variables oit, where o i t denotes the i th observation variable at time t. The limited-accuracy observations õtt are obtained using õit = o i t +Q i ×∆oit(βit) (14) where ∆oit ∼ U(−βt, βt). We choose Q1 = 0.1 and Q2 = 0.2 for the Pendulum-v0, Qi = 0.2 for the CartPole-v1, and Qi = 0.1 for the MountainCarContinuous-v0. The ordering of the observations is the same with the ones provided in OpenAI Gym (Brockman et al., 2016). For instance, for MountainCarContinuous-v0, position and velocity correspond to o1 and o2, respectively. Note that indices start with i = 0 in OpenAI Gym whereas here we start with i = 1. The reward function under Scenario A is given by r̃t = rt + κA × ( 1 n n∑ i=1 βit ) , (15) where rt is the original reward and κA > 0. For Scenario B, it is given by r̃t = rt + κB × g(β̄t) where g(β̄t) = −1 for β̄t = 0, and 0 otherwise. The associated κ values for different environments are presented in Table 2. The scaling factor Q’s for the noise levels and κ values for the reward function are determined empirically by first fixing Q (as a percentage of the full range of the associated observation) and searching for κ values that provide satisfactory performance in the original task. Note that the rest of the values are determined by the specifications of the environments in OpenAI Gym. The results depend on the values of Q and κ. For instance, using larger κ puts a larger weight on the reward due to noise. Hence, the agent prioritizes the reward due to noise instead of the reward from the original environment and, for large enough κ values, the agent cannot learn to perform the original task. A.3 TRPO PARAMETERS The same TRPO parameters are used in all experiments. These are provided in Table 3. A.4 MOUNTAIN CAR WITH OBSERVATIONS OF THE VERTICAL POSITION To have a better understanding of the effect of partial observability, we have investigated the following modification on MountainCarContinuous-v0: Instead of the horizontal position, the agent uses the vertical position as the observation. Hence, the observations are given by ỹt = yt +Qy ×∆yt(β1t ), (16a) ˜̇xt = ẋt +Qẋ ×∆ẋt(β2t ), (16b) where the vertical position yt ∈ [0.1, 1] is given by yt = 0.45 sin(3xt) + 0.55 (Brockman et al., 2016) and ∆yt(β1t ) ∼ U(−β1t , β1t ) and ∆ẋt(β2t ) ∼ U(−β2t , β2t ). Note that due to sin(·) function, for most of the yt values in the range [0.1, 1], there are two possible horizontal position (xt) values. Hence, this environment constitutes a POMDP even without any observation noise. Similar to our experiments with the original environment, Qy and Qẋ are set as the 0.1 times of the full range of the corresponding observation, i.e., Qx = 0.09 and Qẋ = 0.014. As before, the reward is calculated with (10) with κA = 5× 10−6. The average return due to the original task is 93, hence the agent again learns to perform the original task successfully, see Table 1 for comparison. The chosen noise levels are presented in Figure 5-6. Comparing these results with Figure 1-2 where the agent takes the horizontal position observation, we observe that the general trend of the velocity noise with respect to the velocity are the same in both settings, i.e. decreasing as the agent moves from the negative velocities to positive velocities. Comparing Figure 5 with Figure 1, we observe that lower relative noise levels are preferred for the setting with the vertical location observations. 
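The observation model (16) of this vertical-position variant can be sketched as follows (an illustrative reconstruction; the helper name and the example positions are ours). The second call shows the POMDP ambiguity: two different horizontal positions map to essentially the same height.

```python
import numpy as np

def vertical_observation(x, x_dot, beta, Qy=0.09, Qxdot=0.014,
                         rng=np.random.default_rng()):
    """Noisy vertical-position observation of eq. (16):
    y = 0.45*sin(3x) + 0.55, with uniform noise scaled by Q and beta."""
    y = 0.45 * np.sin(3.0 * x) + 0.55            # vertical position in [0.1, 1]
    y_tilde = y + Qy * rng.uniform(-beta[0], beta[0])
    xdot_tilde = x_dot + Qxdot * rng.uniform(-beta[1], beta[1])
    return np.array([y_tilde, xdot_tilde])

# two horizontal positions with (almost) the same height: the POMDP ambiguity
print(vertical_observation(x=-0.3, x_dot=0.0, beta=[0.0, 0.0]))
print(vertical_observation(x=-0.7472, x_dot=0.0, beta=[0.0, 0.0]))
```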
A.5 ADDITIONAL RESULTS - CART POLE We now provide the results for the cart pole environment in Figures 7-10, which were not included in the main text due to page limitations. For the sake of brevity, the noise levels over observation pairs are only provided for the position noise, whereas averages are provided for all observation types. [Figure panels: (a) Scenario A, average noise level vs. pole angular velocity for the cart position, cart velocity, pole angle, and pole angular velocity noise; (b) Scenario B, sample skip frequency vs. cart position and cart velocity; (c) Scenario B, sample skip frequency vs. pole angle and pole angular velocity.]
1. What is the main contribution of the paper regarding reinforcement learning? 2. What are the strengths and weaknesses of the proposed approach in terms of its application to partially observable MDP settings? 3. How does the reviewer assess the significance of the paper's ideas in comparison to prior works on "tuning" MDPs and learning state representations in multiagent settings? 4. What are the limitations of the selected domains in providing insights into the potential impact of the proposed approach? 5. How would considering richer domains with partial observability or richer perceptual inputs benefit the paper?
Review
Review = Overview = The paper proposes a reinforcement learning algorithm that enables an agent to "fine tune" the quality/accuracy of its sensors to its current task. The paper considers a partially observable MDP setting where the agent, besides the control actions, is endowed with a set of "tuning actions" that control the noise in the perception of the different components of the state. Additional reward terms are introduced that discourage the use of "tuning". By enabling the agent to fine tune its perception to the current task, the paper seeks to also investigate the relative importance of different state features in terms of the task. = Positive points = The paper is well written and the ideas clearly presented. The ideas seem vaguely related with recent work on "tuning" MDPs [a] and some older work on learning state representations in multiagent settings [b,c], where the agents are allowed to "pay" to have better models or perceptions. The paper proposes the use of similar ideas in a completely different context - to identify relevant information state information in POMDP settings. = Negative points = My main criticism is concerned with the particular domains considered, which I believe are too structured to provide a clear understanding of the potential impact of the proposed approach. = Comments = I believe that the problem considered in the paper is interesting and follows some recent work on "tuning" MDPs (see ref[a] below). The approach explored is quite simple but that is not an inconvenient per se. My main criticism lies in the fact that -- in my understanding -- the domains selected are too structured to provide really interesting insights. In particular, all domains considered are classical control problems with essentially deterministic dynamics and full observability. The approach in the paper injects artificial additive noise in the state as perceived by the agent (the paper only provides explicit information regarding the noise in the Mountain Car domain, but I'm assuming that is similar in the other domains). Now I may be missing something, but it seems to me that, from the agent's perspective, this is equivalent to adding noise to the dynamics of the environment, since the agent treats the observations as state. Therefore, from the agent's perspective, the practical effect of the "sensor tuning" is to actually attenuate the noise in the dynamics, which partly explains the results provided. This renders this work particularly close to those on MDP tuning referred above, and more discussion in this direction would be appreciated. I think that the paper would greatly benefit from considering richer domains, either where partial observability is a central issue -- such as those from the POMDP literature -- or with richer perceptual inputs --- such as those from game domains. = References = [a] A. Metelli, M. Mutti, M. Restelli. "Configurable Markov Decision Processes." Proc. 35th Int. Conf. Machine Learning, pp. 3491-3500, 2018. [b] F. Melo, M. Veloso. "Learning of coordination: Exploiting sparse interactions in multiagent systems." Proc. 8th Int. Conf. Autonomous Agents and Multiagent Systems, pp. 773-780, 2009. [c] Y. De Hauwere, P. Vrancx, A. Nowé. "Learning multi-agent state space representations." Proc. 9th Int. Conf. Autonomous Agents and Multiagent Systems, pp. 715-722, 2010.
ICLR
Title Learning to Observe with Reinforcement Learning Abstract We consider a decision making problem where an autonomous agent decides on which actions to take based on the observations it collects from the environment. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity) and the dependence of this on the state of agent (such as at the bottom versus top of a hill). We approach this problem by associating a cost with collecting observations which increases with the accuracy. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. By uncovering the relative usefulness of different types of observations and trade-offs within, these results also provide insights for further design of active data acquisition schemes. N/A We consider a decision making problem where an autonomous agent decides on which actions to take based on the observations it collects from the environment. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity) and the dependence of this on the state of agent (such as at the bottom versus top of a hill). We approach this problem by associating a cost with collecting observations which increases with the accuracy. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. By uncovering the relative usefulness of different types of observations and trade-offs within, these results also provide insights for further design of active data acquisition schemes. 1 INTRODUCTION Autonomous decision making relies on collecting data, i.e. observations, from the environment where the actions are decided based on the observations. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity). 
Revealing this structure is challenging since the usefulness of the information that an observation can bring is a priori unknown and depends on the environment as well as the current knowledge state of the decision-maker, for instance, whether the agent is at the bottom versus the top of a hill and how sure the agent is about its position. Hence, we’re interested in questions such as “Instead of collecting all available observations, is it possible to skip some observations and obtain satisfactory performance?”, “Which observation components (such as the position or the velocity) are the most useful when the object is far away from (or close to) the target state?”. The primary aim of this work is to reveal this information structure of the observation space within a systematic framework. We approach this problem by associating a cost with collecting observations which increases with the accuracy. The agent can choose the accuracy level of its observations. Since cost increases with the accuracy, we expect that the agent will choose to collect only the observations which are most likely to be informative and worth the cost. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. 2 RELATED WORK A related setting is active learning (Settles, 2010; Donmez et al., 2010) where an agent decides which queries to perform, i.e., which samples to take, during training. For instance, in an active learning set-up, an agent learning to classify images can decide which images from a large dataset it would like to have labels for in order to have improved classification performance. In a standard active learning approach (Settles, 2010; Donmez et al., 2010) as well as its extensions in RL (Lopes et al., 2009), the main aim is to reduce the size of the training set, hence the agent tries to determine informative queries during training so that the performance during the test phase is optimal. In the test phase, the agent cannot ask any questions; instead, it will answer questions, for instance, it will be given images to label. In contrast, in our setting the agent continues to perform queries during the test phase, since it still needs to collect observations during the test phase, for instance as in the case of collecting camera images for an autonomous driving application. From this perspective, one of our main aims is to reduce the number of queries the agent performs during this actual operation as opposed to number of queries in its training phase. 
Another related line of work consists of the RL approaches that facilitate efficient exploration of state space, such as curiosity-driven RL and intrinsic motivation (Pathak et al., 2017; Bellemare et al., 2016; Mohamed & Rezende, 2015; Still & Precup, 2012) or active-inference based methods utilizing free-energy (Ueltzhöffer, 2018; Schwöbel et al., 2018); and the works that focus on operation with limited data using a model (Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016). In these works, the focus is either finding informative samples (Pathak et al., 2017) or using a limited number of samples/trials as much as possible by making use of a forward dynamics model (Boedecker et al., 2014; Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016) during the agent’s training. In contrast to these approaches, we would like to decrease the effective size of the data or the number of samples taken during the test phase, i.e. operation of the agent after the training phase is over. Representation learning for control and RL constitutes another line of related work (Watter et al., 2015; Hafner et al., 2019; Banijamali et al., 2018). In these works, the transformation of the observation space to a low-dimensional space is investigated so that action selection can be performed using this low-dimensional space. Similar to these works, our framework can be also interpreted as a transformation of the original observation space where an effectively low-dimensional space is sought after. Instead of allowing a general class of transformations on the observations, here we consider a constrained setting so that only specific operations are allowed, for instance, we allow dropping some of the samples but we do not allow collecting observations and then applying arbitrary transformations on them. Our work associates a cost with obtaining observations. Cost of data acquisition in the context of Markov decision processes (MDPs) has been considered in a number of works, both as a direct cost on the observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002) or as an indirect cost of information sharing in multiple agent settings (Melo & Veloso, 2009; De Hauwere et al., 2010). Another related line of work is performed under the umbrella of configurable MDPs (Metelli et al., 2018; Silva et al., 2019) where the agent can modify the dynamics of the environment. Although in our setting, it is the accuracy of the observations rather than the dynamics of the environment that the agent can modify, in some settings our work can be also interpreted as a configurable MDP. We further discuss this point in Section 4.2. 3 PROPOSED FRAMEWORK AND THE SOLUTION APPROACH 3.1 PRELIMINARIES Consider a Markov decision process given by 〈S,A,P, R, Ps0 , γ〉 where S is the state space, A is the set of actions, P : S ×A× S → R denotes the transition probabilities, R : S ×A → R denotes the bounded reward function, Ps0 : S → R denotes the probability distribution over the initial state and γ ∈ (0, 1] is the discount factor. The agent, i.e. the decision maker, observes the state of the system st at time t and decides on its action at based on its policy π(s, a). The policy mapping of the agent π(s, a) : S × A → [0, 1] is possibly stochastic and gives the probability of taking the action a at the state s. After the agent implements the action at, it receives a reward r(st, at) and the environment moves to the next state st+1 which is governed by P and depends on at and st. 
The aim of the RL agent is to learn an optimal policy mapping π(s, a) so that the expected return, i.e. the expected cumulative discounted reward, J(π) = E_{a_t∼π, s_t∼P}[ Σ_t γ^t r(s_t, a_t) ], is maximized. 3.2 PARTIAL OBSERVABILITY Although most RL algorithms are typically expressed in terms of MDPs, in typical real-life applications the states are not directly observable, i.e., the observations only provide partial, possibly inaccurate information. For instance, consider a vehicle which uses the noisy images with limited angle-of-view obtained from cameras mounted on the vehicle for autonomous-driving decisions. In such scenarios, the data used by the agent to make decisions is not a direct representation of the state of the world. Hence, we consider a partially observable Markov decision process (POMDP) where the above MDP is augmented by O and P_o, where O represents the set of observations and P_o : S → O represents the observation probabilities. Accordingly, the policy mapping is now expressed as π(o, a) : O × A → [0, 1]. The observation vector at time t is given by o_t = [o_t^1; . . . ; o_t^n] ∈ R^n, where n is the dimension of the observation vector. The observations are governed by o_t ∼ p_o(o_t | s_t; β_t), (1) where p_o(o_t | s_t; β_t) denotes the conditional probability distribution function (pdf) of o_t given s_t and is parametrized by the accuracy vector β_t = [β_t^1; . . . ; β_t^n] ∈ R^n. (2) The parameter β_t^i ≥ 0 represents the average accuracy of the observation component i at time step t, i.e. o_t^i. For instance, say we have two observations, position o^1 and velocity o^2. Then, β_t^1 denotes the accuracy of the position and β_t^2 denotes the accuracy of the velocity. As β_t^i increases, the accuracy of the observation o_t^i decreases. Given s_t and β_t, the observations are statistically independent, i.e. we have the factorization p_o(o_t | s_t; β_t) = ∏_{i=1,...,n} p_{o^i}(o_t^i | s_t; β_t^i), (3) where p_{o^i}(o_t^i | s_t; β_t^i) denotes the conditional pdf of o_t^i given s_t and β_t^i. Note that β_t^i determines the average accuracy, i.e. the accuracy in the statistical sense. We provide an example below: Example: Consider the common Gaussian additive noise model with o_t^i = s_t^i + v_t^i, i = 1, . . . , n, (4) where s_t = [s_t^1; . . . ; s_t^n] ∈ R^n is the state vector and v_t = [v_t^1; . . . ; v_t^n] ∈ R^n is the Gaussian noise vector with N(0, diag(σ²_{v_t^i})). Here, v_t and v_{t′} are statistically independent (stat. ind.) for all t ≠ t′ and also v_t and s_{t′} are stat. ind. for all t, t′. Under this observation model, a reasonable choice for β_t^i is β_t^i = σ²_{v_t^i}. Hence, we parametrize p_{o^i}(.) as p_{o^i}(o_t^i | s_t^i; β_t^i) = N(s_t^i, β_t^i = σ²_{v_t^i}). Note that the parametrization in terms of β_t^i can be done in multiple ways; for instance, one may also adopt β_t^i = σ_{v_t^i}. 3.3 DECISION MAKER CHOOSES THE ACCURACY OF THE OBSERVATIONS The agent can choose β_t^i, hence β_t^i is a decision variable. Observations have a cost which increases with increasing accuracy, i.e. the cost increases with decreasing β_t^i. • In Scenario A, the agent can vary β_t^i on a continuous scale, i.e. β_t^i ∈ [0, ∞]. • In Scenario B, the agent chooses between i) collecting all the observations with a fixed level of accuracy or ii) not getting any of them at all. This setting corresponds to the case with β_t = β̄_t 1, β̄_t ∈ {β_F, ∞}, where 1 ∈ R^n denotes the vector of ones. Here β_F ≥ 0 represents a fixed accuracy level. Note that β_F can be zero, corresponding to the case o_t = s_t. Remark 3.1 Our proposed setting can be interpreted as a constrained representation learning problem for RL. 
In particular, consider the problem of learning the best mapping h(.) with zt = h(ōt) (5) from the high-dimensional original observations ōt to some new possibly low-dimensional variables zt so that control can be performed reliably on zt instead of ōt. Such settings have been utilized in various influential work, see for instance E2C approach of Watter et al. (2015). The proposed approach can be also formulated in a representation framework. In particular, we interpret the possibly noisy observations ot as the effectively low-dimensional representation zt used in (5). Hence, consider the mapping h̄(.) ot = h̄(ōt), (6) where ot and ōt denote the noisy and the original measurements, respectively. Compared to (5), the family of the mappings allowed in (6) is constrained, i.e. one can only adjust the accuracy parameter instead of using arbitrary transformations from ōt to ot. Here, ot is effectively lowdimensional compared to ōt because i) noise decreases the dynamic range and allows effectively higher compression rates for the data (Scenario A); or ii) the total number of observations acquired is smaller (Scenario B). Note that not all transformations from st to ot can be written using (6) as an intermediate step. From this perspective, the formulation in (1) can be said to be more general than (6). 3.3.1 MOTIVATION The primary motivation behind the proposed framework is to reveal the inherent nature of the observation space in terms of usefulness of information the observations provide with respect to the task at hand. The secondary motivation is to provide a RL framework for solving decision making problems when the observations have a cost associated with them. In regard to the first task, we note the following: To reveal this information structure, we associate an artificial cost with the observations that increase with the accuracy. Hence, only the observation components (or the observation vectors) which are mostly likely to be informative and worth the cost will be collected. This decision heavily depends on the state that the agent believes itself to be in. For instance, in the case of balancing an object at an unstable state (such as pendulum in OpenAi Gym (Brockman et al., 2016)), we intuitively expect that the agent does not need accurate measurements when it is far away from the target state. Hence, we’re interested in questions such as “Is it possible to skip some observations and obtain satisfactory performance?”, “Which observation components (such as the position or the velocity) are most useful when the object is far away from (or close to) the target state?”, “How are these results affected by the possible discrepancy between the true state the agent is in and the one that it believes it to be in due to noisy or skipped observations?”. The proposed framework reveals this information structure within a systematic setting. In regard to the second task, we note that there are many practical problems where there is a cost associated with acquiring observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002), for instance consider the expensive medical tests (i.e. observations) that have to performed to diagnose a certain disease (Zubek & Dietterich, 2002) and wireless communications where there is a cost associated with channel usage (i.e. the right to use a communication channel) and a power cost that increases with the reliability of communications (Goldsmith, 2005; Cover & Thomas, 1991), see also Section A.1. 
The proposed framework can be used to find efficient observation strategies in such problems and to quantify the possible performance degradation due to the observation cost. Examples: The proposed Scenarios A and B also correspond to practical data acquisition schemes. We now give some examples: An example for Scenario A is the case where the observations are obtained using different sensors on the device, where the accuracy of each sensor can be individually adjusted. Another example is the case where the sensors are distributed over the environment and the readings of the sensors have to be relayed to a central decision unit using individual compression of each observation type and wireless communications. Here, the compression and the wireless communication introduce an accuracy-cost trade-off, and the agent can choose at which point of this trade-off to operate. Please see Section A.1 for an example illustrating the accuracy-cost trade-off in wireless communications. An example for Scenario B is the remote control of a device, such as a drone, where all sensor readings of the device are compressed together and then sent to a decision unit. Since all readings are compressed and transmitted together, a decision of whether to transmit the whole observation vector or not has to be made, for instance due to limited power or wireless channel occupancy constraints. 3.4 REWARD SHAPING Reward shaping is a popular approach to direct RL agents towards a desired goal. Here, we want the agent not only to move towards the original goal (which is encouraged by the original reward r), but also to learn to control β_t. Hence, we propose reward shaping in the following form: r̃_t = f(r_t, β_t), (7) where r_t is the original reward, r̃_t is the new modified reward and f(r_t, β_t) is a monotonically non-decreasing function of r_t and β_t^i, ∀i. Hence, the agent not only tries to maximize the average of the original reward but also tries to maximize the “inaccuracy” of the measurements. This can be equivalently interpreted as minimizing the cost due to accurate measurements. In the case where there is a direct cost function c_i(.) that increases with the accuracy of the observation o^i (see, for instance, the example in Section A.1 where transmission power can be interpreted as the direct cost), the following additive form can be used: r̃_t = r_t − λ Σ_{i=1}^{n} c_i(β_t^i), (8) where c_i(β_t^i) is a non-increasing function of β_t^i and λ ≥ 0 is a weighting parameter. Hence, the agent’s aim is to maximize the original reward as well as minimize the cost of the observations. 4 EXPERIMENTS 4.1 SETTING Observation Models: We consider the following environments from the OpenAI Gym (Brockman et al., 2016): MountainCarContinuous-v0, Pendulum-v0, CartPole-v1. In this section, we illustrate how the modified environment with noisy observations is obtained for MountainCarContinuous-v0. The details and the parameter values for the other environments can be found in Appendix A.2. We also consider a version of MountainCarContinuous-v0 with observations of the vertical position, which is presented in Section A.4. We first explain Scenario A, and then Scenario B. The original observations of the mountain car environment are the position x_t and the velocity ẋ_t. In our framework, the agent has access to noisy versions of these original observations: x̃_t = x_t + Q_x × Δx_t(β_t^1), (9a) ˜̇x_t = ẋ_t + Q_ẋ × Δẋ_t(β_t^2), (9b) where Δx_t(β_t^1) ∼ U(−β_t^1, β_t^1), Δẋ_t(β_t^2) ∼ U(−β_t^2, β_t^2) and U(−β, β) denotes the uniform distribution over [−β, β]. 
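Before continuing with the experimental setup, here is a minimal, purely illustrative sketch of the additive shaping in (8) with a hypothetical cost c_i(β) = 1/(1 + β), which is non-increasing in β; the experiments below instead use the specific bonus form in (10).

```python
import numpy as np

def shaped_reward(r, beta, lam=0.01, cost=lambda b: 1.0 / (1.0 + b)):
    """Additive reward shaping of eq. (8): r_tilde = r - lam * sum_i c_i(beta_i).
    `cost` must be non-increasing in beta (accurate observations cost more)."""
    beta = np.asarray(beta, dtype=float)
    return r - lam * np.sum(cost(beta))

# accurate observations (small beta) are penalized more heavily
print(shaped_reward(r=1.0, beta=[0.0, 0.0]))   # pays the full observation cost
print(shaped_reward(r=1.0, beta=[1.0, 1.0]))   # cheaper, noisier observations
```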
The noise variables are stat. ind., in particular ∆xt(β1t ) and ∆ẋt(β2t ) are stat. ind. from each other and also stat. ind. over time. Here, Qx and Qẋ determine the ranges of the noise levels and they are set as the 0.1 times of the full range of the corresponding observation, i.e., Qx = 0.18 and Qẋ = 0.014. Our agent chooses βit ∈ [0, 1] in addition to the original action of the environment, i.e. the force at that would be exerted on the car. The original reward of the environment per step is given by rt = −0.1× a2t . The reward is shaped using an additive model r̃t = rt + κA × ( 1 n n∑ i=1 βit ) , (10) where n = 2 and κA > 0 is chosen as 5× 10−6. The original environment has also a termination reward which the agent gets when the car passes the target position at 0.45, which is also provided to our agent upon successful termination. In Scenario B, at each time instant we either have no observation or we obtain the original observation vector, i.e. x̃t = xt and ˜̇xt = ẋt. These cases correspond to β̄t =∞ and β̄t = 0, respectively. The reward function is given as r̃t = rt + κB × g(β̄t) where κB = 0.5; and g(β̄t) = −1 for β̄t = 0, and 0 otherwise. In the implementation, we have mapped∞ to 1, i.e. the decision variable is β̄t ∈ {0, 1}, hence β̄t = 1 corresponds to not obtaining a sample in Scenario B. RL algorithm: We adopt a deep RL setting, combining reinforcement learning with deep learning using the policy-based approach Trust Region Policy Optimization (TRPO) (Schulman et al., 2015; Hill et al., 2018). The parameters are kept constant for all experiments and are provided in Appendix A.3. For Scenario A, at each time step, noisy observations obtained at that time step are fed to the algorithm as the observations. For Scenario B, the last acquired observation is fed to the algorithm as the observation at that time step. Plots: Unless otherwise stated, all results are reported as averages (such as average cumulative rewards and average βit) using 1000 episodes. For the plots, observation space is mapped to a grid with uniform intervals. Averages are taken with respect to the number of visits to each given range of the observation state. For example, for Scenario A the average of βit when x̃t ∈ [−0.1,+0.1] is shown as one average value at the center 0. For Scenario B, we report the sample skip frequency, i.e. the number of times the agent decided not to acquire a new observation when the last observed state of the agent falls into a given interval, such as the average sample skip frequency for x̃ ∈ [−0.1,+0.1] is reported as one value at 0. In all 2-D plots, the color pink indicates there was no visit to that observation state. 4.2 OVERVIEW We benchmark our results against the performance of the agent that use the original observations, and trained using the same RL algorithm. The resulting average cumulative rewards in terms of rt are presented in Table 1. We present the reward corresponding only to the original task so that we can evaluate the success of the agent in this task. These results illustrate that the agent can learn to adjust the accuracy level and still obtain successful performance. For the Mountain car environment, all agents have the same average return and for the others, the agents working with the noisy/skipped observations have a slightly weaker performance but still achieve the task of bringing/keeping the pendulum/pole in a vertical position in a reasonable number of time steps. 
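As a concrete sketch of Scenario B of Section 4.1, one can use a wrapper that appends a binary skip decision β̄_t to the action: when a sample is acquired, the true observation is returned and the κ_B penalty is applied; when it is skipped, the last acquired observation is repeated. This is an illustrative reconstruction for a continuous-action environment using the older 4-tuple Gym API; only κ_B = 0.5 is taken from the paper, and the names are ours.

```python
import numpy as np
import gym


class ScenarioBWrapper(gym.Wrapper):
    """Sample-skipping wrapper: beta_bar = 1 means 'do not acquire a new
    observation'; acquiring one incurs the kappa_B penalty (g(beta_bar) = -1)."""

    def __init__(self, env, kappa_B=0.5):
        super().__init__(env)
        self.kappa_B = kappa_B
        self.last_obs = None
        # augmented action: original action followed by the binary skip flag
        low = np.concatenate([env.action_space.low, [0.0]])
        high = np.concatenate([env.action_space.high, [1.0]])
        self.action_space = gym.spaces.Box(low=low, high=high, dtype=np.float32)

    def reset(self, **kwargs):
        self.last_obs = self.env.reset(**kwargs)
        return self.last_obs

    def step(self, action):
        a, skip = action[:-1], action[-1] > 0.5   # last entry is beta_bar
        obs, reward, done, info = self.env.step(a)
        if skip:
            obs = self.last_obs                   # repeat the last acquired sample
        else:
            reward = reward - self.kappa_B        # pay for an accurate sample
            self.last_obs = obs
        return obs, reward, done, info
```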
At first sight, it may be surprising that the agent can learn to perform these tasks satisfactorily even if we have not injected any memory to our algorithm, for instance when we only use the current noisy observations for Scenario A. On the other hand, note that in these environments the observations are either noisy versions of hidden states which govern the dynamics or they are closely related to them. From the point of the agent that treats the noisy observations as state this can be interpreted as a configurable MDP (Metelli et al., 2018; Silva et al., 2019) where the agent controls the noise of the dynamics. Hence, the task of the agent can be interpreted as adjusting the noise level in the dynamics which does not necessarily require usage of memory in the decision maker. We now focus on the data collection strategies chosen by the agent for the mountain car and pendulum environments. The results for the other environments are provided in the appendix. 4.3 MOUNTAIN CAR The chosen noise levels and the sample skip frequencies for the mountain car environment are presented in Figure 1-2. Note that in Figure 1c, we present the sample skip frequency with respect to the velocity and the position on the same plot, where the legend also gives the corresponding x-axis label. In the mountain car environment, the car starts randomly around position −0.5 and it has to first go in the reverse direction (corresponding to a negative velocity) to climb the hill located around position −1.25 in order to gain momentum and climb to hill at the right (corresponding to a positive velocity) and reach the target location 0.45 which is at the top of this hill. The results reflect some of the trade-offs in this strategy: Figure 1a shows that most noisy observations in position and velocity (Scenario A) are preferred around −0.5 (where the car position is initialized), and the most accurate samples are taken when the car is around position −1.2. This is the position where the car has to make sure that it has reached to the top of the left hill so that it has enough momentum to climb the right hill. In the case of the dependence of the noise level on the velocity, Figure 1b shows that accurate samples are preferred when the velocity has high positive values. We note that this is not the only viable observation strategy and there are multiple observation strategies that give approximately the same average return in the original task. These can be explored using different Q and κ values in our framework. Figure 1c shows that approximately half of the samples are dropped in Scenario B regardless of the observation state, suggesting a high inherent sampling rate in the environment. This difference in the behaviour with the noisy and skipped observations illustrates the fundamental difference in these frameworks. In the case of noisy observations, the agent has to discover that the observations are uncertain and counteract this uncertainty. On the other hand, when taking perfect observations are possible, as in the case of Scenario B, the agent can internalize the exact environment dynamics (since mountain car environment has no inherent noise in its observations) and determine its exact state using the previous observed state and its action. Comparing Figure 2a-2b with Figure 2c, we observe that in the case of noisy observations a larger part of observation space is visited, which is partly due the fact that the plots are drawn according to the observations acquired by the agent and not the true states. 
Note that this does not affect the performance in the original task, as illustrated in Table 1. 4.4 PENDULUM The results for the pendulum are presented in Figure 3-4. Here, the task is to keep the pendulum at a vertical position, corresponding to an angle of 0. Figure 3a and Figure 4a show that observations with low position (i.e. angle) noise (Scenario A) are preferred when the pendulum is close to the vertical position and has relatively small angular velocity. On the other hand, when the samples can be completely skipped (Scenario B), the agent skips a large ratio of the samples in this region, as shown in Figure 3c and Figure 4c. Note that the agent spends most of the episode in this target region in the vertical position. Here, the agent prefers noiseless samples since a noisy sample may cause the control policy to choose a wild movement which might destabilize the pendulum. On the other hand, the agent may safely skip some of the samples at the upright position as the last sample is very close to current one because the angular velocity is typically low. 5 DISCUSSION AND CONCLUSIONS We have proposed a framework for revealing the information structure of the observation space in a systematic manner. We have adopted a reinforcement learning approach which utilizes a cost function which increases with the accuracy of the observations. Our results uncover the relative usefulness of different types of observations and the trade-offs within; and provide insights for further design of active data acquisition schemes for autonomous decision making. Further discussion of our results and some research directions are as follows: • Our results illustrate that settings with the inaccurate observations and skipped observations should be treated differently since the type of uncertainty that the agent has to counteract in these settings are inherently different. • Strategies for processing of the noisy/skipped observations should be investigated. Questions such as the following arise: “ Should all the processing be off-loaded to the RL agent or should the pre-processing of observations be performed, similar to Kalman filtering in the case of linear control under linear state space models (Ljung, 1999)?”, “How does the answer to the former question depend on the RL approach, the environment and the observation models?” • Our results suggest that inherent sampling rate of some of the standard RL environments may be higher than needed (for instance, see the Mountain Car environment where on average one can skip one out of every two samples without affecting the performance), indicating yet-another reason why some of these environments are seen as unchallenging for most of the state-of-art RL algorithms. • We have provided a quantification of the sensitivity of the agent’s performance to noisy/skipped observations at different observation regions illustrating that this sensitivity can be quite different based on the observation region. Utilizing this information for supporting robust designs as well as preparing adversarial examples is an interesting line of future research. A APPENDIX A.1 EXAMPLE: WIRELESS COMMUNICATIONS We now provide a motivating example to illustrate how observations can have a cost that is increasing with the accuracy and the decision maker can choose this accuracy level. 
A standard model for single terminal wireless communications is the additive white Gaussian noise (AWGN) channel (Goldsmith, 2005; Cover & Thomas, 1991) yt = xt + vt (11) where xt represents the channel input (i.e. message at the transmitter ) at time t, yt represents the corresponding channel output (i.e. the observation at the receiver) and the white Gaussian random process vt represents the channel noise. The capacity of this channel, i.e. the maximum number of information bits that can be sent, is determined by the signal-to-noise ratio (SNR), i.e. the average power in xt divided by the average power in vt. In particular, the capacity is given by (Goldsmith, 2005; Cover & Thomas, 1991) C = log2(1 + Px Pv ) (12) where Px and Pv are the average power levels of xt and vt, respectively. Hence, the capacity increases with Px. On the other hand, one cannot use a very high value of Px since broadcasting at high power levels is costly. In particular, Px directly contributes to the actual power required by the transmitter. Note that Px controls the accuracy of the observations. In particular, by dividing both sides by √ Px, (11) can be equivalently represented as ȳt = x̄t + v̄t (13) where ȳt , 1√Px yt, x̄t , 1√ Px xt and v̄t , 1√Px vt. The average power of x̄t is 1 and average power of v̄t is Pv/Px. The SNR, and hence, the channel capacity are the same in (11) and (13) and hence these representations are equivalent for all relevant purposes. In particular, determining Px directly determines the effective noise level. With vt Gaussian, we have vt ∼ N (0, Pv). Hence, the conditional distribution of the observations ȳt is given by p(ȳt|x̄t) = N (x̄t, Pv/Px) where Pv/Px can be chosen as βt. Hence, as the accuracy of the observations increases (Pv/Px decreases ), the cost of the observations (Px) increases. In this context, several interesting questions that relates to the accuracy of the observations and the power cost can be posed, for instance how to distribute a certain total power budget Ptotal over channels yit = x i t + v i t with different intrinsic power levels Pvi . This example illustrates the basic premise of our problem setting in a practical scenario; a decision maker who can adjust the noise levels of the observations which has a cost associated with them. It also suggests that the constraints on the wireless communications constitute a general and potential hindrance in remote control applications. Consider a device that makes the observations and takes actions but gets its commands (i.e. decisions about which actions to take) from another decision unit, such as the control of a robot or a drone by a remotely run RL algorithm which is controlling a large number of such units. Here, it is beneficial to consider policies that can work with inaccurate observations since sending accurate measurements are costly from a power perspective, which will be particularly important for a device with a limited battery, such as a drone flying at a remote location. Similarly, if the wireless communication channel cannot be used at all times, for instance, due to the limited bandwidth available, RL methods that can utilize the limited communication resources efficiently and optimize performance under such conditions are needed. A.2 ENVIRONMENT PARAMETERS In this section, we provide the parameters for all the environments in the experiments that are used directly from OpenAI Gym. We also consider a vertical position version of MountainCarContinuousv0, which is explained in Section A.4. 
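One of the questions raised above, how to distribute a total power budget P_total over channels y_t^i = x_t^i + v_t^i with different noise powers P_{v^i}, has the classical water-filling solution when the objective is the sum of the per-channel capacities in (12). A small sketch (our own addition, not part of the paper) is given below.

```python
import numpy as np

def water_filling(noise_powers, P_total, iters=100):
    """Allocate P_total across parallel AWGN channels to maximize
    sum_i log2(1 + P_i / N_i); the optimum is P_i = max(0, mu - N_i)."""
    N = np.asarray(noise_powers, dtype=float)
    lo, hi = N.min(), N.max() + P_total          # bracket for the water level mu
    for _ in range(iters):                        # bisection on the water level
        mu = 0.5 * (lo + hi)
        if np.sum(np.maximum(mu - N, 0.0)) > P_total:
            hi = mu
        else:
            lo = mu
    P = np.maximum(mu - N, 0.0)
    return P, np.sum(np.log2(1.0 + P / N))

P, C = water_filling(noise_powers=[0.1, 0.5, 1.0, 2.0], P_total=2.0)
print("power allocation:", np.round(P, 3), " total capacity:", round(C, 3))
```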
Consider a generic environment with the observation variables oit, where o i t denotes the i th observation variable at time t. The limited-accuracy observations õtt are obtained using õit = o i t +Q i ×∆oit(βit) (14) where ∆oit ∼ U(−βt, βt). We choose Q1 = 0.1 and Q2 = 0.2 for the Pendulum-v0, Qi = 0.2 for the CartPole-v1, and Qi = 0.1 for the MountainCarContinuous-v0. The ordering of the observations is the same with the ones provided in OpenAI Gym (Brockman et al., 2016). For instance, for MountainCarContinuous-v0, position and velocity correspond to o1 and o2, respectively. Note that indices start with i = 0 in OpenAI Gym whereas here we start with i = 1. The reward function under Scenario A is given by r̃t = rt + κA × ( 1 n n∑ i=1 βit ) , (15) where rt is the original reward and κA > 0. For Scenario B, it is given by r̃t = rt + κB × g(β̄t) where g(β̄t) = −1 for β̄t = 0, and 0 otherwise. The associated κ values for different environments are presented in Table 2. The scaling factor Q’s for the noise levels and κ values for the reward function are determined empirically by first fixing Q (as a percentage of the full range of the associated observation) and searching for κ values that provide satisfactory performance in the original task. Note that the rest of the values are determined by the specifications of the environments in OpenAI Gym. The results depend on the values of Q and κ. For instance, using larger κ puts a larger weight on the reward due to noise. Hence, the agent prioritizes the reward due to noise instead of the reward from the original environment and, for large enough κ values, the agent cannot learn to perform the original task. A.3 TRPO PARAMETERS The same TRPO parameters are used in all experiments. These are provided in Table 3. A.4 MOUNTAIN CAR WITH OBSERVATIONS OF THE VERTICAL POSITION To have a better understanding of the effect of partial observability, we have investigated the following modification on MountainCarContinuous-v0: Instead of the horizontal position, the agent uses the vertical position as the observation. Hence, the observations are given by ỹt = yt +Qy ×∆yt(β1t ), (16a) ˜̇xt = ẋt +Qẋ ×∆ẋt(β2t ), (16b) where the vertical position yt ∈ [0.1, 1] is given by yt = 0.45 sin(3xt) + 0.55 (Brockman et al., 2016) and ∆yt(β1t ) ∼ U(−β1t , β1t ) and ∆ẋt(β2t ) ∼ U(−β2t , β2t ). Note that due to sin(·) function, for most of the yt values in the range [0.1, 1], there are two possible horizontal position (xt) values. Hence, this environment constitutes a POMDP even without any observation noise. Similar to our experiments with the original environment, Qy and Qẋ are set as the 0.1 times of the full range of the corresponding observation, i.e., Qx = 0.09 and Qẋ = 0.014. As before, the reward is calculated with (10) with κA = 5× 10−6. The average return due to the original task is 93, hence the agent again learns to perform the original task successfully, see Table 1 for comparison. The chosen noise levels are presented in Figure 5-6. Comparing these results with Figure 1-2 where the agent takes the horizontal position observation, we observe that the general trend of the velocity noise with respect to the velocity are the same in both settings, i.e. decreasing as the agent moves from the negative velocities to positive velocities. Comparing Figure 5 with Figure 1, we observe that lower relative noise levels are preferred for the setting with the vertical location observations. 
A.5 ADDITIONAL RESULTS - CART POLE We now provide the results for the cart pole environment in Figures 7-10, which were not included in the main text due to page limitations. For the sake of brevity, the noise levels over observation pairs are only provided for the position noise, whereas averages are provided for all observation types. [Figure panels: (a) Scenario A, average noise level vs. pole angular velocity for the cart position, cart velocity, pole angle, and pole angular velocity noise; (b) Scenario B, sample skip frequency vs. cart position and cart velocity; (c) Scenario B, sample skip frequency vs. pole angle and pole angular velocity.]
1. What are the research questions addressed in the paper? 2. What is the novel aspect of the proposed variation compared to standard reinforcement learning? 3. How did the authors model the problem, and what was the solution approach used? 4. What are the two scenarios studied in the paper, and what do they represent? 5. What are the concerns regarding the experiment conclusions, specifically related to the values chosen in Equations (9a-b) and (10)? 6. How did the authors handle partial observability in TRPO, and what policy was used? 7. Are there any suggestions for improving the paper, such as proofreading or providing more details on certain aspects?
Review
Review In contrast to standard reinforcement learning (RL), the paper investigates the variant where the observation made by the agent about its state has a cost. The authors propose to model the problem as a POMDP with an augmented action space (normal action + observation accuracy) and a new reward function that is defined as the original one penalized by the observation cost. They solve the problem with TRPO in three control domains: mountain car, pendulum, and cart pole. PROS I find the research questions asked in the paper interesting. The proposed variation seems to be novel as far as I know. Besides, two scenarios are studied in the paper, which correspond to two extreme cases: continuous vs discrete accuracy. CONS The conclusions of the experiments seems to depend on the specific values set notably in Equations (9a-b) and (10). I think a discussion is warranted about how they were chosen. Notably, can the same conclusions be drawn if those values are changed? I didn't find the information about the policy used in TRPO. Notably, how does it deal with the partial observability? The paper should be proof-read.
ICLR
Title Learning to Observe with Reinforcement Learning Abstract We consider a decision making problem where an autonomous agent decides on which actions to take based on the observations it collects from the environment. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity) and the dependence of this on the state of agent (such as at the bottom versus top of a hill). We approach this problem by associating a cost with collecting observations which increases with the accuracy. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and also the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting effectively smaller amount of data. By uncovering the relative usefulness of different types of observations and trade-offs within, these results also provide insights for further design of active data acquisition schemes. 1 INTRODUCTION Autonomous decision making relies on collecting data, i.e. observations, from the environment where the actions are decided based on the observations. We are interested in revealing the information structure of the observation space illustrating which type of observations are the most important (such as position versus velocity). 
Revealing this structure is challenging since the usefulness of the information that an observation can bring is a priori unknown and depends on the environment as well as the current knowledge state of the decision-maker, for instance, whether the agent is at the bottom versus the top of a hill and how sure the agent is about its position. Hence, we are interested in questions such as "Instead of collecting all available observations, is it possible to skip some observations and obtain satisfactory performance?" and "Which observation components (such as the position or the velocity) are the most useful when the object is far away from (or close to) the target state?". The primary aim of this work is to reveal this information structure of the observation space within a systematic framework. We approach this problem by associating a cost with collecting observations which increases with the accuracy. The agent can choose the accuracy level of its observations. Since the cost increases with the accuracy, we expect that the agent will choose to collect only the observations which are most likely to be informative and worth the cost. We adopt a reinforcement learning (RL) framework where the RL agent learns to adjust the accuracy of the observations alongside learning to perform the original task. We consider both the scenario where the accuracy can be adjusted continuously and the scenario where the agent has to choose between given preset levels, such as taking a sample perfectly or not taking a sample at all. In contrast to the existing work that mostly focuses on sample efficiency during training, our focus is on the behaviour during the actual task. Our results illustrate that the RL agent can learn to use the observation space efficiently and obtain satisfactory performance in the original task while collecting an effectively smaller amount of data.
2 RELATED WORK A related setting is active learning (Settles, 2010; Donmez et al., 2010), where an agent decides which queries to perform, i.e., which samples to take, during training. For instance, in an active learning set-up, an agent learning to classify images can decide which images from a large dataset it would like to have labels for in order to have improved classification performance. In a standard active learning approach (Settles, 2010; Donmez et al., 2010), as well as its extensions in RL (Lopes et al., 2009), the main aim is to reduce the size of the training set; hence the agent tries to determine informative queries during training so that the performance during the test phase is optimal. In the test phase, the agent cannot ask any questions; instead, it will answer questions, for instance, it will be given images to label. In contrast, in our setting the agent continues to perform queries during the test phase, since it still needs to collect observations during the test phase, for instance as in the case of collecting camera images for an autonomous driving application. From this perspective, one of our main aims is to reduce the number of queries the agent performs during this actual operation, as opposed to the number of queries in its training phase.
Another related line of work consists of the RL approaches that facilitate efficient exploration of the state space, such as curiosity-driven RL and intrinsic motivation (Pathak et al., 2017; Bellemare et al., 2016; Mohamed & Rezende, 2015; Still & Precup, 2012) or active-inference based methods utilizing free energy (Ueltzhöffer, 2018; Schwöbel et al., 2018); and the works that focus on operation with limited data using a model (Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016). In these works, the focus is either on finding informative samples (Pathak et al., 2017) or on using a limited number of samples/trials as efficiently as possible by making use of a forward dynamics model (Boedecker et al., 2014; Chua et al., 2018; Deisenroth & Rasmussen, 2011; Henaff et al., 2018; Gal et al., 2016) during the agent's training. In contrast to these approaches, we would like to decrease the effective size of the data or the number of samples taken during the test phase, i.e. the operation of the agent after the training phase is over. Representation learning for control and RL constitutes another line of related work (Watter et al., 2015; Hafner et al., 2019; Banijamali et al., 2018). In these works, the transformation of the observation space to a low-dimensional space is investigated so that action selection can be performed using this low-dimensional space. Similar to these works, our framework can also be interpreted as a transformation of the original observation space where an effectively low-dimensional space is sought after. Instead of allowing a general class of transformations on the observations, here we consider a constrained setting so that only specific operations are allowed; for instance, we allow dropping some of the samples but we do not allow collecting observations and then applying arbitrary transformations on them. Our work associates a cost with obtaining observations. The cost of data acquisition in the context of Markov decision processes (MDPs) has been considered in a number of works, either as a direct cost on the observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002) or as an indirect cost of information sharing in multiple-agent settings (Melo & Veloso, 2009; De Hauwere et al., 2010). Another related line of work is performed under the umbrella of configurable MDPs (Metelli et al., 2018; Silva et al., 2019), where the agent can modify the dynamics of the environment. Although in our setting it is the accuracy of the observations rather than the dynamics of the environment that the agent can modify, in some settings our work can also be interpreted as a configurable MDP. We further discuss this point in Section 4.2.
3 PROPOSED FRAMEWORK AND THE SOLUTION APPROACH
3.1 PRELIMINARIES Consider a Markov decision process given by $\langle \mathcal{S}, \mathcal{A}, \mathcal{P}, R, P_{s_0}, \gamma \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the set of actions, $\mathcal{P} : \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to \mathbb{R}$ denotes the transition probabilities, $R : \mathcal{S} \times \mathcal{A} \to \mathbb{R}$ denotes the bounded reward function, $P_{s_0} : \mathcal{S} \to \mathbb{R}$ denotes the probability distribution over the initial state and $\gamma \in (0, 1]$ is the discount factor. The agent, i.e. the decision maker, observes the state of the system $s_t$ at time $t$ and decides on its action $a_t$ based on its policy $\pi(s, a)$. The policy mapping of the agent, $\pi(s, a) : \mathcal{S} \times \mathcal{A} \to [0, 1]$, is possibly stochastic and gives the probability of taking the action $a$ at the state $s$. After the agent implements the action $a_t$, it receives a reward $r(s_t, a_t)$ and the environment moves to the next state $s_{t+1}$, which is governed by $\mathcal{P}$ and depends on $a_t$ and $s_t$.
The aim of the RL agent is to learn an optimal policy mapping $\pi(s, a)$ so that the expected return, i.e. the expected cumulative discounted reward, $J(\pi) = \mathbb{E}_{a_t \sim \pi,\, s_t \sim \mathcal{P}}\big[\sum_t \gamma^t r(s_t, a_t)\big]$, is maximized.
3.2 PARTIAL OBSERVABILITY Although most RL algorithms are typically expressed in terms of MDPs, in typical real-life applications the states are not directly observable, i.e., the observations only provide partial, possibly inaccurate information. For instance, consider a vehicle which uses noisy images with a limited angle-of-view, obtained from cameras mounted on the vehicle, for autonomous-driving decisions. In such scenarios, the data used by the agent to make decisions is not a direct representation of the state of the world. Hence, we consider a partially observable Markov decision process (POMDP) where the above MDP is augmented by $\mathcal{O}$ and $P_o$, where $\mathcal{O}$ represents the set of observations and $P_o : \mathcal{S} \to \mathcal{O}$ represents the observation probabilities. Accordingly, the policy mapping is now expressed as $\pi(o, a) : \mathcal{O} \times \mathcal{A} \to [0, 1]$. The observation vector at time $t$ is given by $o_t = [o_t^1; \ldots; o_t^n] \in \mathbb{R}^n$, where $n$ is the dimension of the observation vector. The observations are governed by
$o_t \sim p_o(o_t \mid s_t; \beta_t)$  (1)
where $p_o(o_t \mid s_t; \beta_t)$ denotes the conditional probability distribution function (pdf) of $o_t$ given $s_t$ and is parametrized by the accuracy vector
$\beta_t = [\beta_t^1; \ldots; \beta_t^n] \in \mathbb{R}^n$.  (2)
The parameter $\beta_t^i \ge 0$ parametrizes the average accuracy of the observation component $i$ at time step $t$, i.e. of $o_t^i$. For instance, say we have two observations, position $o^1$ and velocity $o^2$. Then, $\beta_t^1$ denotes the accuracy of the position and $\beta_t^2$ denotes the accuracy of the velocity. As $\beta_t^i$ increases, the accuracy of the observation $o_t^i$ decreases. Given $s_t$ and $\beta_t$, the observations are statistically independent, i.e. we have the factorization
$p_o(o_t \mid s_t; \beta_t) = \prod_{i=1,\ldots,n} p_{o^i}(o_t^i \mid s_t; \beta_t^i)$  (3)
where $p_{o^i}(o_t^i \mid s_t; \beta_t^i)$ denotes the conditional pdf of $o_t^i$ given $s_t$ and $\beta_t^i$. Note that $\beta_t^i$ determines the average accuracy, i.e. the accuracy in the statistical sense. We provide an example below.
Example: Consider the common Gaussian additive noise model with
$o_t^i = s_t^i + v_t^i, \quad i = 1, \ldots, n$,  (4)
where $s_t = [s_t^1; \ldots; s_t^n] \in \mathbb{R}^n$ is the state vector and $v_t = [v_t^1; \ldots; v_t^n] \in \mathbb{R}^n$ is the Gaussian noise vector with distribution $\mathcal{N}(0, \mathrm{diag}(\sigma^2_{v_t^i}))$. Here, $v_t$ and $v_{t'}$ are statistically independent (stat. ind.) for all $t \ne t'$, and $v_t$ and $s_{t'}$ are also stat. ind. for all $t, t'$. Under this observation model, a reasonable choice for $\beta_t^i$ is $\beta_t^i = \sigma^2_{v_t^i}$. Hence, we parametrize $p_o^i(\cdot)$ as $p_o^i(o_t^i \mid s_t^i; \beta_t^i) = \mathcal{N}(s_t^i, \beta_t^i = \sigma^2_{v_t^i})$. Note that the parametrization in terms of $\beta_t^i$ can be done in multiple ways; for instance, one may also adopt $\beta_t^i = \sigma_{v_t^i}$.
3.3 DECISION MAKER CHOOSES THE ACCURACY OF THE OBSERVATIONS The agent can choose $\beta_t^i$, hence $\beta_t^i$ is a decision variable. Observations have a cost which increases with increasing accuracy, i.e. the cost increases with decreasing $\beta_t^i$.
• In Scenario A, the agent can vary $\beta_t^i$ on a continuous scale, i.e. $\beta_t^i \in [0, \infty]$.
• In Scenario B, the agent chooses between i) collecting all the observations with a fixed level of accuracy or ii) not getting any of them at all. This setting corresponds to the case with $\beta_t = \bar\beta_t \mathbf{1}$, $\bar\beta_t \in \{\beta_F, \infty\}$, where $\mathbf{1} \in \mathbb{R}^n$ denotes the vector of ones. Here $\beta_F \ge 0$ represents a fixed accuracy level. Note that $\beta_F$ can be zero, corresponding to the case $o_t = s_t$.
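To make the observation model concrete, the following is a minimal sketch of the factorized model (1)-(4) under the Gaussian example, with $\beta_t^i$ parametrized as the per-component noise variance, together with the two sampling scenarios. The state values and accuracy levels below are illustrative, not taken from the paper.

```python
import numpy as np

def observe(state, beta, rng):
    """Sample o_t ~ p_o(o_t | s_t; beta_t) as in (3)-(4): independent Gaussian noise
    per component, with beta[i] interpreted as the noise variance of component i."""
    return state + rng.normal(0.0, np.sqrt(beta))

def observe_scenario_b(state, take_sample, beta_fixed, rng):
    """Scenario B: either observe all components at a fixed accuracy level
    (beta_bar = beta_fixed) or skip the sample entirely (beta_bar = infinity)."""
    if not take_sample:
        return None
    return observe(state, np.full_like(state, beta_fixed), rng)

rng = np.random.default_rng(0)
s = np.array([0.3, -0.05])                                   # e.g. [position, velocity]
print(observe(s, beta=np.array([0.01, 0.0]), rng=rng))       # noisy position, exact velocity
print(observe_scenario_b(s, take_sample=False, beta_fixed=0.0, rng=rng))  # skipped sample
```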
Remark 3.1 Our proposed setting can be interpreted as a constrained representation learning problem for RL. In particular, consider the problem of learning the best mapping $h(\cdot)$ with
$z_t = h(\bar o_t)$  (5)
from the high-dimensional original observations $\bar o_t$ to some new, possibly low-dimensional variables $z_t$, so that control can be performed reliably on $z_t$ instead of $\bar o_t$. Such settings have been utilized in various influential work; see for instance the E2C approach of Watter et al. (2015). The proposed approach can also be formulated in a representation framework. In particular, we interpret the possibly noisy observations $o_t$ as the effectively low-dimensional representation $z_t$ used in (5). Hence, consider the mapping $\bar h(\cdot)$,
$o_t = \bar h(\bar o_t)$,  (6)
where $o_t$ and $\bar o_t$ denote the noisy and the original measurements, respectively. Compared to (5), the family of mappings allowed in (6) is constrained, i.e. one can only adjust the accuracy parameter instead of using arbitrary transformations from $\bar o_t$ to $o_t$. Here, $o_t$ is effectively low-dimensional compared to $\bar o_t$ because i) noise decreases the dynamic range and allows effectively higher compression rates for the data (Scenario A); or ii) the total number of observations acquired is smaller (Scenario B). Note that not all transformations from $s_t$ to $o_t$ can be written using (6) as an intermediate step. From this perspective, the formulation in (1) can be said to be more general than (6).
3.3.1 MOTIVATION The primary motivation behind the proposed framework is to reveal the inherent nature of the observation space in terms of the usefulness of the information the observations provide with respect to the task at hand. The secondary motivation is to provide an RL framework for solving decision making problems when the observations have a cost associated with them. In regard to the first task, we note the following: to reveal this information structure, we associate an artificial cost with the observations that increases with the accuracy. Hence, only the observation components (or the observation vectors) which are most likely to be informative and worth the cost will be collected. This decision heavily depends on the state that the agent believes itself to be in. For instance, in the case of balancing an object at an unstable state (such as the pendulum in OpenAI Gym (Brockman et al., 2016)), we intuitively expect that the agent does not need accurate measurements when it is far away from the target state. Hence, we are interested in questions such as "Is it possible to skip some observations and obtain satisfactory performance?", "Which observation components (such as the position or the velocity) are most useful when the object is far away from (or close to) the target state?", and "How are these results affected by the possible discrepancy between the true state the agent is in and the one that it believes itself to be in due to noisy or skipped observations?". The proposed framework reveals this information structure within a systematic setting. In regard to the second task, we note that there are many practical problems where there is a cost associated with acquiring observations (Hansen, 1997; Zubek & Dietterich, 2000; 2002); for instance, consider the expensive medical tests (i.e. observations) that have to be performed to diagnose a certain disease (Zubek & Dietterich, 2002), and wireless communications, where there is a cost associated with channel usage (i.e. the right to use a communication channel) and a power cost that increases with the reliability of communications (Goldsmith, 2005; Cover & Thomas, 1991); see also Section A.1.
The proposed framework can be used to find efficient observation strategies in such problems and to quantify the possible performance degradation due to the observation cost. Examples: The proposed Scenarios A and B also correspond to practical data acquisition schemes. We now give some examples: An example for Scenario A is the case where the observations are obtained using different sensors on the device, where the accuracy of each sensor can be individually adjusted. Another example is the case where the sensors are distributed over the environment and the readings of the sensors have to be relayed to a central decision unit using individual compression of each observation type and wireless communications. Here, the compression and the wireless communication introduce an accuracy-cost trade-off, and the agent can choose to operate at different points of this trade-off. Please see Section A.1 for an example illustrating the accuracy-cost trade-off in wireless communications. An example for Scenario B is the remote control of a device, such as a drone, where all sensor readings of the device are compressed together and then sent to a decision unit. Since all readings are compressed and transmitted together, a decision of whether to transmit the whole observation vector or not has to be made, for instance due to limited power or wireless channel occupancy constraints.
3.4 REWARD SHAPING Reward shaping is a popular approach to direct RL agents towards a desired goal. Here, we want the agent not only to move towards the original goal (which is encouraged by the original reward $r$), we also want it to learn to control $\beta_t$. Hence, we propose reward shaping in the following form:
$\tilde r_t = f(r_t, \beta_t)$  (7)
where $r_t$ is the original reward, $\tilde r_t$ is the new, modified reward and $f(r_t, \beta_t)$ is a monotonically non-decreasing function of $r_t$ and $\beta_t^i$, $\forall i$. Hence, the agent not only tries to maximize the average of the original reward but also tries to maximize the "inaccuracy" of the measurements. This can be equivalently interpreted as minimizing the cost due to accurate measurements. In the case where there is a direct cost function $c^i(\cdot)$ that increases with the accuracy of the observation $o^i$ (see, for instance, the example in Section A.1 where transmission power can be interpreted as the direct cost), the following additive form can be used:
$\tilde r_t = r_t - \lambda \sum_{i=1}^{n} c^i(\beta_t^i)$,  (8)
where $c^i(\beta_t^i)$ is a non-increasing function of $\beta_t^i$ and $\lambda \ge 0$ is a weighting parameter. Hence, the agent's aim is to maximize the original reward as well as to minimize the cost of the observations.
4 EXPERIMENTS
4.1 SETTING Observation Models: We consider the following environments from OpenAI Gym (Brockman et al., 2016): MountainCarContinuous-v0, Pendulum-v0, CartPole-v1. In this section, we illustrate how the modified environment with noisy observations is obtained for MountainCarContinuous-v0. The details and the parameter values for the other environments can be found in Appendix A.2. We also consider a version of MountainCarContinuous-v0 with observations of the vertical position, which is presented in Section A.4. We first explain Scenario A, and then Scenario B. The original observations of the mountain car environment are the position $x_t$ and the velocity $\dot x_t$. In our framework, the agent has access to noisy versions of these original observations,
$\tilde x_t = x_t + Q_x \times \Delta x_t(\beta_t^1)$,  (9a)
$\tilde{\dot x}_t = \dot x_t + Q_{\dot x} \times \Delta \dot x_t(\beta_t^2)$,  (9b)
where $\Delta x_t(\beta_t^1) \sim U(-\beta_t^1, \beta_t^1)$, $\Delta \dot x_t(\beta_t^2) \sim U(-\beta_t^2, \beta_t^2)$, and $U(-\beta, \beta)$ denotes the uniform distribution over $[-\beta, \beta]$. A code sketch of this setup is given below.
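The following minimal sketch wraps MountainCarContinuous-v0 so that the agent's action is augmented with the accuracy components $\beta_t^1, \beta_t^2$, the observations are corrupted as in (9a)-(9b), and the reward is shaped additively in the spirit of Section 3.4. It is an illustrative sketch rather than the authors' implementation: it assumes the older 4-tuple Gym step API, and the noise ranges and the weight match the values reported for this environment in the text that follows.

```python
import gym
import numpy as np

Q_POS, Q_VEL = 0.18, 0.014   # noise ranges, 0.1 x the full range of position / velocity
KAPPA = 5e-6                 # weight on the inaccuracy bonus

class NoisyObsMountainCar(gym.Wrapper):
    """Scenario A: the agent picks the force and the accuracy levels beta in [0, 1]."""
    def step(self, augmented_action):
        force = augmented_action[:1]
        beta = np.clip(augmented_action[1:], 0.0, 1.0)
        obs, reward, done, info = self.env.step(force)
        # corrupt position and velocity with independent uniform noise as in (9a)-(9b)
        noise = np.array([Q_POS * np.random.uniform(-beta[0], beta[0]),
                          Q_VEL * np.random.uniform(-beta[1], beta[1])])
        # shaped reward: original reward plus a bonus for using inaccurate observations
        return obs + noise, reward + KAPPA * beta.mean(), done, info

env = NoisyObsMountainCar(gym.make("MountainCarContinuous-v0"))
```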
The noise variables are statistically independent; in particular, $\Delta x_t(\beta_t^1)$ and $\Delta \dot x_t(\beta_t^2)$ are stat. ind. from each other and also stat. ind. over time. Here, $Q_x$ and $Q_{\dot x}$ determine the ranges of the noise levels and they are set to 0.1 times the full range of the corresponding observation, i.e., $Q_x = 0.18$ and $Q_{\dot x} = 0.014$. Our agent chooses $\beta_t^i \in [0, 1]$ in addition to the original action of the environment, i.e. the force $a_t$ that would be exerted on the car. The original reward of the environment per step is given by $r_t = -0.1 \times a_t^2$. The reward is shaped using an additive model,
$\tilde r_t = r_t + \kappa_A \times \big( \tfrac{1}{n} \sum_{i=1}^{n} \beta_t^i \big)$,  (10)
where $n = 2$ and $\kappa_A > 0$ is chosen as $5 \times 10^{-6}$. The original environment also has a termination reward, which the agent gets when the car passes the target position at 0.45; this is also provided to our agent upon successful termination. In Scenario B, at each time instant we either have no observation or we obtain the original observation vector, i.e. $\tilde x_t = x_t$ and $\tilde{\dot x}_t = \dot x_t$. These cases correspond to $\bar\beta_t = \infty$ and $\bar\beta_t = 0$, respectively. The reward function is given as $\tilde r_t = r_t + \kappa_B \times g(\bar\beta_t)$, where $\kappa_B = 0.5$, and $g(\bar\beta_t) = -1$ for $\bar\beta_t = 0$ and $0$ otherwise. In the implementation, we have mapped $\infty$ to 1, i.e. the decision variable is $\bar\beta_t \in \{0, 1\}$; hence $\bar\beta_t = 1$ corresponds to not obtaining a sample in Scenario B.
RL algorithm: We adopt a deep RL setting, combining reinforcement learning with deep learning using the policy-based approach Trust Region Policy Optimization (TRPO) (Schulman et al., 2015; Hill et al., 2018). The parameters are kept constant for all experiments and are provided in Appendix A.3. For Scenario A, at each time step, the noisy observations obtained at that time step are fed to the algorithm as the observations. For Scenario B, the last acquired observation is fed to the algorithm as the observation at that time step.
Plots: Unless otherwise stated, all results are reported as averages (such as average cumulative rewards and average $\beta_t^i$) over 1000 episodes. For the plots, the observation space is mapped to a grid with uniform intervals. Averages are taken with respect to the number of visits to each given range of the observation state. For example, for Scenario A the average of $\beta_t^i$ when $\tilde x_t \in [-0.1, +0.1]$ is shown as one average value at the center 0. For Scenario B, we report the sample skip frequency, i.e. the number of times the agent decided not to acquire a new observation when the last observed state of the agent falls into a given interval; e.g., the average sample skip frequency for $\tilde x \in [-0.1, +0.1]$ is reported as one value at 0. In all 2-D plots, the color pink indicates there was no visit to that observation state.
4.2 OVERVIEW We benchmark our results against the performance of agents that use the original observations and are trained using the same RL algorithm. The resulting average cumulative rewards in terms of $r_t$ are presented in Table 1. We present the reward corresponding only to the original task so that we can evaluate the success of the agent in this task. These results illustrate that the agent can learn to adjust the accuracy level and still obtain successful performance. For the mountain car environment, all agents have the same average return, and for the others, the agents working with the noisy/skipped observations have a slightly weaker performance but still achieve the task of bringing/keeping the pendulum/pole in a vertical position in a reasonable number of time steps.
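Before turning to the learned strategies, here is a small companion sketch of the Scenario B data acquisition scheme described in Section 4.1: the extra action component is a binary take/skip decision, a skipped step re-uses the last acquired observation (matching how observations are fed to the RL algorithm above), and taking a sample incurs the penalty $\kappa_B$. The wrapper interface is illustrative only.

```python
import gym
import numpy as np

KAPPA_B = 0.5

class SampleSkipMountainCar(gym.Wrapper):
    """Scenario B: beta_bar = 1 means skip the sample, beta_bar = 0 means observe exactly."""
    def reset(self, **kwargs):
        self.last_obs = self.env.reset(**kwargs)
        return self.last_obs

    def step(self, augmented_action):
        force = augmented_action[:-1]
        skip = augmented_action[-1] > 0.5
        obs, reward, done, info = self.env.step(force)
        if skip:
            obs = self.last_obs          # no new observation is acquired
        else:
            self.last_obs = obs          # exact observation; pay the sampling cost
            reward -= KAPPA_B
        return obs, reward, done, info
```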
At first sight, it may be surprising that the agent can learn to perform these tasks satisfactorily even though we have not injected any memory into our algorithm, for instance when we only use the current noisy observations in Scenario A. On the other hand, note that in these environments the observations are either noisy versions of hidden states which govern the dynamics, or they are closely related to them. From the point of view of the agent that treats the noisy observations as the state, this can be interpreted as a configurable MDP (Metelli et al., 2018; Silva et al., 2019) where the agent controls the noise of the dynamics. Hence, the task of the agent can be interpreted as adjusting the noise level in the dynamics, which does not necessarily require the use of memory in the decision maker. We now focus on the data collection strategies chosen by the agent for the mountain car and pendulum environments. The results for the other environments are provided in the appendix.
4.3 MOUNTAIN CAR The chosen noise levels and the sample skip frequencies for the mountain car environment are presented in Figures 1-2. Note that in Figure 1c, we present the sample skip frequency with respect to the velocity and the position on the same plot, where the legend also gives the corresponding x-axis label. In the mountain car environment, the car starts randomly around position −0.5, and it has to first go in the reverse direction (corresponding to a negative velocity) to climb the hill located around position −1.25, in order to gain momentum, climb the hill at the right (corresponding to a positive velocity), and reach the target location 0.45, which is at the top of this hill. The results reflect some of the trade-offs in this strategy: Figure 1a shows that the noisiest observations in position and velocity (Scenario A) are preferred around −0.5 (where the car position is initialized), and the most accurate samples are taken when the car is around position −1.2. This is the position where the car has to make sure that it has reached the top of the left hill so that it has enough momentum to climb the right hill. Regarding the dependence of the noise level on the velocity, Figure 1b shows that accurate samples are preferred when the velocity has high positive values. We note that this is not the only viable observation strategy; there are multiple observation strategies that give approximately the same average return in the original task. These can be explored using different Q and κ values in our framework. Figure 1c shows that approximately half of the samples are dropped in Scenario B regardless of the observation state, suggesting a high inherent sampling rate in the environment. This difference in the behaviour with the noisy and skipped observations illustrates the fundamental difference between these frameworks. In the case of noisy observations, the agent has to discover that the observations are uncertain and counteract this uncertainty. On the other hand, when taking perfect observations is possible, as in Scenario B, the agent can internalize the exact environment dynamics (since the mountain car environment has no inherent noise in its observations) and determine its exact state using the previously observed state and its action. Comparing Figures 2a-2b with Figure 2c, we observe that in the case of noisy observations a larger part of the observation space is visited, which is partly due to the fact that the plots are drawn according to the observations acquired by the agent and not the true states.
Note that this does not affect the performance in the original task, as illustrated in Table 1.
4.4 PENDULUM The results for the pendulum are presented in Figures 3-4. Here, the task is to keep the pendulum at a vertical position, corresponding to an angle of 0. Figure 3a and Figure 4a show that observations with low position (i.e. angle) noise (Scenario A) are preferred when the pendulum is close to the vertical position and has relatively small angular velocity. On the other hand, when the samples can be completely skipped (Scenario B), the agent skips a large ratio of the samples in this region, as shown in Figure 3c and Figure 4c. Note that the agent spends most of the episode in this target region around the vertical position. Here, the agent prefers noiseless samples since a noisy sample may cause the control policy to choose a wild movement which might destabilize the pendulum. On the other hand, the agent may safely skip some of the samples at the upright position, as the last sample is very close to the current one because the angular velocity is typically low.
5 DISCUSSION AND CONCLUSIONS We have proposed a framework for revealing the information structure of the observation space in a systematic manner. We have adopted a reinforcement learning approach that utilizes a cost function which increases with the accuracy of the observations. Our results uncover the relative usefulness of different types of observations and the trade-offs within, and provide insights for the further design of active data acquisition schemes for autonomous decision making. Further discussion of our results and some research directions are as follows:
• Our results illustrate that settings with inaccurate observations and with skipped observations should be treated differently, since the type of uncertainty that the agent has to counteract in these settings is inherently different.
• Strategies for processing the noisy/skipped observations should be investigated. Questions such as the following arise: "Should all the processing be off-loaded to the RL agent, or should pre-processing of observations be performed, similar to Kalman filtering in the case of linear control under linear state space models (Ljung, 1999)?", "How does the answer to the former question depend on the RL approach, the environment and the observation models?"
• Our results suggest that the inherent sampling rate of some of the standard RL environments may be higher than needed (for instance, see the mountain car environment, where on average one can skip one out of every two samples without affecting the performance), indicating yet another reason why some of these environments are seen as unchallenging for most of the state-of-the-art RL algorithms.
• We have provided a quantification of the sensitivity of the agent's performance to noisy/skipped observations at different observation regions, illustrating that this sensitivity can be quite different based on the observation region. Utilizing this information for supporting robust designs as well as for preparing adversarial examples is an interesting line of future research.
A APPENDIX
A.1 EXAMPLE: WIRELESS COMMUNICATIONS We now provide a motivating example to illustrate how observations can have a cost that increases with the accuracy and how the decision maker can choose this accuracy level.
A standard model for single-terminal wireless communications is the additive white Gaussian noise (AWGN) channel (Goldsmith, 2005; Cover & Thomas, 1991)
$y_t = x_t + v_t$  (11)
where $x_t$ represents the channel input (i.e. the message at the transmitter) at time $t$, $y_t$ represents the corresponding channel output (i.e. the observation at the receiver) and the white Gaussian random process $v_t$ represents the channel noise. The capacity of this channel, i.e. the maximum number of information bits that can be sent, is determined by the signal-to-noise ratio (SNR), i.e. the average power in $x_t$ divided by the average power in $v_t$. In particular, the capacity is given by (Goldsmith, 2005; Cover & Thomas, 1991)
$C = \log_2\!\big(1 + \tfrac{P_x}{P_v}\big)$  (12)
where $P_x$ and $P_v$ are the average power levels of $x_t$ and $v_t$, respectively. Hence, the capacity increases with $P_x$. On the other hand, one cannot use a very high value of $P_x$ since broadcasting at high power levels is costly. In particular, $P_x$ directly contributes to the actual power required by the transmitter. Note that $P_x$ controls the accuracy of the observations. In particular, by dividing both sides by $\sqrt{P_x}$, (11) can be equivalently represented as
$\bar y_t = \bar x_t + \bar v_t$  (13)
where $\bar y_t \triangleq \tfrac{1}{\sqrt{P_x}} y_t$, $\bar x_t \triangleq \tfrac{1}{\sqrt{P_x}} x_t$ and $\bar v_t \triangleq \tfrac{1}{\sqrt{P_x}} v_t$. The average power of $\bar x_t$ is 1 and the average power of $\bar v_t$ is $P_v / P_x$. The SNR, and hence the channel capacity, are the same in (11) and (13), so these representations are equivalent for all relevant purposes. In particular, choosing $P_x$ directly determines the effective noise level. With $v_t$ Gaussian, we have $v_t \sim \mathcal{N}(0, P_v)$. Hence, the conditional distribution of the observations $\bar y_t$ is given by $p(\bar y_t \mid \bar x_t) = \mathcal{N}(\bar x_t, P_v / P_x)$, where $P_v / P_x$ can be chosen as $\beta_t$. Hence, as the accuracy of the observations increases ($P_v/P_x$ decreases), the cost of the observations ($P_x$) increases. In this context, several interesting questions that relate to the accuracy of the observations and the power cost can be posed, for instance how to distribute a certain total power budget $P_{\text{total}}$ over channels $y_t^i = x_t^i + v_t^i$ with different intrinsic power levels $P_{v^i}$. This example illustrates the basic premise of our problem setting in a practical scenario: a decision maker who can adjust the noise levels of observations that have a cost associated with them. It also suggests that the constraints on wireless communications constitute a general, potential hindrance in remote control applications. Consider a device that makes the observations and takes actions but gets its commands (i.e. decisions about which actions to take) from another decision unit, such as the control of a robot or a drone by a remotely run RL algorithm which is controlling a large number of such units. Here, it is beneficial to consider policies that can work with inaccurate observations, since sending accurate measurements is costly from a power perspective; this will be particularly important for a device with a limited battery, such as a drone flying at a remote location. Similarly, if the wireless communication channel cannot be used at all times, for instance due to the limited bandwidth available, RL methods that can utilize the limited communication resources efficiently and optimize performance under such conditions are needed.
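A quick numerical check of (11)-(13): rescaling the channel by $1/\sqrt{P_x}$ leaves the SNR, and hence the capacity, unchanged, while the effective observation noise variance becomes $P_v/P_x$, the quantity playing the role of $\beta$ above. The power values below are illustrative.

```python
import numpy as np

Px, Pv = 4.0, 1.0
capacity = np.log2(1 + Px / Pv)                    # (12), in bits per channel use

rng = np.random.default_rng(0)
x = np.sqrt(Px) * rng.standard_normal(100_000)     # channel input with average power Px
v = np.sqrt(Pv) * rng.standard_normal(100_000)     # channel noise with average power Pv
y = x + v                                          # (11)

x_bar, v_bar = x / np.sqrt(Px), v / np.sqrt(Px)    # rescaled representation (13)
print(capacity)                                    # ~2.32
print(np.var(x) / np.var(v), np.var(x_bar) / np.var(v_bar))  # both SNRs ~ Px/Pv = 4
print(np.var(v_bar))                               # effective noise variance ~ Pv/Px = 0.25
```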
A.2 ENVIRONMENT PARAMETERS In this section, we provide the parameters for all the environments in the experiments that are used directly from OpenAI Gym. We also consider a vertical-position version of MountainCarContinuous-v0, which is explained in Section A.4. Consider a generic environment with the observation variables $o_t^i$, where $o_t^i$ denotes the $i$-th observation variable at time $t$. The limited-accuracy observations $\tilde o_t^i$ are obtained using
$\tilde o_t^i = o_t^i + Q^i \times \Delta o_t^i(\beta_t^i)$  (14)
where $\Delta o_t^i(\beta_t^i) \sim U(-\beta_t^i, \beta_t^i)$. We choose $Q^1 = 0.1$ and $Q^2 = 0.2$ for Pendulum-v0, $Q^i = 0.2$ for CartPole-v1, and $Q^i = 0.1$ for MountainCarContinuous-v0. The ordering of the observations is the same as the one provided in OpenAI Gym (Brockman et al., 2016). For instance, for MountainCarContinuous-v0, position and velocity correspond to $o^1$ and $o^2$, respectively. Note that indices start with $i = 0$ in OpenAI Gym whereas here we start with $i = 1$. The reward function under Scenario A is given by
$\tilde r_t = r_t + \kappa_A \times \big( \tfrac{1}{n} \sum_{i=1}^{n} \beta_t^i \big)$,  (15)
where $r_t$ is the original reward and $\kappa_A > 0$. For Scenario B, it is given by $\tilde r_t = r_t + \kappa_B \times g(\bar\beta_t)$, where $g(\bar\beta_t) = -1$ for $\bar\beta_t = 0$, and $0$ otherwise. The associated $\kappa$ values for the different environments are presented in Table 2. The scaling factors $Q$ for the noise levels and the $\kappa$ values for the reward function are determined empirically by first fixing $Q$ (as a percentage of the full range of the associated observation) and then searching for $\kappa$ values that provide satisfactory performance in the original task. Note that the rest of the values are determined by the specifications of the environments in OpenAI Gym. The results depend on the values of $Q$ and $\kappa$. For instance, using a larger $\kappa$ puts a larger weight on the reward due to noise. Hence, the agent prioritizes the reward due to noise instead of the reward from the original environment and, for large enough $\kappa$ values, the agent cannot learn to perform the original task.
A.3 TRPO PARAMETERS The same TRPO parameters are used in all experiments. These are provided in Table 3.
A.4 MOUNTAIN CAR WITH OBSERVATIONS OF THE VERTICAL POSITION To have a better understanding of the effect of partial observability, we have investigated the following modification of MountainCarContinuous-v0: instead of the horizontal position, the agent uses the vertical position as the observation. Hence, the observations are given by
$\tilde y_t = y_t + Q_y \times \Delta y_t(\beta_t^1)$,  (16a)
$\tilde{\dot x}_t = \dot x_t + Q_{\dot x} \times \Delta \dot x_t(\beta_t^2)$,  (16b)
where the vertical position $y_t \in [0.1, 1]$ is given by $y_t = 0.45 \sin(3 x_t) + 0.55$ (Brockman et al., 2016), and $\Delta y_t(\beta_t^1) \sim U(-\beta_t^1, \beta_t^1)$ and $\Delta \dot x_t(\beta_t^2) \sim U(-\beta_t^2, \beta_t^2)$. Note that due to the $\sin(\cdot)$ function, for most of the $y_t$ values in the range $[0.1, 1]$ there are two possible horizontal position ($x_t$) values. Hence, this environment constitutes a POMDP even without any observation noise. Similar to our experiments with the original environment, $Q_y$ and $Q_{\dot x}$ are set to 0.1 times the full range of the corresponding observation, i.e., $Q_y = 0.09$ and $Q_{\dot x} = 0.014$. As before, the reward is calculated with (10) with $\kappa_A = 5 \times 10^{-6}$. The average return in the original task is 93, hence the agent again learns to perform the original task successfully; see Table 1 for comparison. The chosen noise levels are presented in Figures 5-6. Comparing these results with Figures 1-2, where the agent receives the horizontal position observation, we observe that the general trend of the velocity noise with respect to the velocity is the same in both settings, i.e. decreasing as the agent moves from negative to positive velocities. Comparing Figure 5 with Figure 1, we observe that lower relative noise levels are preferred in the setting with the vertical position observations.
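As a quick sanity check of the partial-observability claim above, the vertical position $y = 0.45 \sin(3x) + 0.55$ is not invertible over the mountain car position range, so two distinct horizontal positions can produce identical observations even without noise; the positions below are illustrative.

```python
import numpy as np

def vertical(x):
    return 0.45 * np.sin(3 * x) + 0.55

x1 = -1.0 / 3.0             # 3 * x1 = -1
x2 = (1.0 - np.pi) / 3.0    # 3 * x2 = 1 - pi, and sin(1 - pi) = -sin(pi - 1) = sin(-1)
print(vertical(x1), vertical(x2))   # both ~0.171: same observation, different states
```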
A.5 ADDITIONAL RESULTS - CART POLE We now provide the results for the cart pole environment in Figures 7-10, which were not included in the main text due to page limitations. For the sake of brevity, the noise levels over observation pairs are only provided for the position noise levels, whereas averages are provided for all observation types.
[Figure: cart pole results. (a) Scenario A, average noise level vs. pole angular velocity, shown separately for the cart position, cart velocity, pole angle, and pole angular velocity noise. (b) Scenario B, sample skip frequency vs. cart position and cart velocity. (c) Scenario B, sample skip frequency vs. pole angle and pole angular velocity.]
1. What is the main contribution of the paper regarding sample collection in deep reinforcement learning? 2. What are the strengths and weaknesses of the proposed method, particularly in its connection to intrinsic motivation? 3. How does the reviewer assess the experimental section, particularly regarding the choice of problems and comparisons with other works? 4. What suggestions does the reviewer provide for improving the paper, including the addition of deep RL problems and comparisons with representative baselines? 5. How does the reviewer's feedback change after the author's response, and what is their final decision regarding the paper's acceptance?
Review
Review This paper aims at studying an optimized way of collecting samples from an environment, discarding the ones for which the accuracy of the observation is high. This way the agent focuses on collecting only the samples that improve the knowledge of the state space. This paper could be presented better, as the motivations of the work and the description of the method lack clarity and effectiveness. First, the title is somewhat misleading, as we cannot say that the agent is "learning to observe", which suggests something more related to feature extraction in representation learning. Indeed, the agent is learning to explore states under a certain criterion, i.e. minimizing the accuracy of the observation, closely resembling the literature on intrinsically motivated exploration, which in this paper is only cited in the related work. After all, it seems to me that this paper is exclusively proposing a form of intrinsic reward, but it fails to explain it thoroughly. In particular, only a small subsection, namely 3.4, is dedicated to this description, moreover referring to "reward shaping", which is not the same concept as intrinsic motivation. The experimental section is weak, as it only analyses two simple RL problems and, more problematically, does not compare with any method in the literature.
Pros The paper addresses an interesting problem that can potentially improve sample efficiency in deep RL problems.
Cons Poor description of the methodology, in particular of its connection with intrinsic motivation; no deep RL problems considered; no comparisons with methods in the literature.
I recommend that the authors substantially restructure the paper to include a better analysis of how their method compares with intrinsic motivation, include deep RL problems where the problem of exploration and accuracy of observations is more accentuated, and add comparisons with representative baselines, e.g. Pathak et al. (2017), Bellemare et al. (2016), etc.
Post-rebuttal feedback I thank the authors for their reply. The authors write: "In contrast, our paper focuses on the following question: 'how can we reduce the number/accuracy of the samples the agent takes during the test phase?' (Here, the test phase corresponds to the agent's behaviour after the training is completed.)" I agree with the authors that intrinsic motivation is different, and perhaps in my review I expressed this concern too strongly. So I thank the authors for their long and informative answer. The authors also write: "We believe that the reviewer refers to the problems where possibly large multi-dimensional data such as images in games are used as input to the RL algorithm." Exactly. Experiments on high-dimensional problems would make the contribution of this paper stronger, considering the rather limited theoretical/methodological impact that it has now. I strongly suggest that the authors work in this direction, perhaps on robotic applications if possible. After the rebuttal, I still argue for rejection, although I increase my score from 3 to 4.
ICLR
Title Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories
Abstract Natural agents can effectively learn from multiple data sources that differ in size, quality, and types of measurements. We study this heterogeneity in the context of offline reinforcement learning (RL) by introducing a new, practically motivated semi-supervised setting. Here, an agent has access to two sets of trajectories: labelled trajectories containing state, action, reward triplets at every timestep, along with unlabelled trajectories that contain only state and reward information. For this setting, we develop a simple meta-algorithmic pipeline that learns an inverse-dynamics model on the labelled data to obtain proxy-labels for the unlabelled data, followed by the use of any offline RL algorithm on the true and proxy-labelled trajectories. Empirically, we find this simple pipeline to be highly successful — on several D4RL benchmarks (Fu et al., 2020), certain offline RL algorithms can match the performance of variants trained on a fully labelled dataset even when we label only 10% of trajectories from the low return regime. Finally, we perform a large-scale controlled empirical study investigating the interplay of data-centric properties of the labelled and unlabelled datasets with algorithmic design choices (e.g., inverse dynamics, offline RL algorithm) to identify general trends and best practices for training RL agents on semi-supervised offline datasets.
1 INTRODUCTION One of the key challenges with deploying reinforcement learning (RL) agents is their prohibitive sample complexity for real-world applications. Offline reinforcement learning can significantly reduce the sample complexity by exploiting logged demonstrations from auxiliary data sources (Levine et al., 2020). However, contrary to curated benchmarks in use today, the nature of offline demonstrations in the real world can be highly varied. For example, the demonstrations could be misaligned due to frequency mismatch (Burns et al., 2022) or the use of different sensors, actuators, or dynamics (Reed et al., 2022; Lee et al., 2022), or could lack partial state (Ghosh et al., 2022; Rafailov et al., 2021; Mazoure et al., 2021) or reward information (Yu et al., 2022). Successful offline RL in the real world requires embracing these heterogeneous aspects for maximal data efficiency, similar to learning in humans. In this work, we propose a new semi-supervised setup for offline RL. Standard offline RL assumes trajectories to be sequences of observations, actions, and rewards. However, many data sources, such as videos or third-person demonstrations, lack direct access to actions. Hence, we propose a semi-supervised setup, where an agent's offline dataset also consists of action-unlabelled trajectories in addition to the aforementioned (action-labelled) trajectories. Standard offline RL algorithms, such as Conservative Q Learning (CQL; Kumar et al. (2020)) or Decision Transformer (DT; Chen et al. (2021)), cannot directly operate on such unlabelled trajectories. At the same time, naively throwing out the unlabelled trajectories can be wasteful, especially when they have high returns. Our goal in this work is to enable compute- and data-efficient learning with additional action-unlabelled trajectory logs. Unlike traditional semi-supervised learning, our setup has a few key differences. First, we do not assume that the distributions of the labelled and unlabelled trajectories are necessarily identical.
In realistic scenarios, we expect these to be different, with the unlabelled data having higher returns than the labelled data; e.g., videos of a human professional are easier to obtain than installing actuators for continuous control tasks. We replicate such varied data quality setups in some of our experiments; Figure 1.1 shows an illustration of the difference in returns between the labelled and unlabelled dataset splits for the hopper-medium-expert D4RL dataset. Second, our end goal goes beyond labeling the actions in the unlabelled trajectories: rather, we intend to use the unlabelled data for learning a downstream policy that is better than the behavioral policies used for generating the offline datasets. Hence, there are two kinds of generalization challenges: generalizing from the labelled to the unlabelled data distribution and then going beyond the offline data distributions to get closer to the expert distribution. Regular offline RL is concerned only with the latter. Finally, we are mainly interested in the case where a significant majority of the trajectories in the offline dataset are unlabelled. One motivating example for this setup is learning from videos or third-person demos. There are tremendous amounts of internet videos that could potentially be used to train RL agents, yet they lack action labels and are of varying quality. Our paper seeks to answer the following questions: 1. How can we utilize the unlabelled data for improving the performance of offline RL algorithms? 2. How does our performance vary as a function of data-centric properties, such as the size and return distributions of labelled and unlabelled datasets? 3. How do offline RL algorithms compare in this setup? To answer these questions, we propose a meta-algorithmic pipeline to train policies in the semi-supervised setup described above. We call our pipeline Semi-Supervised Offline Reinforcement Learning (SS-ORL). SS-ORL contains three simple and scalable steps: (1) train a multi-transition inverse dynamics model on labelled data, which predicts actions based on transition sequences, (2) fill in proxy-actions for unlabelled data, and finally (3) train an offline RL agent on the combined dataset. Empirically, we instantiate SS-ORL with CQL (Kumar et al., 2020), DT (Chen et al., 2021), and TD3BC (Fujimoto & Gu, 2021) as the underlying offline RL algorithms, and conduct experiments on the D4RL datasets (Fu et al., 2020). We highlight a few predominant trends from our experimental findings below: 1. Given low-quality labelled data, SS-ORL agents can exploit unlabelled data that contains high-quality trajectories and thus improve performance. The absolute performance of SS-ORL is close to or even matches that of the oracle agents, which have access to complete action information. 2. When the labelled data quality is high, utilizing unlabelled data does not bring significant benefits. 3. The choice of value-based vs. behavior cloning based methods can significantly affect performance in the semi-supervised setup. In our experiments, CQL and TD3BC are less sensitive to the missing actions compared to DT. They enjoy better absolute performance when the labelled data is of low quality, and their performance gap relative to the oracle agent is also smaller. See Appendix H for more details.
2 RELATED WORK Offline RL The goal of offline RL is to learn effective policies from fixed datasets which are generated by unknown behavior policies.
There are two main categories of model-free offline RL methods: value-based methods and behavior cloning (BC) based methods. Value-based methods attempt to learn the value functions based on temporal difference (TD) updates. There is a line of work that aims to port existing off-policy value-based online RL methods to the offline setting, with various types of additional regularization components that encourage the learned policy to stay close to the behavior policy. Several representative techniques include specifically tailored policy parameterizations (Fujimoto et al., 2019; Ghasemipour et al., 2021), divergence-based regularization on the learned policy (Wu et al., 2019; Jaques et al., 2019; Kumar et al., 2019), and regularized value function estimation (Nachum et al., 2019; Kumar et al., 2020; Kostrikov et al., 2021a; Fujimoto & Gu, 2021; Kostrikov et al., 2021b). Recently, a growing body of work has tried to formulate offline RL as a supervised learning problem (Chen et al., 2021; Janner et al., 2021; Emmons et al., 2021). Compared with the value-based methods, these methods enjoy several appealing properties including algorithmic simplicity and training stability. Generally speaking, these approaches can be viewed as conditional behavior cloning methods (Bain & Sammut, 1995), where the conditioning parameters are related information such as goals or rewards. Similar to value-based methods, these can be extended to the online setup as well (Zheng et al., 2022) and demonstrate excellent performance in hybrid setups involving both offline data and online interactions.
Semi-supervised Learning Semi-supervised learning (SSL) is a sub-area of machine learning that studies approaches to train predictors from a small amount of labelled data combined with a large amount of unlabelled data. In supervised learning, predictors only learn from labelled data. However, labelled training examples often require human annotation efforts and are thus hard to obtain, whereas unlabelled data can be comparatively easy to collect. The research on semi-supervised learning spans several decades. One of the oldest SSL techniques, self-training, was originally proposed in the 1960s (Fralick, 1967). There, a predictor is first trained on the labelled data. Then, at each training round, according to certain selection criteria such as model uncertainty, a portion of the unlabelled data is annotated by the predictor and added into the training set for the next round. We refer the readers to Zhu (2005); Chapelle et al. (2006); Ouali et al. (2020); Van Engelen & Hoos (2020) for comprehensive literature surveys.
Imitation Learning from Observations There have been several works in imitation learning (IL) which do not assume access to the full set of actions, such as BCO (Torabi et al., 2018a), MoBILE (Kidambi et al., 2021), GAIfO (Torabi et al., 2018b) or third-person IL approaches (Stadie et al., 2017; Sharma et al., 2019). The recent work of Baker et al. (2022) also considered a setup where a small number of labelled actions are available in addition to a large unlabelled dataset. A key difference between our work and these is that the IL setup typically assumes that all trajectories are generated by an expert, unlike our offline setup. Further, some of these methods even permit reward-free interactions with the environment, which is not possible in the offline setup.
Learning from Videos Closely related to IL from observations, several works (Schmeckpeper et al., 2020b;a) consider training agents with human video demonstrations, which are without action annotations. Distinct from our setup, in those works the offline observational data (videos) are from a different embodiment. Moreover, the agents can interact with the environment, and can sometimes even collect reward information.
3 SEMI-SUPERVISED OFFLINE REINFORCEMENT LEARNING
Preliminaries We model our environment as a Markov decision process (MDP) (Bellman, 1957) denoted by $\langle \mathcal{S}, \mathcal{A}, p, P, R, \gamma \rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $p(s_1)$ is the distribution of the initial state, $P(s_{t+1} \mid s_t, a_t)$ is the transition probability distribution, $R(s_t, a_t)$ is the deterministic reward function, and $\gamma$ is the discount factor. At each timestep $t$, the agent observes a state $s_t \in \mathcal{S}$ and executes an action $a_t \in \mathcal{A}$. As a response, the environment moves the agent to the next state $s_{t+1} \sim P(\cdot \mid s_t, a_t)$, and also returns the agent a reward $r_t = R(s_t, a_t)$.
3.1 PROPOSED SETUP We assume the agent has access to a static offline dataset $\mathcal{T}_{\text{offline}}$. The dataset consists of trajectories collected by certain unknown policies, which are not necessarily optimal. Let $\tau$ denote a trajectory and $|\tau|$ denote its length. We assume that all the trajectories in $\mathcal{T}_{\text{offline}}$ contain complete rewards and states. However, only a small subset of them contain action labels, while most of the trajectories are missing actions. We are interested in learning a policy by leveraging the offline dataset without interacting with the environment. This setup is analogous to semi-supervised learning, where actions serve the role of labels. Hence, we also refer to the complete trajectories as labelled data (denoted by $\mathcal{T}_{\text{labelled}}$) and the action-free trajectories as unlabelled data (denoted by $\mathcal{T}_{\text{unlabelled}}$). Further, we assume the labelled data are sampled from a distribution $P_{\text{labelled}}$ and the unlabelled data are sampled from $P_{\text{unlabelled}}$. In general, the two distributions can be different. Practically, one case we are particularly interested in is when $P_{\text{labelled}}$ generates low-to-moderate quality trajectories, whereas $P_{\text{unlabelled}}$ generates trajectories of diverse qualities, including ones with high returns. Our setup shares some similarities with state-only imitation learning (Ijspeert et al., 2002; Bentivegna et al., 2002; Torabi et al., 2019) in the use of action-unlabelled trajectories. However, there are also some key differences. In state-only IL, the unlabelled demonstrations are from the same distribution as the labelled demonstrations and correspond to a near-optimal expert policy. In our setting, both $P_{\text{labelled}}$ and $P_{\text{unlabelled}}$ can be different from each other and also from the expert policy.
Algorithm 1: Semi-supervised offline RL (SS-ORL)
1 Input: trajectories $\mathcal{T}_{\text{labelled}}$ and $\mathcal{T}_{\text{unlabelled}}$, IDM transition size $k$, offline RL method ORL
// train a stochastic multi-transition IDM using the labelled data
2 $\hat\theta \leftarrow \arg\min_\theta \mathbb{E}_{a_t, s_{t-k:t+k+1} \sim \mathcal{T}_{\text{labelled}}}\,[-\log \phi_\theta(a_t \mid s_{t-k:t+k+1})]$
// fill in the proxy actions for the unlabelled data
3 $\mathcal{T}_{\text{proxy}} \leftarrow \emptyset$
4 for each trajectory $\tau \in \mathcal{T}_{\text{unlabelled}}$ do
5   $\hat a_t \leftarrow$ mean of $\mathcal{N}\big(\mu_{\hat\theta}(s_{t-k:t+k+1}), \Sigma_{\hat\theta}(s_{t-k:t+k+1})\big)$, $t = 1, \ldots, |\tau|$
6   $\tau_{\text{proxy}} \leftarrow \tau$ with proxy actions $\{\hat a_t\}_{t=1}^{|\tau|}$ filled in
7   $\mathcal{T}_{\text{proxy}} \leftarrow \mathcal{T}_{\text{proxy}} \cup \{\tau_{\text{proxy}}\}$
// train an offline RL agent using the combined data
8 $\pi \leftarrow$ policy obtained by training ORL using dataset $\mathcal{T}_{\text{labelled}} \cup \mathcal{T}_{\text{proxy}}$
9 Output: $\pi$
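A minimal Python sketch of Algorithm 1 follows. The Trajectory container and the `idm` / `offline_rl` interfaces (`fit`, `predict_mean`, `train`) are assumptions made for illustration; only the three-step structure — fit an IDM on the labelled data, proxy-label the unlabelled data with the mean predicted action, and train any offline RL method on the union — is taken from the algorithm.

```python
from dataclasses import dataclass, replace
from typing import Optional
import numpy as np

@dataclass
class Trajectory:
    states: np.ndarray                     # shape (T, state_dim)
    rewards: np.ndarray                    # shape (T,)
    actions: Optional[np.ndarray] = None   # None for action-free trajectories

def ss_orl(labelled, unlabelled, idm, offline_rl, k=1):
    """Semi-supervised offline RL (Algorithm 1), as a high-level sketch."""
    # Step 1: train the stochastic multi-transition IDM on the labelled trajectories.
    idm.fit(labelled, transition_size=k)
    # Step 2: fill in proxy actions using the mean of the predicted Gaussian.
    proxy = [replace(traj, actions=idm.predict_mean(traj.states))
             for traj in unlabelled]
    # Step 3: train any offline RL method (DT, CQL, TD3BC, ...) on the combined data.
    return offline_rl.train(labelled + proxy)
```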
Further, many state-only imitation learning algorithms (e.g., Gupta et al. (2017); Torabi et al. (2018a;b); Liu et al. (2018); Sermanet et al. (2018)), similar to their original counterparts (e.g., Ho & Ermon (2016); Kim et al. (2020)), permit (reward-free) interactions with the environments. This is not possible in our proposed offline semi-supervised setup, where the agents are only provided with $\mathcal{T}_{\text{labelled}}$ and $\mathcal{T}_{\text{unlabelled}}$.
3.2 TRAINING PIPELINE RL policies trained on low to moderate quality offline trajectories are often sub-optimal, as many of the trajectories might not have high return and only cover a limited part of the state space. Our goal is to find a way to combine the action-labelled trajectories and the unlabelled, action-free trajectories, so that the offline agent can exploit structure in the unlabelled data to improve performance. One natural strategy is to fill in proxy actions for those unlabelled trajectories, and use the annotated data together with the labelled data as a whole to train an offline RL agent. Since we assume both the labelled and unlabelled trajectories contain the states, we can train an inverse dynamics model (IDM) $\phi$ that predicts actions using the states. Once we obtain the IDM, we use it to generate the proxy actions for the unlabelled trajectories. Finally, we combine those proxy-labelled trajectories with the labelled trajectories, and train an agent using the offline RL algorithm of choice. In particular, we propose a stochastic multi-transition IDM (see Section 3.3), which is favored by our experiments. Our meta-algorithmic pipeline is summarized in Algorithm 1.
Remarks. The annotation process, which involves training an IDM on the labelled data and generating proxy actions for the unlabelled trajectories, is similar to one step of self-training (Fralick, 1967). A key difference is that in self-training, the predictor is trained in multiple rounds. Once an initial predictor is trained, it is used for obtaining annotations on the unlabelled dataset. Then, a subset of annotated data is selected according to certain criteria and added into the training set for the next round. As opposed to self-training, we do not retrain the IDM but directly move to the next stage, where we train the agent using the combined data. There are a few reasons why we do not employ self-training for the IDM. First, it is computationally expensive to execute multiple rounds of training. More importantly, our end goal is to obtain a downstream policy with improved performance via utilizing the proxy-labelled data. One commonly used data selection criterion for self-training is based on model uncertainty. There, one adds the proxy-labelled data with sufficiently low predictive uncertainty into the training set for the next round. However, we empirically found that such an uncertainty-based augmentation strategy did not improve the performance of SS-ORL agents. See Section 4.3 and Appendix F for the experiment details.
3.3 STOCHASTIC MULTI-TRANSITION INVERSE DYNAMICS MODEL In past work (Pathak et al., 2017), the IDM typically maps two subsequent states $(s_t, s_{t+1})$ to $a_t$. We introduce a multi-transition IDM that predicts $a_t$ using transitions both before and after timestep $t$, which we found works better empirically. More precisely, our inverse dynamics model predicts $a_t$ using $2k + 1$ transitions: the current transition $(s_t, s_{t+1})$, the previous $k$ transitions that lead to $s_t$, and the next $k$ transitions starting from $s_{t+1}$. We call $k$ the transition size parameter. Let $s_{t-k:t+k+1}$ denote the sequence $s_{\max(0,\,t-k)}, \ldots, s_t, s_{t+1}, \ldots, s_{\min(|\tau|,\,t+k+1)}$. Specifically, we model the distribution of $a_t$ as a multivariate Gaussian with a diagonal covariance matrix:
$a_t \sim \mathcal{N}\big(\mu_\theta(s_{t-k:t+k+1}),\, \Sigma_\theta(s_{t-k:t+k+1})\big)$.  (1)
Let $\phi_\theta(a_t \mid s_{t-k:t+k+1})$ be the probability density function of $\mathcal{N}\big(\mu_\theta(s_{t-k:t+k+1}), \Sigma_\theta(s_{t-k:t+k+1})\big)$. Given the labelled trajectories $\mathcal{T}_{\text{labelled}}$, we minimize the negative log-likelihood loss $\mathbb{E}_{a_t, s_{t-k:t+k+1} \sim \mathcal{T}_{\text{labelled}}}\,[-\log \phi_\theta(a_t \mid s_{t-k:t+k+1})]$.
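The following PyTorch sketch shows one way to implement this model and its training loss: two MLPs map the flattened window $s_{t-k:t+k+1}$ to the mean and the (diagonal, log-parameterized) covariance of a Gaussian over $a_t$, and the model is trained by minimizing the negative log-likelihood. The two-hidden-layer, 1024-unit MLPs follow the experimental description later in the paper; the log-variance parameterization is an illustrative choice rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn

class MultiTransitionIDM(nn.Module):
    def __init__(self, state_dim, action_dim, k=1, hidden=1024):
        super().__init__()
        in_dim = (2 * k + 2) * state_dim    # the window s_{t-k}, ..., s_{t+k+1}
        def mlp(out_dim):
            return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
        self.mean_net, self.logvar_net = mlp(action_dim), mlp(action_dim)

    def forward(self, state_window):        # (batch, (2k+2) * state_dim)
        mean = self.mean_net(state_window)
        std = torch.exp(0.5 * self.logvar_net(state_window))
        return torch.distributions.Normal(mean, std)   # diagonal Gaussian over a_t

    def nll_loss(self, state_window, actions):
        # negative log-likelihood of the labelled actions, averaged over the batch
        return -self.forward(state_window).log_prob(actions).sum(dim=-1).mean()
```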
Note that the standard IDM, which predicts $a_t$ from $(s_t, s_{t+1})$ under the $\ell_2$ loss, is a special case subsumed by our model: it is equivalent to the case $k = 0$ with the diagonal entries of $\Sigma_\theta$ (i.e., the variances of each action dimension) all being equal. Choosing $k > 0$ allows us to account for non-Markovian behaviour policies and partially observable MDPs (POMDPs); see Appendix E.1. For all the experiments in this paper, we use $k = 1$. We ablate this design choice in Section 4.3.
4 EXPERIMENTS Our experiments aim to answer three primary questions: 1. Can SS-ORL closely track or even match the performance of fully supervised offline reinforcement learning, when only a small subset of trajectories are labelled? 2. How does the performance of SS-ORL vary as a function of the size and qualities of the labelled and unlabelled datasets? 3. Do different offline RL methods respond differently under varying setups of data size and qualities? To answer these questions, we focus on three Gym locomotion tasks, hopper, walker, and halfcheetah, and we use the v2 medium-expert, medium and medium-replay datasets from the D4RL benchmark (Fu et al., 2020). We address the first question in Section 4.1 and the other two in Section 4.2. Finally, we discuss the design choices for SS-ORL in Section 4.3.
4.1 BENCHMARKING Data Setup For a given offline dataset, we subsample 10% of the total trajectories from the dataset, whose returns are from the bottom $q\%$, $10 \le q \le 100$. We keep the actions for those trajectories, and discard the actions for the rest. We call this setup the coupled setup, since $P_{\text{labelled}}$ and $P_{\text{unlabelled}}$ will change simultaneously when we vary the value of $q$. When $q = 100$, we are uniformly sampling the trajectories and we have $P_{\text{labelled}} = P_{\text{unlabelled}}$. Under this setup, we always have 10% of the trajectories labelled and 90% unlabelled, and the total amount of data used later for training the offline RL agent is the original offline dataset size. This allows us to easily compare our results with results under the standard, fully labelled setup. In Section 4.2, we shall decouple the distributions $P_{\text{labelled}}$ and $P_{\text{unlabelled}}$ for a thorough understanding of their individual influences.
Inverse Dynamics Model We train an IDM as described in Section 3 with parameter $k = 1$. In other words, the IDM predicts $a_t$ using 3 consecutive transitions: $(s_{t-1}, s_t, s_{t+1}, s_{t+2})$. The mean and the covariance matrix are predicted by two independent multilayer perceptrons (MLPs), each containing two hidden layers and 1024 hidden units per layer. To prevent overfitting, we randomly sample 10% of the labelled trajectories as the validation set, and use the IDM that yields the best validation error within 100k training iterations.
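For concreteness, a sketch of the coupled split described in the Data Setup paragraph above, reusing the Trajectory container from the SS-ORL sketch: a fraction of the trajectories is sampled from the bottom q% by return and keeps its actions, while the actions of all remaining trajectories are dropped. The helper name and interface are illustrative.

```python
from dataclasses import replace
import numpy as np

def coupled_split(trajectories, q, frac_labelled=0.10, seed=0):
    """Keep actions for `frac_labelled` of trajectories drawn from the bottom q% by return;
    drop the actions of everything else."""
    rng = np.random.default_rng(seed)
    returns = np.array([traj.rewards.sum() for traj in trajectories])
    bottom = np.argsort(returns)[: max(1, int(len(trajectories) * q / 100))]
    labelled_idx = set(rng.choice(bottom, size=int(len(trajectories) * frac_labelled),
                                  replace=False).tolist())
    labelled = [trajectories[i] for i in sorted(labelled_idx)]
    unlabelled = [replace(trajectories[i], actions=None)
                  for i in range(len(trajectories)) if i not in labelled_idx]
    return labelled, unlabelled
```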
Offline RL Methods We instantiate Algorithm 1 with DT, CQL, and TD3BC (Fujimoto & Gu, 2021) and test their performances. Among these methods, DT is a recently proposed conditional behavior cloning (BC) method that uses sequence modeling tools to model the trajectories; CQL is a representative value-based offline RL method; and TD3BC is a hybrid method that adds a BC term to regularize the Q-learning updates. We refer to those instantiations as SS-DT, SS-CQL, and SS-TD3BC, respectively. We defer the implementation details to Appendix A.
Results We compare the performance of those SS-ORL agents with corresponding baseline and oracle agents. The baseline agents are trained on the labelled trajectories only, and the oracle agents are trained on the full offline dataset with action labels. Intuitively, the performances of the baseline and the oracle agents can be considered as the (estimated) lower and upper bounds for the performance of the SS-ORL agents. For each method, we train 5 instances under different seeds, and for each instance we run 30 evaluation trajectories. We report the average return and the standard deviation after 200k iterations. (Due to the space limit, the results on the medium and medium-replay datasets are deferred to Appendix C.) Figure 4.1 plots the results on the medium-expert datasets. For all three environments and all three offline RL methods, the SS-ORL agents improve upon the baselines. Remarkably, even when the labelled data quality is low, the SS-ORL agents are able to obtain decent returns. For example, when q = 10, i.e., the labelled trajectories are the bottom 10% trajectories, the average return obtained by SS-TD3BC is 0.93, 0.91, and 0.79 for hopper, walker, and halfcheetah. On average, this is 87.4% relative to the oracle performance (1.02, 1.1, and 0.89). As the value of q increases, the labelled data quality increases and the distributions Plabelled and Punlabelled get closer. The performance of the SS-ORL agents also keeps increasing and finally matches the performance of the oracle agents. Similar observations can be found in the results on the medium and medium-replay datasets; see Figures C.1 and C.2. We found relatively suboptimal results for DT on halfcheetah in all cases, consistent with prior results in Zheng et al. (2022).
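For reference, the coupled labelled/unlabelled split used in the benchmarking above can be sketched as follows; the trajectory dictionary format and the helper name are our own illustrative assumptions.

```python
import numpy as np

def coupled_split(trajectories, q, labelled_frac=0.10, seed=0):
    """Label 10% of trajectories drawn from the bottom-q% return regime;
    strip the actions from the rest (coupled setup, Section 4.1)."""
    rng = np.random.default_rng(seed)
    returns = np.array([traj["rewards"].sum() for traj in trajectories])
    order = np.argsort(returns)                    # low return -> high return
    cutoff = int(np.ceil(len(trajectories) * q / 100.0))
    bottom_q = order[:cutoff]                      # indices in the bottom q%
    n_labelled = int(round(len(trajectories) * labelled_frac))
    labelled_idx = set(rng.choice(bottom_q, size=n_labelled, replace=False).tolist())

    labelled, unlabelled = [], []
    for i, traj in enumerate(trajectories):
        if i in labelled_idx:
            labelled.append(traj)                  # keep the actions
        else:
            unlabelled.append({k: v for k, v in traj.items() if k != "actions"})
    return labelled, unlabelled
```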
4.2 ABLATION STUDY
We conduct experiments to understand the semi-supervised approach from the perspective of both datasets and learning algorithms. For a systematic study, we depart from the coupled setup in Section 4.1 and consider a decoupling of the labelled data distribution Plabelled and the unlabelled data distribution Punlabelled. We first vary the quality of the labelled and unlabelled trajectories, and examine how the final performance of those SS-ORL agents changes. Next, we vary the size of the labelled and unlabelled trajectories and investigate their influences. To understand how the value-based methods and the BC methods potentially react differently under these data setups, we report the results of SS-CQL and SS-DT for the aforementioned setups. Last, we ablate the design choice of the transition size k for the proposed IDM. For all the experiments we present, the results are aggregated over 5 instances with different seeds.
Quality of Unlabelled Data We divide the trajectories from the hopper-medium-expert dataset into 3 groups, which consist of trajectories whose returns are in the bottom 0% to 33%, 33% to 67%, and 67% to 100%, respectively. We refer to them as the Low, Medium, and High quality groups. In particular, the High group contains trajectories generated by the expert agents (Fu et al., 2020). As before, we report the performance of DT and CQL agents trained on the labelled data only as the baselines. We also report the results under the oracle mode, where we fill in the unlabelled trajectories with the true actions, and combine them with the labelled trajectories to train offline RL agents.

[Figure 4.2: The return (average and standard deviation) of SS-DT and SS-CQL agents trained on the hopper-medium-expert dataset, when the qualities of the labelled and unlabelled data vary. Both the sizes of the labelled and unlabelled data are 10% of the offline dataset size. Panels show normalized return over training iterations for SS-DT and SS-CQL, together with the baselines (labelled data only) and the oracles. (a) We fix the labelled data quality and vary the unlabelled data quality. When the labelled data quality is low or moderate, SS-ORL can significantly improve the performance upon the baselines by utilizing high quality unlabelled data. (b) We fix the unlabelled data quality and vary the labelled data quality. The performance of SS-ORL improves as the labelled data quality increases.]

We first report the performance of SS-DT when the labelled data is sampled from the High group, and the unlabelled data are sampled from the Low, Med, and High groups, respectively. Both the sizes of the labelled and unlabelled trajectories are 10% of the total offline dataset size. The top left panel of Figure 4.2a plots the results. Clearly, when the labelled data quality is high, training on the labelled data only is sufficient to achieve expert performance, and adding unlabelled data does not bring extra benefits. We repeat the same experiment when the labelled data is sampled from the Medium and Low groups; see the top middle and top right panels of Figure 4.2a.
For those cases, adding unlabelled data of higher or the same quality improves the performance, whereas lower quality unlabelled data is not significantly helpful. (When the labelled and unlabelled data are sampled from the same quality group, we are simply adding more data from the same distribution.) The performance of SS-CQL follows the same trends; see the bottom panels of Figure 4.2a. To summarize, the experiments provide strong evidence that when the labelled data is of low or moderate quality, SS-ORL is capable of exploiting the high quality unlabelled data and remarkably boosts the performance compared with the baselines. The resulting performance is close to that of the oracle agent, and is often optimal (at least 1) or near-optimal (close to 1).
Quality of Labelled Data Similarly, we fix the unlabelled data quality and vary the quality of the labelled data. Figure 4.2b shows the results. For both SS-DT and SS-CQL, increasing the labelled data quality raises the performance in all the cases.
Size of Labelled Data We train SS-ORL agents where we fix the number of unlabelled trajectories to be 10% of the total number of offline trajectories, and vary the number of labelled trajectories as 10%, 25%, and 50% of the total size. Similar to the above experiments, we consider four data quality setups, where the labelled and unlabelled trajectories are each sampled from either the bottom half (denoted by L) or the top half (denoted by T) of the trajectories. We consider both the hopper-medium-expert and walker-medium-expert datasets. To take account of different environments and data setups, we report the 95% stratified bootstrap confidence intervals (CIs) of the interquartile mean (IQM; the mean of the middle 50% of the sorted values) of the return for all these cases and training instances (Agarwal et al., 2021). We use 50000 bootstrap replications to generate the CIs. Compared with some other statistics like the mean or the median, the IQM is robust to outliers and also a good representative of the overall performance. Stratified bootstrapping is a handy tool to obtain CIs with decent coverage rates, even if one only has a small number of training instances per setup. We refer the readers to Agarwal et al. (2021) for the complete introduction. Figure 4.3a plots the confidence intervals when we consider all four quality setups, or when the labelled data quality is low or high, respectively. We found that SS-DT and SS-CQL respond slightly differently. Overall, SS-CQL is almost immune to changes in the size of the labelled data, as is SS-DT when the labelled data quality is high. However, SS-DT's performance moderately increases as the labelled size grows when the labelled data quality is low. More detailed results, including the plots of the evaluation curves and the CIs of the mean and the median, can be found in Appendix D.
Size of Unlabelled Data As before, we vary the unlabelled data size with the labelled data size fixed, and report the 95% stratified bootstrap CIs in Figure 4.3b. Similarly, SS-CQL is almost insensitive, whereas SS-DT is sensitive when the labelled data quality is low.
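The IQM and stratified bootstrap aggregation used throughout these ablations can be sketched as follows, assuming the per-setup, per-seed scores are collected in a 2D array; the function names are ours.

```python
import numpy as np

def interquartile_mean(x):
    # Mean of the middle 50% of the sorted values.
    x = np.sort(np.asarray(x))
    lo, hi = len(x) // 4, len(x) - len(x) // 4
    return x[lo:hi].mean()

def stratified_bootstrap_ci(scores, n_boot=50000, alpha=0.05, seed=0):
    """scores: array of shape (num_setups, num_seeds).
    Resample seeds within each setup (stratum), take the IQM of the pooled
    resampled scores, and return a (1 - alpha) percentile interval."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores)
    n_setups, n_seeds = scores.shape
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_seeds, size=(n_setups, n_seeds))
        resampled = np.take_along_axis(scores, idx, axis=1)
        stats[b] = interquartile_mean(resampled.ravel())
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```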
Value-based vs. Conditional BC As discussed above, SS-CQL is insensitive to the data size changes, whereas SS-DT is more responsive when the labelled data quality is low. Regarding the data quality, we are mostly interested in the scenarios where the labelled data quality is low or moderate; see the red and blue curves of Figure 4.2b. In that regime, if the unlabelled data quality is high (the left column), the distribution shift from the labelled data to the unlabelled data is challenging to handle, and the proxy actions predicted by the IDM will be less accurate. There, the absolute performance of SS-CQL is slightly better than SS-DT, with smaller performance gaps compared to the oracle agents. If the unlabelled data quality is moderate or low (the middle and right columns), SS-CQL clearly outperforms SS-DT. Both observations suggest that SS-CQL is less sensitive to the action quality.
4.3 DESIGN CHOICES
Transition Size k of the IDM We train SS-TD3BC, SS-CQL, and SS-DT agents on the hopper-medium-expert dataset, under the coupled setup as in Section 4.1. We consider 6 different values of q: 10, 30, 50, 70, 90, and 100. For all the 18 setups (3 SS-ORL agents and 6 different q values), we train the agents using the multi-transition IDM with k = 0, 1, 2, respectively. As in the previous section, Figure E.1 plots the 95% stratified bootstrapped CIs for the IQM return across all the setups and training instances, which are generated by 50000 bootstrap replications. The results favor the choice k = 1. See Appendix E for more experiment details, such as the average return for each setup and the CIs for the return mean.
Data Augmentation Strategy As discussed in Section 3.2, we consider variants of SS-TD3BC and SS-DT using uncertainty-based data augmentation. Following Lakshminarayanan et al. (2017), we train an ensemble of 3 independent IDMs on Tlabelled. We generate the proxy actions for the unlabelled trajectories using the combined model, and also estimate the predictive uncertainties. We then only add proxy-labelled data whose uncertainties are below the p-th percentile to the final RL training dataset. Specifically, we test 4 values of p: 25, 50, 75, and 95. We compare the results with standard SS-TD3BC and SS-DT, where all the proxy-labelled data are added into the final RL training dataset. Again, we consider both the hopper-medium-expert and walker-medium-expert datasets and use the coupled setup with 4 different q values: 10, 30, 68, and 75 for hopper-medium-expert and 10, 30, 54, and 60 for walker-medium-expert. Figure 4.5 plots the 95% stratified bootstrap CIs of the IQM return across all the setups. Adding all the proxy-labelled data without filtering outperforms uncertainty-based data augmentation; see Appendix F for more details. Intuitively, to make use of the unlabelled data, most SSL pipelines would assume Plabelled and Punlabelled are similar or even the same (Chapelle et al., 2006). This is not the case in our setup, where Plabelled only generates low return trajectories, and all the high return ones come from Punlabelled. It remains an open question whether self-training with the uncertainty-based selection rule can help us generalize to high return trajectories.
5 DISCUSSION
We proposed a novel setup for offline RL where the trajectories do not have all of the action information, for which we have introduced a semi-supervised meta-algorithmic pipeline. Our experiments identified key properties that enable the agents to learn from unlabelled data and show that near-optimal learning can be done with only 10% of the actions labelled for low-to-moderate quality trajectories. It would be interesting to study other heterogeneous data setups for offline RL in the future, including reward-free or pure state-only settings.
This work is a step towards a broader goal of empowering robotic systems with the ability to extract meaningful knowledge from copious and ever-growing amounts of unlabelled demonstration data. Beyond simply not having the action labels, many trajectories may be from different robotic systems or tasks and therefore are not directly transferable to the system and task at hand. As we continue building robotic systems to leverage these forms of auxiliary knowledge, we expect that weakly-supervised learning paradigms such as the one explored in this work will be useful.
A EXPERIMENT DETAILS
In this section, we provide more details about our experiments. For all the offline RL methods we consider, we use our own implementations adapted from the following codebases:
DT https://github.com/facebookresearch/online-dt
TD3BC https://github.com/sfujim/TD3_BC
CQL https://github.com/scottemmons/youngs-cql
We use the stochastic DT proposed by Zheng et al. (2022). For offline RL, its performance is similar to the deterministic DT (Chen et al., 2021). The policy parameters are optimized by the LAMB optimizer (You et al., 2019) with ε = 10^{-8}. The log-temperature parameter is optimized by the Adam optimizer (Kingma & Ba, 2014). The architecture and other hyperparameters are listed in Table A.1. For TD3BC, we optimize both the critic and actor parameters with the Adam optimizer. The complete hyperparameters are listed in Table A.2. For CQL, we also use the Adam optimizer to optimize the critic, actor, and log-temperature parameters. The architecture of the critic and actor networks and the other hyperparameters are listed in Table A.3. We use batch size 256 and context length 20 for DT, where each batch contains 5120 states. Correspondingly, we use batch size 5120 for CQL and TD3BC.
B THE RETURN DISTRIBUTIONS OF THE D4RL DATASETS
C ADDITIONAL EXPERIMENTS UNDER THE COUPLED SETUP
We conduct experiments on the medium and medium-replay datasets of the D4RL benchmark, using the same setup as in Section 4.1. Figures C.1 and C.2 report the results. The general trend is the same as that in Figure 4.1. We note that the results on the halfcheetah-medium dataset are less informative than the others. This is because the data distribution of halfcheetah-medium is very concentrated, similar to a Gaussian distribution with small variance; see Figure B.1. In such a case, varying the value of q does not drastically change the labelled data distribution. One may notice that for the hopper-medium-replay and walker-medium-replay datasets, SS-ORL does not catch up with the oracle as quickly as on the other datasets as q increases. Our intuition is that the return distributions of these two datasets concentrate on extremely low values, as shown in Figure B.1. In our experiments, the labelled trajectories for those two datasets have average return smaller than 0.1 even when q = 70. In contrast, the return distributions of the other datasets concentrate on larger values. For the halfcheetah-medium-replay and all the medium and medium-expert datasets, increasing the value of q will greatly change the returns of the labelled trajectories; see Table C.1. To demonstrate the performance of SS-ORL on a dataset with a wider return distribution, we consider a subsampled dataset for the walker environment generated as follows (a code sketch of this procedure is given below).
1. Combine the walker-medium-replay and walker-medium datasets.
2. Let Rmin and Rmax denote the minimum and maximum return in the dataset. We divide the trajectories into 40 bins, where the maximum returns within each bin are linearly spaced between Rmin and Rmax. Let n_i be the number of trajectories in bin i.
3. We randomly sample 1000 trajectories. To sample a trajectory, we first sample a bin i ∈ {1, . . . , 40} with weights proportional to 1/n_i, then sample a trajectory uniformly at random from the sampled bin.
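A minimal sketch of this return-binned, inverse-frequency subsampling procedure is given below; the assumption that each trajectory stores a "rewards" array (from which the return is computed) and the function name are ours.

```python
import numpy as np

def subsample_wide_return_dataset(trajectories, num_bins=40, num_samples=1000, seed=0):
    """Sample trajectories with probability inversely proportional to the
    population of their return bin, flattening the return distribution."""
    rng = np.random.default_rng(seed)
    returns = np.array([traj["rewards"].sum() for traj in trajectories])
    edges = np.linspace(returns.min(), returns.max(), num_bins + 1)
    # Assign each trajectory to a return bin in {0, ..., num_bins - 1}.
    bin_ids = np.clip(np.digitize(returns, edges) - 1, 0, num_bins - 1)
    counts = np.bincount(bin_ids, minlength=num_bins)
    nonempty = np.flatnonzero(counts > 0)
    weights = 1.0 / counts[nonempty]
    weights = weights / weights.sum()
    sampled = []
    for _ in range(num_samples):
        b = rng.choice(nonempty, p=weights)             # pick a bin ~ 1/n_i
        idx = rng.choice(np.flatnonzero(bin_ids == b))  # then a trajectory in it
        sampled.append(trajectories[idx])
    return sampled
```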
Figure C.3 plots the return distribution of the subsampled dataset. It is wide and has 3 modes. We run the same experiments as before on this subsampled dataset, and Figure C.4 plots the results. We can see that the SS-ORL methods can catch up with the oracle agents even when q is small.
D INFLUENCES OF THE LABELLED AND UNLABELLED DATA SIZE
Figure D.1 plots the average return of SS-DT and SS-CQL when we fix the number of unlabelled trajectories and vary the number of labelled trajectories. We found that there is a bad seed for SS-CQL when both the labelled and unlabelled trajectories are sampled from L and the size of the labelled data is 10%, so the result there (the bottom right panel) exhibits large variance. Correspondingly, Figure D.2 plots the 95% stratified bootstrap CIs for the median, mean, interquartile mean, and the optimality gap of the return of SS-DT and SS-CQL. Similarly, Figures D.3 and D.4 plot the results when we vary the number of unlabelled trajectories, while the number of labelled ones is fixed.
E TRANSITION SIZE k FOR THE MULTI-TRANSITION INVERSE DYNAMICS MODEL
E.1 THEORY
Let β denote the behaviour policy. When k = 0, the IDM is modeling
$P(a_t \mid s_{t+1}, s_t) = \frac{P(a_t, s_{t+1} \mid s_t)}{P(s_{t+1} \mid s_t)} = \frac{P(s_{t+1} \mid a_t, s_t)\,\beta(a_t \mid s_t)}{P(s_{t+1} \mid s_t)}$. (2)
For the cases where k > 0, w.l.o.g., we assume k = 1. The IDM is modeling
$P(a_t \mid s_{t+2}, s_{t+1}, s_t, s_{t-1}) = \frac{P(a_t, s_{t+2}, \ldots, s_{t-1})}{P(s_{t+2}, \ldots, s_{t-1})} = \frac{P(s_{t+1} \mid a_t, s_t, s_{t+2}, s_{t-1})\, P(a_t \mid s_{t+2}, s_t, s_{t-1})}{P(s_{t+1} \mid s_{t+2}, s_t, s_{t-1})} = \frac{P(s_{t+1} \mid a_t, s_t)\,\beta(a_t \mid s_t, s_{t-1})}{P(s_{t+1} \mid s_t, s_{t-1})}$, (3)
where in the last line we used the fact that the policy β can only generate actions based on previous states, the Markovian transition property $P(s_{t+1} \mid a_t, s_t, s_{t+2}, s_{t-1}) = P(s_{t+1} \mid a_t, s_t)$, and also the induced property $P(s_{t+1} \mid s_t, s_{t-1}) = P(s_{t+1} \mid s_{t+2}, s_t, s_{t-1})$. If the behaviour policy β is Markovian, we have that $\beta(a_t \mid s_t) = \beta(a_t \mid s_t, s_{t-1})$, and as a consequence
$P(a_t \mid s_{t+1}, s_t) = P(a_t \mid s_{t+2}, s_{t+1}, s_t, s_{t-1}) \cdot C(s_{t+1}, s_t, s_{t-1})$, (4)
where $C = \frac{P(s_{t+1} \mid s_t, s_{t-1})}{P(s_{t+1} \mid s_t)}$ is independent of the action a_t. Therefore, the probabilities that the IDMs with k = 0 and k = 1 are modeling are equivalent up to a state-only dependent scaling. The cases where k ≥ 2 can be derived analogously. In practice, the offline dataset might contain trajectories generated by multiple behaviour policies, and it is unknown whether any of them is Markovian. Therefore, choosing k > 0 allows us to take into account past information before timestep t. For the future, we do not need anything beyond s_{t+1} for an MDP, but our formulation is general purpose to account for POMDPs as well, where both past and future partial observations might be needed to infer the action a_t. To summarize, choosing k > 0 is more general and has been shown to be favorable in the empirical experiments presented in the next section.
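As a side note, the clamped state windows s_{t-k:t+k+1} consumed by the multi-transition IDM can be assembled as in the following sketch; padding by repeating the boundary states (rather than truncating the window) is our own assumption.

```python
import numpy as np

def state_windows(states, k=1):
    """states: array of shape (T+1, state_dim) for a trajectory with T transitions.
    Returns one flattened window per timestep t = 0..T-1, with indices clamped
    to the valid range so that boundary windows repeat the first/last state."""
    T = states.shape[0] - 1
    windows = []
    for t in range(T):
        idx = np.clip(np.arange(t - k, t + k + 2), 0, T)  # s_{t-k}, ..., s_{t+k+1}
        windows.append(states[idx].reshape(-1))
    return np.stack(windows)
```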
E.2 EMPIRICAL EXPERIMENTS
We train SS-TD3BC, SS-CQL, and SS-DT with 3 IDM transition sizes, k = 0, 1, and 2, on the hopper-medium-expert dataset. We use the coupled setup described in Section 4.1, with 6 different values of q. Table E.1 reports the performance of those agents for each case. In addition to the interquartile mean considered in Section 4.3, we also consider 3 other statistics of the return across all the setups: the mean, the median, and the optimality gap. Figure E.1 plots the 95% stratified bootstrap confidence intervals for all four statistics, generated by 50000 bootstrap replications.

                     q = 10        q = 30        q = 50        q = 70        q = 90        q = 100       Average
SS-TD3BC   k = 0     0.81 ± 0.12   0.89 ± 0.05   0.93 ± 0.05   1.05 ± 0.04   1.03 ± 0.06   1.01 ± 0.04   0.95
           k = 1     0.93 ± 0.07   1.01 ± 0.05   0.86 ± 0.06   0.98 ± 0.06   1.03 ± 0.06   1.03 ± 0.04   0.98
           k = 2     0.80 ± 0.12   0.91 ± 0.03   0.93 ± 0.05   0.95 ± 0.08   1.01 ± 0.06   1.04 ± 0.02   0.94
SS-CQL     k = 0     0.69 ± 0.17   0.69 ± 0.15   0.88 ± 0.15   1.04 ± 0.04   1.11 ± 0.01   1.10 ± 0.03   0.92
           k = 1     0.69 ± 0.15   0.90 ± 0.05   0.89 ± 0.13   1.03 ± 0.07   1.07 ± 0.08   1.11 ± 0.01   0.95
           k = 2     0.90 ± 0.11   0.90 ± 0.09   0.86 ± 0.11   1.08 ± 0.05   1.10 ± 0.01   1.11 ± 0.01   0.99
SS-DT      k = 0     0.72 ± 0.17   0.75 ± 0.20   0.90 ± 0.14   1.06 ± 0.04   1.11 ± 0.00   1.11 ± 0.01   0.94
           k = 1     0.69 ± 0.20   0.94 ± 0.07   0.99 ± 0.05   1.05 ± 0.04   1.11 ± 0.00   1.11 ± 0.00   0.98
           k = 2     0.78 ± 0.07   0.89 ± 0.08   0.85 ± 0.15   1.05 ± 0.02   1.11 ± 0.00   1.11 ± 0.00   0.97
Table E.1: The return (average and standard deviation) of SS-ORL agents trained on the hopper-medium-expert dataset under the coupled setup, where the IDM is trained with 3 different values of k: 0, 1, and 2. Results aggregated over 5 training instances.

[Figure E.1: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by SS-ORL agents, with different values of k.]

F DATA AUGMENTATION STRATEGY
Following Lakshminarayanan et al. (2017), we train an ensemble of 3 independent IDMs on Tlabelled. Each individual IDM models the action as a diagonal Gaussian distribution (see Equation (1)) N(µ_i, Σ_i), i = 1, 2, 3. The ensemble models the action using an equally weighted Gaussian mixture of these three distributions. We predict the action by the mixture's mean and predict the uncertainty by the mixture's variance; both can be written in closed form.
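A sketch of these closed forms and of the percentile-based filtering is given below; for an equally weighted mixture of diagonal Gaussians, the mean is the average of the member means and the variance follows the law of total variance. The function names and the choice of summing the per-dimension variances into a single uncertainty score are our assumptions.

```python
import numpy as np

def ensemble_mean_and_uncertainty(means, variances):
    """means, variances: arrays of shape (num_members, action_dim) for one timestep.
    Mixture mean = average of member means; mixture variance = average member
    variance plus the variance of the member means (law of total variance)."""
    mixture_mean = means.mean(axis=0)
    mixture_var = variances.mean(axis=0) + ((means - mixture_mean) ** 2).mean(axis=0)
    uncertainty = mixture_var.sum()  # scalar score: total variance over action dims
    return mixture_mean, uncertainty

def filter_by_uncertainty(proxy_actions, uncertainties, p=75):
    """Keep only proxy-labelled samples whose uncertainty is below the p-th percentile."""
    threshold = np.percentile(uncertainties, p)
    keep = uncertainties < threshold
    return proxy_actions[keep], keep
```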
We conduct experiments for SS-DT and SS-TD3BC, where we only add proxy-labelled data whose uncertainties are below the p-th percentile to the final RL training dataset. Specifically, we test 4 values of p: 25, 50, 75, and 95. We compare the results with standard SS-DT and SS-TD3BC, where all the proxy-labelled data are added into the final RL training dataset. We consider both the hopper-medium-expert and walker-medium-expert datasets. We use the coupled setup described in Section 4.1, where we consider 4 different values of q: 10, 30, 68, and 75 for hopper-medium-expert and 10, 30, 54, and 60 for walker-medium-expert. Table F.1 reports the average return and standard deviation obtained by SS-DT and SS-TD3BC under different data augmentation strategies, when trained on the hopper-medium-expert dataset. The results on the walker-medium-expert dataset are reported in Table F.2. It is easy to see that uncertainty-based data augmentation degrades the performance, compared with adding all the proxy-labelled data without filtering. Overall, the latter performs consistently well across different setups. Figure F.1 plots the 95% stratified bootstrap CIs for these experiments. All the statistics favor the no-filtering strategy.

hopper-medium-expert         q = 10        q = 30        q = 68        q = 75        Average
SS-TD3BC   below 25%         0.60 ± 0.03   0.62 ± 0.02   0.71 ± 0.04   0.86 ± 0.02   0.70
           below 50%         0.62 ± 0.02   0.66 ± 0.06   0.76 ± 0.04   0.86 ± 0.09   0.72
           below 75%         0.70 ± 0.06   0.74 ± 0.07   0.84 ± 0.06   0.94 ± 0.08   0.80
           below 95%         0.82 ± 0.05   0.82 ± 0.09   0.90 ± 0.09   0.96 ± 0.06   0.88
           no filtering      0.80 ± 0.07   0.92 ± 0.04   0.91 ± 0.06   0.94 ± 0.10   0.89
SS-DT      below 25%         0.61 ± 0.12   0.62 ± 0.05   0.70 ± 0.01   0.95 ± 0.13   0.72
           below 50%         0.60 ± 0.14   0.65 ± 0.04   0.69 ± 0.02   1.04 ± 0.07   0.75
           below 75%         0.42 ± 0.04   0.63 ± 0.15   0.75 ± 0.06   1.04 ± 0.04   0.71
           below 95%         0.51 ± 0.16   0.82 ± 0.12   0.85 ± 0.05   1.06 ± 0.03   0.81
           no filtering      0.47 ± 0.14   0.71 ± 0.14   0.83 ± 0.07   1.06 ± 0.03   0.77
Table F.1: The return (average and standard deviation) of SS-ORL agents trained on the hopper-medium-expert dataset under the coupled setup, using different data augmentation strategies. Results aggregated over 5 training instances.

walker-medium-expert         q = 10        q = 30        q = 54        q = 60        Average
SS-TD3BC   below 25%         0.82 ± 0.02   0.82 ± 0.01   0.80 ± 0.06   1.04 ± 0.06   0.87
           below 50%         0.83 ± 0.03   0.84 ± 0.02   0.84 ± 0.01   1.02 ± 0.09   0.88
           below 75%         0.74 ± 0.11   0.86 ± 0.01   0.85 ± 0.01   1.04 ± 0.07   0.87
           below 95%         0.86 ± 0.04   0.88 ± 0.01   0.87 ± 0.01   1.10 ± 0.01   0.93
           no filtering      0.86 ± 0.05   0.86 ± 0.03   0.87 ± 0.01   1.10 ± 0.01   0.92
SS-DT      below 25%         0.69 ± 0.04   0.74 ± 0.02   0.70 ± 0.03   0.84 ± 0.17   0.74
           below 50%         0.67 ± 0.03   0.72 ± 0.02   0.73 ± 0.03   0.95 ± 0.15   0.77
           below 75%         0.71 ± 0.03   0.60 ± 0.13   0.73 ± 0.03   0.95 ± 0.14   0.74
           below 95%         0.73 ± 0.08   0.52 ± 0.11   0.58 ± 0.15   0.98 ± 0.10   0.70
           no filtering      0.79 ± 0.05   0.55 ± 0.13   0.69 ± 0.08   0.91 ± 0.15   0.74
Table F.2: The return (average and standard deviation) of SS-ORL agents trained on the walker-medium-expert dataset under the coupled setup, using different data augmentation strategies. Results aggregated over 5 training instances.

[Figure F.1: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by SS-ORL agents, when combined with different data augmentation strategies.]

G COMPARISON WITH GATO UNDER THE COUPLED SETUP
Inspired by the multi-task and multi-modal generalist agent proposed by Reed et al. (2022), we consider a GATO-type variant of DT that can incorporate the unlabelled data into policy training. GATO is trained on the labelled and unlabelled data simultaneously. The implementation details are:
• We form the same input sequence as DT, where we fill in zeros for the missing actions of the unlabelled trajectories.
• For the labelled trajectories, GATO predicts the actions, states, and rewards; for the unlabelled ones, GATO only predicts the states and rewards.
• We use the stochastic policy as in the online decision transformer (Zheng et al., 2022) to predict the actions.
• We use deterministic predictors for the states and rewards, which are single linear layers built on top of the Transformer outputs.
Let $g_t = \sum_{t'=t}^{|\tau|} r_{t'}$ be the return-to-go of a trajectory τ at timestep t. Let $H_\theta^{P_{\text{labelled}}}$ denote the policy entropy induced on the labelled data distribution. For simplicity, we assume the context length of GATO is 1. We refer the readers to Zheng et al. (2022) for the formulation with a general context length and more details. Equation (5) shows the training objective of GATO.
$\min_\theta \;\; \mathbb{E}_{(a_t, s_t, r_t, g_t) \sim P_{\text{labelled}}}\big\{ -\log \pi(a_t \mid s_t, g_t, \theta) + \lambda_s \|s_t - \hat{s}_t(\theta)\|_2^2 + \lambda_r \|r_t - \hat{r}_t(\theta)\|_2^2 \big\} + \mathbb{E}_{(s_t, r_t, g_t) \sim P_{\text{unlabelled}}}\big\{ \lambda_s \|s_t - \hat{s}_t(\theta)\|_2^2 + \lambda_r \|r_t - \hat{r}_t(\theta)\|_2^2 \big\} \quad \text{s.t.} \;\; H_\theta^{P_{\text{labelled}}}[a \mid s, g] \ge \nu$ (5)

[Figure G.1: The performance of SS-ORL and GATO on the hopper-medium-expert dataset; normalized return versus the labelled data quality parameter q. For GATO, we use λ_s = 0.01 and λ_r = 0.1. (L) SS-DT significantly outperforms GATO, where GATO only slightly improves upon the baseline. (R) SS-CQL, SS-DT, and SS-TD3BC all outperform GATO.]

[Figure G.2: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by GATO agents, with different combinations of regularization parameters.]

The constants ν, λ_s, and λ_r are pre-specified hyperparameters, where ν is the target policy entropy, and λ_s and λ_r are regularization parameters used to balance the losses for actions, states, and rewards. We use ν = −dim(A) as for DT (see Appendix A). To choose the regularization parameters λ_s and λ_r for GATO, we test the 16 combinations where λ_s and λ_r each take values in {1.0, 0.1, 0.01, 0.001}. We run experiments as in Section 4.1 for q = 10, 30, 50, 70, 90, 100, and compute the confidence intervals for the aggregated results. Figure G.2 shows that λ_s = 0.01 and λ_r = 0.1 yield the best performance. Figure G.1 compares the performance of GATO (with λ_s = 0.01 and λ_r = 0.1) and the SS-ORL agents. It is clear that the SS-ORL agents outperform GATO.
H PERFORMANCE GAP OF SS-ORL AGENTS
For a chosen offline RL method, the relative performance gap between the corresponding SS-ORL and oracle agents illustrates how sensitive this offline RL method is to missing actions:
$\frac{\text{Oracle-ORL} - \text{SS-ORL}}{\text{Oracle-ORL}}$. (6)
We consider the coupled setup as in Section 4.1. For each of the 9 datasets (hopper, walker, halfcheetah with the medium-expert, medium, and medium-replay datasets), we compute the relative performance gap for SS-CQL, SS-DT, and SS-TD3BC, trained with 6 different values of q: 10, 30, 50, 70, 90, and 100. Table H.1 reports the aggregate results over 5 seeds. On average, SS-CQL and SS-TD3BC have smaller relative performance gaps, suggesting that CQL and TD3BC are less sensitive to the missing actions.

method      hopper-me  walker2d-me  hc-me    hopper-m  walker2d-m  hc-m    hopper-mr  walker2d-mr  hc-mr   Average
SS-CQL      0.147      0.114        0.062    0.078     0.077       0.003   0.388      0.379        0.106   0.150
SS-TD3BC    0.046      0.094        0.104    0         0.065       0.001   0.327      0.412        0.057   0.123
SS-DT       0.119      0.167        0.0002   0.016     0.039       0.003   0.399      0.554        0.109   0.156
Table H.1: The relative performance gap of SS-CQL, SS-TD3BC, and SS-DT.
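For completeness, Equation (6) can be computed as in the following sketch; aggregating the normalized returns over q values and seeds before taking the ratio is our own assumption about how the table entries are formed.

```python
import numpy as np

def relative_performance_gap(oracle_returns, ss_returns):
    """Equation (6): (Oracle-ORL - SS-ORL) / Oracle-ORL, with each term
    averaged over q values and seeds for a given dataset. Inputs are arrays
    of normalized returns with matching shapes, e.g. (num_q_values, num_seeds)."""
    oracle = np.asarray(oracle_returns, dtype=float).mean()
    ss = np.asarray(ss_returns, dtype=float).mean()
    return (oracle - ss) / oracle
```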
1. What is the focus and contribution of the paper regarding semi-supervised learning?
2. What are the strengths of the proposed approach, particularly in its practical motivation and experimental design?
3. Do you have any concerns or questions about the model's encoding mechanism?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any limitations or potential improvements regarding the method's reliance on high-quality data?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper introduces a new, practically motivated semi-supervised setting, where the agent can access both labeled trajectories and unlabelled trajectories that do not include the actions of the trajectories. A model is trained to give actions for the unlabeled data, and then the offline RL method is trained on the whole dataset. Experiments based on the D4RL dataset show the good performance of the proposed method.
Strengths And Weaknesses
The paper is well-written and easy to follow. The setting is novel and meaningful since there are a lot of unlabeled data like videos in real-world scenarios. The experiments are designed carefully to illustrate the influence of data with different qualities. For the weaknesses, I would like to ask some questions:
For the IDM, the length of the input, k, could be changed, and k=1 in the paper. So the input of the model is four states (s_{t-1}, ..., s_{t+2}). How does the model encode these states?
The experiments only use the expert dataset, which means most trajectories are good. D4RL also provides random and medium-level datasets. As you claim in the paper, the quality of the data has a huge influence on the performance. Is there any analysis based on these data? Could we say that the method also needs high-quality data for both labeled and unlabeled data to achieve good performance?
Clarity, Quality, Novelty And Reproducibility
The detailed dataset and code of the method are not provided.
ICLR
Title Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories Abstract Natural agents can effectively learn from multiple data sources that differ in size, quality, and types of measurements. We study this heterogeneity in the context of offline reinforcement learning (RL) by introducing a new, practically motivated semi-supervised setting. Here, an agent has access to two sets of trajectories: labelled trajectories containing state, action, reward triplets at every timestep, along with unlabelled trajectories that contain only state and reward information. For this setting, we develop a simple meta-algorithmic pipeline that learns an inversedynamics model on the labelled data to obtain proxy-labels for the unlabelled data, followed by the use of any offline RL algorithm on the true and proxy-labelled trajectories. Empirically, we find this simple pipeline to be highly successful — on several D4RL benchmarks (Fu et al., 2020), certain offline RL algorithms can match the performance of variants trained on a fully labelled dataset even when we label only 10% trajectories from the low return regime. Finally, we perform a large-scale controlled empirical study investigating the interplay of data-centric properties of the labelled and unlabelled datasets, with algorithmic design choices (e.g., inverse dynamics, offline RL algorithm) to identify general trends and best practices for training RL agents on semi-supervised offline datasets. 1 INTRODUCTION One of the key challenges with deploying reinforcement learning (RL) agents is its prohibitive sample complexity for real-world applications. Offline reinforcement learning (RL) can significantly reduce the sample complexity by exploiting logged demonstrations from auxiliary data sources (Levine et al., 2020). However, contrary to curated benchmarks in use today, the nature of offline demonstrations in the real world can be highly varied. For example, the demonstrations could be misaligned due to frequency mismatch (Burns et al., 2022), use of different sensors, actuators, or dynamics (Reed et al., 2022; Lee et al., 2022), or lacking partial state (Ghosh et al., 2022; Rafailov et al., 2021; Mazoure et al., 2021), or reward information (Yu et al., 2022). Successful offline RL in the real world requires embracing these heterogeneous aspects for maximal data efficiency, similar to learning in humans. In this work, we propose a new semi-supervised setup for offline RL. Standard offline RL assumes trajectories to be sequences of observations, actions, and rewards. However, many data sources, such as videos or third-person demonstrations lack direct access to actions. Hence, we propose a semi-supervised setup, where an agent’s offline dataset also consists of action-unlabelled trajectories in addition to the aforementioned (action-labelled) trajectories. Standard offline RL algorithms, such as Conservative Q Learning (CQL; Kumar et al. (2020)) or Decision Transformer (DT; Chen et al. (2021)), cannot directly operate on such unlabelled trajectories. At the same time, naively throwing out the unlabelled trajectories can be wasteful, especially when they have high returns. Our goal in this work is to enable compute and data efficient learning with additional action-unlabelled trajectory logs. Unlike traditional semi-supervised learning, our setup has a few key differences. First, we do not assume that the distribution of the labelled and unlabelled trajectories are necessarily identical. 
In realistic scenarios, we expect these to be different with unlabelled data having higher returns than labelled data e.g., videos of a human professional are easier to obtain than installing actuators for continuous control tasks. We replicate such varied data quality setups in some of our experiments; Figure 1.1 shows an illustration of the difference in returns between the labelled and unlabelled dataset splits for the hopper-medium-expert D4RL dataset. Second, our end goal goes beyond labeling the actions in the unlabelled trajectories, but rather we intend to use the unlabelled data for learning a downstream policy that is better than the behavioral policies used for generating the offline datasets. Hence, there are two kinds of generalization challenges: generalizing from the labelled to the unlabelled data distribution and then going beyond the offline data distributions to get closer to the expert distribution. Regular offline RL is concerned only with the latter. Finally, we are mainly interested in the case where a significant majority of the trajectories in the offline dataset are unlabelled. One motivating example for this setup is learning from videos or third-person demos. There are tremendous amounts of internet videos that can be potentially used to train RL agents, yet they are without action labels and are of varying quality. Our paper seeks to answer the following questions: 1. How can we utilize the unlabelled data for improving the performance of offline RL algorithms? 2. How does our performance vary as a function of data-centric properties, such as the size and return distributions of labelled and unlabelled datasets? 3. How do offline RL algorithms compare in this setup? To answer these questions, we propose a meta-algorithmic pipeline to train policies in the semisupervised setup described above. We call our pipeline Semi-Supervised Offline Reinforcement Learning (SS-ORL). SS-ORL contains three simple and scalable steps: (1) train a multi-transition inverse dynamics model on labelled data, which predicts actions based on transition sequences, (2) fill in proxy-actions for unlabelled data, and finally (3) train an offline RL agent on the combined dataset. Empirically, we instantiate SS-ORL with CQL (Kumar et al., 2020), DT (Chen et al., 2021), and TD3BC (Fujimoto & Gu, 2021) as the underlying offline RL algorithms respectively, and conduct experiments on the D4RL datasets (Fu et al., 2020). We highlight a few predominant trends from our experimental findings below: 1. Given low-quality labelled data, SS-ORL agents can exploit unlabelled data that contains highquality trajectories and thus improve performance. The absolute performance of SS-ORL is close to or even matches that of the oracle agents, which have access to complete action information. 2. When the labelled data quality is high, utilizing unlabelled data does not bring significant benefits. 3. The choice of value vs. behavior cloning based methods can significantly affect performance in the semi-supervised setup. In our experiments, CQL and TD3BC are less sensitive to the missing actions compared to DT. They enjoy better absolute performance when the labelled data is of low quality, and their performance gap relative to the oracle agent is also smaller. See Appendix H for more details. 2 RELATED WORK Offline RL The goal of offline RL is to learn effective policies from fixed datasets which are generated by unknown behavior policies. 
There are two main categories of model-free offline RL methods: value-based methods and behavior cloning (BC) based methods. Value-based methods attempt to learn the value functions based on temporal difference (TD) updates. There is a line of work that aims to port existing off-policy value-based online RL methods to the offline setting, with various types of additional regularization components that encourage the learned policy to stay close to the behavior policy. Several representive techniques include specifically tailored policy parameterizations (Fujimoto et al., 2019; Ghasemipour et al., 2021), divergence-based regularization on the learned policy (Wu et al., 2019; Jaques et al., 2019; Kumar et al., 2019), and regularized value function estimation (Nachum et al., 2019; Kumar et al., 2020; Kostrikov et al., 2021a; Fujimoto & Gu, 2021; Kostrikov et al., 2021b). Recently, a growing body of work has tried to formulate offline RL as a supervised learning problem (Chen et al., 2021; Janner et al., 2021; Emmons et al., 2021). Compared with the value-based methods, these methods enjoy several appealing properties including algorithmic simplicity and training stability. Generally speaking, these approaches can be viewed as conditional behavior cloning methods (Bain & Sammut, 1995), where the conditioning parameters are related information such as goals or rewards. Similar to value-based methods, these can be extended to the online setup as well (Zheng et al., 2022) and demonstrate excellent performance in hybrid setups involving both offline data and online interactions. Semi-supervised Learning Semi-supervised learning (SSL) is a sub-area of machine learning that studies approaches to train predictors from a small amount of labelled data combined with a large amount of unlabelled data. In supervised learning, predictors only learn from labelled data. However, labelled training examples often require human annotation efforts and are thus hard to obtain, whereas unlabelled data can be comparatively easy to collect. The research on semi-supervised learning spans several decades. One of the oldest SSL techniques, self-training, was originally proposed in the 1960s (Fralick, 1967). There, a predictor is first trained on the labelled data. Then, at each training round, according to certain selection criteria such as model uncertainty, a portion of the unlabelled data is annotated by the predictor and added into the training set for the next round. We refer the readers to Zhu (2005); Chapelle et al. (2006); Ouali et al. (2020); Van Engelen & Hoos (2020) for comprehensive literature surveys. Imitation Learning from Observations There have been several works in imitation learning (IL) which do not assume access to the full set of actions, such as BCO (Torabi et al., 2018a), MoBILE (Kidambi et al., 2021), GAIfO (Torabi et al., 2018b) or third-person IL approaches (Stadie et al., 2017; Sharma et al., 2019). The recent work of Baker et al. (2022) also considered a setup where a small number of labelled actions are available in addition to a large unlabelled dataset. A key difference between our work and these is that the IL setup typically assumes that all trajectories are generated by an expert, unlike our offline setup. Further, some of these methods even permit reward-free interactions with the environment which is not possible in the offline setup. 
Learning from Videos Closely related to IL from observations, several works (Schmeckpeper et al., 2020b;a) consider training agents with human video demonstrations, which are without action annotations. Distinct from out setup, in those works, the offline observational data (videos) are from a different embodiment. Moreover, the agents can interact with the environment, and can even collect reward information sometimes. 3 SEMI-SUPERVISED OFFLINE REINFORCEMENT LEARNING Preliminaries We model our environment as a Markov decision process (MDP) (Bellman, 1957) denoted by xS,A, p, P,R, γy, where S is the state space, A is the action space, pps1q is the distribution of the initial state, P pst`1|st, atq is the transition probability distribution, Rpst, atq is the deterministic reward function, and γ is the discount factor. At each timestep t, the agent observes a state st P S and executes an action at P A. As a response, the environment moves the agent to the next state st`1 „ P p¨|st, atq, and also returns the agent a reward rt “ Rpst, atq. 3.1 PROPOSED SETUP We assume the agent has access to a static offline dataset Toffline. The dataset consists of trajectories collected by certain unknown policies, which are not necessarily optimal. Let τ denote a trajectory and |τ | denote its length. We assume that all the trajectories in Toffline contain complete rewards and states. However, only a small subset of them contain action labels, while most of the trajectories are missing actions. We are interested in learning a policy by leveraging the offline dataset without interacting with the environment. This setup is analogous to semi-supervised learning, where actions serve the role of labels. Hence, we also refer to the complete trajectories as labelled data (denoted by Tlabelled) and the action-free trajectories as unlabelled data (denoted by Tunlabelled). Further, we assume the labelled data are sampled from a distribution Plabelled and the unlabelled data are sampled from Punlabelled. In general, the two distributions can be different. Practically, one case we are particularly interested in is when Plabelled generates low-to-moderate quality trajectories, whereas Punlabelled generates trajectories of diverse qualities including ones with high returns. Our setup shares some similarities with state-only imitation learning (Ijspeert et al., 2002; Bentivegna et al., 2002; Torabi et al., 2019) in the use of action-unlabelled trajectories. However, there are also some key differences. In state-only IL, the unlabelled demonstrations are from the same distribution as the labelled demonstrations and correspond to a near-optimal expert policy. In our setting, both Algorithm 1: Semi-supervised offline RL (SS-ORL) 1 Input: trajectories Tlabelled and Tunlabelled, IDM transition size k, offline RL method ORL // train a stochastic multi-transition IDM using the labelled data 2 pθ Ð argminθ Eat,st´k:t`k`1„Tlabelled r´ log ϕθpat|st´k:t`k`1qs // fill in the proxy actions for the unlabelled data 3 Tproxy Ð ∅ 4 for each trajectory τ P Tunlabelled do 5 pat Ð mean of N ` µ pθpst´k:t`k`1q, Σpθpst´k:t`k`1q ˘ , t “ 1, . . . , |τ | 6 τproxy Ð τ with proxy actions tpatu|τ |t“1 filled in 7 Tproxy Ð Tproxy Ť tτproxyu // train an offline RL agent using the combined data 8 π Ð policy obtained by training ORL using dataset Tlabelled Ť Tproxy 9 Output: π Plabelled and Punlabelled can be different from each other and also from the expert policy. Further, many state-only imitation learning algorithms (e.g., Gupta et al. 
(2017); Torabi et al. (2018a;b); Liu et al. (2018); Sermanet et al. (2018)), similar to their original counterparts (e.g., Ho & Ermon (2016); Kim et al. (2020)), permit (reward-free) interactions with the environments. This is not possible in our proposed offline semi-supervised setup where the agents are only provided with Tlabelled and Tunlabelled. 3.2 TRAINING PIPELINE RL policies trained on low to moderate quality offline trajectories are often sub-optimal, as many of the trajectories might not have high return and only cover a limited part of the state space. Our goal is to find a way to combine the action labelled trajectories and the unlabelled action-free trajectories, so that the offline agent can exploit structures in the unlabelled data to improve performance. One natural strategy is to fill in proxy actions for those unlabelled trajectories, and use the annotated data together with the labelled data as a whole to train an offline RL agent. Since we assume both the labelled and unlabelled trajectories contain the states, we can train an inverse dynamics model (IDM) ϕ that predicts actions using the states. Once we obtain the IDM, we use it to generate the proxy actions for the unlabelled trajectories. Finally, we combine those proxy-labelled trajectories with the labelled trajectories, and train an agent using the offline RL algorithm of choice. In particular, we propose a stochastic multi-transition IDM (see Section 3.3), which is favored by our experiments. Our meta-algorithmic pipeline is summarized in Algorithm 1. Remarks. The annotation process, which involves training an IDM on the labelled data and generating proxy actions for the unlabelled trajectories, is similar to one step of self-training (Fralick, 1967). A key difference is that in self-training, the predictor is trained in multiple rounds. Once an initial predictor is trained, it is used for obtaining annotations on the unlabelled dataset. Then, a subset of annotated data is selected according to certain criteria, and added into the training set for the next round. As opposed to self-training, we do not retrain the IDM but directly move to the next stage, where we train the agent using the combined data. There are a few reasons that we do not employ self-training for IDM. First, it is computationally expensive to execute multiple rounds of training. More importantly, our end goal is to obtain a downstream policy with improved performance via utilizing the proxy-labelled data. One commonly used data selection criterion for self-training is based on the model uncertainty. There, one adds the proxy-labelled data with sufficiently low predictive uncertainty into the training set for the next round. However, we empirically found that such an uncertainty based augmentation strategy did not improve the performance of SS-ORL agents. See Section 4.3 and Appendix F for the experiment details. 3.3 STOCHASTIC MULTI-TRANSITION INVERSE DYNAMIC MODEL In past work (Pathak et al., 2017), the IDM typically maps two subsequent states pst, st`1q to at. We introduce a multi-transition IDM that predicts at using both transitions before and after timestep t, which we found works better empirically. More precisely, our inverse dynamic model which predicts at using 2k ` 1 transitions, including the current transition pst, st`1q, the previous k transitions that leads to st, and the next k transitions starting from st`1. We call k the transition size parameter. Let st´k:t`k`1 denote the sequence sminp0,t´kq, . . . , st, st`1, . . . 
, smaxp|τ |,t`k`1qq. Specifically, we model the distribution of at as a multivariate Gaussian distribution with a diagonal covariance matrix: at „ N ` µθpst´k:t`k`1q, Σθpst´k:t`k`1q ˘ . (1) Let ϕθpat|st´k:t`k`1q be the probability density function of N ` µθpst´k:t`k`1q, Σθpst´k:t`k`1q ˘ . Given the labelled trajectories Tlabelled, we minimize the negative log-likelihood loss Eat,st´k:t`k`1„Tlabelled r´ log ϕθpat|st´k:t`k`1qs. Note that the standard IDM which predicts at from pst, st`1q under the ℓ2 loss, is a special case subsumed by our model: it is equivalent to the case k “ 0 and the diagonal entries of Σθ (i.e., the variances of each action dimension) are all the same. Choosing k ą 0 allows us to account for non-Markovian behaviour policies and partially observable MDP (POMDP), see Appendix E.1. For all the experiments in this paper, we use k “ 1. We ablate this design choice in Section 4.3. 4 EXPERIMENTS Our experiments aim to answer three primary questions: 1. Can SS-ORL closely track or even match the performance for fully supervised offline reinforcement learning, when only a small subset of trajectories are labelled? 2. How does the performance of SS-ORL vary as a function of the size and qualities of the labelled and unlabelled datasets? 3. Do different offline RL methods respond differently under varying setups of data size and qualities? To answer these questions, we focus on three Gym locomotion tasks hopper, walker, and halfcheetah, and we use the v2 medium-expert, medium and medium-replay datasets1 from the D4RL benchmark (Fu et al., 2020). We address the first question in Section 4.1 and the other two in Section 4.2, respectively. Finally, we discuss the design choices for SS-ORL in Section 4.3. 4.1 BENCHMARKING Data Setup For a given offline dataset, we subsample 10% of the total trajectories from the dataset, whose returns are from the bottom q%, 10 ď q ď 100. We keep the actions for those trajectories, and discard the actions for the rest. We call this setup the coupled setup, since Plabelled and Punlabelled will change simultaneously when we vary the value of q. When q “ 100, we are uniformly sampling the trajectories and we have Plabelled “ Punlabelled. Under this setup, we always have 10% trajectories labelled and 90% unlabelled, and the total amount of data used later for training the offline RL agent is the original offline dataset size. This allows us to easily compare our results with results under the standard, fully labelled setup. In Section 4.2, we shall decouple the distributions Plabelled and Punlabelled for a thorough understanding of their individual influences. Inverse Dynamic Model We train an IDM as described in Section 3 with parameter k “ 1. In other words, the IDM predicts at using 3 consecutive transitions: pst´1, st, st`1, st`2q. The mean and the covariance matrix are predicted by two independent multilayer perceptrons (MLPs), each contains two hidden layers and 1024 hidden units per layer. To prevent overfitting, we randomly sample 10% of the labelled trajectories as the validation set, and use the IDM that yields the best validation error within 100k training iterations. Offline RL Methods We instantiate Algorithm 1 with DT, CQL and TD3BC (Fujimoto & Gu, 2021) and test their performances. 
Among these methods, DT is a recently proposed conditional behavior cloning (BC) method that uses sequence modeling tools to model the trajectories; CQL is a representative value-based offline RL method; and TD3BC is a hybrid method which adds a BC term to regularize the Q-learning updates. We refer to those instantiations as SS-DT, SS-CQL and SS-TD3BC, respectively. We defer the implementation details to Appendix A. Results We compare the performance of those SS-ORL agents with corresponding baseline and oracle agents. The baseline agents are trained on the labelled trajectories only, and the oracle agents are trained on the full offline dataset with action labels. Intuitively, the performances of the baseline and the oracle agents can be considered as the (estimated) lower and upper bounds for the performance of the SS-ORL agents. For each method, we train 5 instances under different seeds, and 1Due to the space limit, the results on medium and medium-replay datasets are deferred to Appendix C. for each instance we run 30 evaluation trajectories. We report the average return and the standard deviation after 200k iterations. Figure 4.1 plots the results on medium-expert datasets. For all the three environments and all the three offline RL methods, the SS-ORL agents improve upon the baselines. Remarkably, even when the labelled data quality is low, the SS-ORL agents are able to obtain decent returns. For example, when q “ 10, i.e., the labelled trajectories are the bottom 10% trajectories, the average return obtained by SS-TD3BC is 0.93, 0.91 and 0.79 for hopper, walker and halfcheetah. On average, this is 87.4% relative to the oracle performance (1.02, 1.1 and 0.89). As the value q increases, the labelled data quality increases and the distributions Plabelled and Punlabelled are getting closer. The performance of the SS-ORL agents also keeps increasing and finally matches the performance of the oracle agents. Similar observations can be found in the results of medium and medium-replay datasets, see Figure C.1 and C.2. We found relatively suboptimal results for DT on halfcheetah in all cases, consistent with prior results in Zheng et al. (2022). 4.2 ABLATION STUDY We conduct experiments to understand the semi-supervised approach from the perspective of both datasets and learning algorithms. For a systematic study, we depart from the coupled setup in Section 4.1 and consider a decoupling of the labelled data distributions Plabelled and the unlabelled data distribution Punlabelled. We first vary the quality of the labelled and unlabelled trajectories, and examine how the final performance of those SS-ORL agents changes. Next, we vary the size of the labelled and unlabelled trajectories and investigate their influences. To understand how the value-based methods and the BC methods will potentially react differently under these data setups. we report the results of SS-CQL and SS-DT for the aforementioned setups. Last, we ablate the design choice of the transition size k for the proposed IDM. For all the experiments we present, the results are aggregated over 5 instances with different seeds. 
Quality of Unlabelled Data We divide the trajectories from the hopper-medium-expert dataset into 3 groups, which consist of trajectories whose returns are the bottom 0% to 33%, 33% to 67%, and 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | labelled~High baseline (labeled data only) unlabelled~Low oracle unlabelled~Med oracle unlabelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | labelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | labelled~Low 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | labelled~High baseline (labeled data only) unlabelled~Low oracle unlabelled~Med oracle unlabelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | labelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | labelled~Low (a) We fix the labelled data quality and vary the unlabelled data quality. When the labelled data quality is low or moderate, SS-ORL can significantly improve the performance upon the baselines by utilizing high quality unlabelled data. 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | unlabelled~High labelled~Low oracle labelled~Med oracle labelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | unlabelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | unlabelled~Low 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | unlabelled~High labelled~Low oracle labelled~Med oracle labelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | unlabelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | unlabelled~Low (b) We fix the unlabelled data quality and vary the labelled data quality. The performance of SS-ORL improves as the labelled data quality increases. Figure 4.2: The return (average and standard deviation) of SS-DT and SS-CQL agents trained on the hopper-medium-expert dataset, when the qualities of the labelled and unlabelled data vary. Both the sizes of the labelled and unlabelled data are 10% of the offline dataset size. 67% to 100%, respectively. We refer to them Low, Medium, and High quality groups. In particular, the High group contains trajectories generated by the expert agents (Fu et al., 2020). As before, we report the performance of DT and CQL agents trained on the labelled data only as the baselines. We also report the results under the oracle mode, where we fill in the unlabelled trajectories with the true actions, and combine them with the labelled trajectories to train offline RL agents. We first report the performance of SS-DT when the labelled data is sampled from the High group, and the unlabelled data are sampled from Low, Med, and High groups, respectively. Both the size of the labelled and unlabelled trajectories are 10% of the total offline dataset size. The top left panel of Figure 4.2a plots the results. Clearly, when the labelled data quality is high, training on the labelled data only is sufficient to achieve the expert performance, and adding unlabelled data does not bring extra benefits. We repeat the same experiment when the labelled data is sampled from Medium and Low, see the top middle and top right panels of Figure 4.2a. 
As before, we report the performance of DT and CQL agents trained on the labelled data only as the baselines. We also report the results under the oracle mode, where we fill in the unlabelled trajectories with the true actions, and combine them with the labelled trajectories to train offline RL agents. We first report the performance of SS-DT when the labelled data is sampled from the High group, and the unlabelled data are sampled from the Low, Med, and High groups, respectively. Both the sizes of the labelled and unlabelled trajectories are 10% of the total offline dataset size. The top left panel of Figure 4.2a plots the results. Clearly, when the labelled data quality is high, training on the labelled data only is sufficient to achieve the expert performance, and adding unlabelled data does not bring extra benefits. We repeat the same experiment when the labelled data is sampled from Medium and Low, see the top middle and top right panels of Figure 4.2a. For those cases, adding unlabelled data of higher or the same quality improves the performance, whereas lower quality unlabelled data is not significantly helpful. (When the labelled and unlabelled data are sampled from the same quality group, we are simply adding more data from the same distribution.) The performance of SS-CQL follows the same trends, see the bottom panels of Figure 4.2a. To summarize, the experiments provide strong evidence that when the labelled data is of low or moderate quality, SS-ORL is capable of exploiting the high quality unlabelled data and remarkably boosts the performance compared with the baselines. The resulting performance is close to that of the oracle agent, and is often optimal (at least 1) or near-optimal (close to 1). Quality of Labelled Data Similarly, we fix the unlabelled data quality and vary the quality of the labelled data. Figure 4.2b shows the results. For both SS-DT and SS-CQL, increasing the labelled data quality raises the performance for all the cases. Size of Labelled Data We train SS-ORL agents where we fix the number of unlabelled trajectories to be 10% of the total number of offline trajectories, and vary the number of labelled trajectories as 10%, 25%, and 50% of the total size. Similar to the above experiments, we consider four data quality setups, where the labelled and unlabelled trajectories are each sampled from either the bottom half (denoted by L) or the top half (denoted by T) of the trajectories. We consider both the hopper-medium-expert and walker-medium-expert datasets. To take account of different environments and data setups, we report the 95% stratified bootstrap confidence intervals (CIs) of the interquartile mean (IQM; the mean of the middle 50% of the sorted values) of the return for all these cases and training instances (Agarwal et al., 2021). We use 50000 bootstrap replications to generate the CIs. Compared with some other statistics like the mean or the median, the IQM is robust to outliers and is also a good representative of the overall performance. Stratified bootstrapping is a handy tool for obtaining CIs with decent coverage rates, even if one only has a small number of training instances per setup. We refer the readers to Agarwal et al. (2021) for a complete introduction. Figure 4.3a plots the confidence intervals when we consider all four quality setups, or when the labelled data quality is low or high, respectively. We found that SS-DT and SS-CQL respond slightly differently. Overall, SS-CQL is almost immune to changes in the size of the labelled data, as is SS-DT when the labelled data quality is high. However, SS-DT's performance moderately increases as the labelled size grows when the labelled data quality is low. More detailed results, including the plots of the evaluation curves and the CIs of the mean and the median, can be found in Appendix D. Size of Unlabelled Data As before, we vary the unlabelled data size with the labelled data size fixed, and report the 95% stratified bootstrap CIs in Figure 4.3b. Similarly, SS-CQL is almost insensitive whereas SS-DT is sensitive when the labelled data quality is low.
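A sketch of the aggregate statistics used here, in the spirit of Agarwal et al. (2021). This is a plain NumPy illustration under assumed data shapes, not the authors' evaluation code:

```python
# Sketch: interquartile mean (IQM) and a stratified bootstrap confidence interval.
import numpy as np

def iqm(x):
    """Mean of the middle 50% of the sorted values."""
    x = np.sort(np.asarray(x).ravel())
    n = len(x)
    return x[n // 4 : n - n // 4].mean()

def stratified_bootstrap_ci(scores_per_setup, stat=iqm, reps=50_000, alpha=0.05, seed=0):
    """scores_per_setup: list of 1-D arrays, one per (environment, data setup) stratum."""
    rng = np.random.default_rng(seed)
    stats = np.empty(reps)
    for b in range(reps):
        # resample training instances independently within each stratum
        resampled = [rng.choice(s, size=len(s), replace=True) for s in scores_per_setup]
        stats[b] = stat(np.concatenate(resampled))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```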
Value-based vs. Conditional BC As discussed above, SS-CQL is insensitive to the data size changes, whereas SS-DT is more responsive when the labelled data quality is low. Regarding the data quality, we are mostly interested in the scenarios where the labelled data quality is low or moderate, see the red and blue curves of Figure 4.2b. In that regime, if the unlabelled data quality is high (the left column), the distribution shift from the labelled data to the unlabelled data is challenging to handle, and the proxy-actions predicted by the IDM will be less accurate. There, the absolute performance of SS-CQL is slightly better than that of SS-DT, with smaller performance gaps compared to the oracle agents. If the unlabelled data quality is moderate or low (the middle and right columns), SS-CQL clearly outperforms SS-DT. Both observations suggest that SS-CQL is less sensitive to the action quality. 4.3 DESIGN CHOICES Transition Size k of the IDM We train SS-TD3BC, SS-CQL and SS-DT agents on the hopper-medium-expert dataset, under the coupled setup as in Section 4.1. We consider 6 different values of q: 10, 30, 50, 70, 90 and 100. For all the 18 setups (3 SS-ORL agents and 6 different q values), we train the agents using the multi-transition IDM with k = 0, 1, and 2, respectively. As in the previous section, Figure E.1 plots the 95% stratified bootstrap CIs for the IQM return across all the setups and training instances, which are generated by 50000 bootstrap replications. The results favor the choice k = 1. See Appendix E for more experiment details, such as the average return for each setup and the CIs for the return mean. Data Augmentation Strategy As discussed in Section 3.2, we consider variants of SS-TD3BC and SS-DT using uncertainty based data augmentation. Following Lakshminarayanan et al. (2017), we train an ensemble of 3 independent IDMs on T_labelled. We generate the proxy actions for the unlabelled trajectories using the combined model, and also estimate the predictive uncertainties. We then only add proxy-labelled data whose uncertainties are in the bottom p% to the final RL training dataset. Specifically, we test 4 values of p: 25, 50, 75 and 95. We compare the results with standard SS-TD3BC and SS-DT where all the proxy-labelled data are added into the final RL training dataset. Again, we consider both the hopper-medium-expert and walker-medium-expert datasets and use the coupled setup with 4 different q values: 10, 30, 68, 75 for hopper-medium-expert and 10, 30, 54, 60 for walker-medium-expert. Figure 4.5 plots the 95% stratified bootstrap CIs of the IQM return across all the setups. Adding all the proxy-labelled data without filtering outperforms uncertainty based data augmentation; see Appendix F for more details. Intuitively, to make use of the unlabelled data, most SSL pipelines would assume P_labelled and P_unlabelled are similar or even the same (Chapelle et al., 2006). This is not the case in our setup, where P_labelled only generates low return trajectories, and all the high return ones come from P_unlabelled. It remains an open question if self-training with the uncertainty based selection rule can help us generalize to high return trajectories. 5 DISCUSSION We proposed a novel setup for offline RL where the trajectories do not have all of the action information, for which we have introduced a semi-supervised meta-algorithmic pipeline. Our experiments identified key properties that enable the agents to learn from unlabelled data and show that near-optimal learning can be done with only 10% of the actions labelled for low-to-moderate quality trajectories. It would be interesting to study other heterogeneous data setups for offline RL in the future, including reward-free or pure state-only settings.
This work is a step towards a broader goal of empowering robotic systems with the ability to extract meaningful knowledge from copious and ever-growing amounts of unlabelled demonstration data. Beyond simply not having the action labels, many trajectories may be from different robotic systems or tasks and therefore are not directly transferable to the system and task at hand. As we continue building robotic systems to leverage these forms of auxiliary knowledge, we expect that weakly-supervised learning paradigms such as the one explored in this work will be useful. A EXPERIMENT DETAILS In this section, we provide more details about our experiments. For all the offline RL methods we consider, we use our own implementations adapted from the following codebases: DT https://github.com/facebookresearch/online-dt TD3BC https://github.com/sfujim/TD3_BC CQL https://github.com/scottemmons/youngs-cql We use the stochastic DT proposed by Zheng et al. (2022). For offline RL, its performance is similar to the deterministic DT (Chen et al., 2021). The policy parameter is optimized by the LAMB optimizer (You et al., 2019) with ε = 10^{-8}. The log-temperature parameter is optimized by the Adam optimizer (Kingma & Ba, 2014). The architecture and other hyperparameters are listed in Table A.1. For TD3BC, we optimize both the critic and actor parameters by the Adam optimizer. The complete hyperparameters are listed in Table A.2. For CQL, we also use the Adam optimizer to optimize the critic, actor and log-temperature parameters. The architecture of the critic and actor networks and the other hyperparameters are listed in Table A.3. We use batch size 256 and context length 20 for DT, where each batch contains 5120 states. Correspondingly, we use batch size 5120 for CQL and TD3BC. B THE RETURN DISTRIBUTIONS OF THE D4RL DATASETS C ADDITIONAL EXPERIMENTS UNDER THE COUPLED SETUP We conduct experiments on the medium and medium-replay datasets of the D4RL benchmark, using the same setup as in Section 4.1. Figures C.1 and C.2 report the results. The general trend is the same as that in Figure 4.1. We note that the results on the halfcheetah-medium dataset are less informative than the others. This is because the data distribution of halfcheetah-medium is very concentrated, similar to a Gaussian distribution with small variance, see Figure B.1. In such a case, varying the value of q does not drastically change the labelled data distribution. One may notice that for the hopper-medium-replay and walker-medium-replay datasets, SS-ORL does not catch up with the oracle as quickly as on the other datasets as q increases. Our intuition is that the return distributions of these two datasets concentrate on extremely low values, as shown in Figure B.1. In our experiments, the labelled trajectories for those two datasets have average return smaller than 0.1 even when q = 70. In contrast, the return distributions of the other datasets concentrate on larger values. For the halfcheetah-medium-replay and all the medium and medium-expert datasets, increasing the value of q will greatly change the returns of the labelled trajectories, see Table C.1. To demonstrate the performance of SS-ORL on a dataset with a wider return distribution, we consider a subsampled dataset for the walker environment generated as follows. 1. Combine the walker-medium-replay and walker-medium datasets. 2. Let Rmin and Rmax denote the minimum and maximum return in the dataset.
We divide the trajectories into 40 bins, where the maximum returns within each bin are linearly spaced between Rmin and Rmax. Let n_i be the number of trajectories in bin i. 3. We randomly sample 1000 trajectories. To sample a trajectory, we first sample a bin i ∈ {1, . . . , 40} with weights proportional to 1/n_i, then sample a trajectory uniformly at random from the sampled bin. Figure C.3 plots the return distribution of the subsampled dataset. It is wide and has 3 modes. We run the same experiments as before on this subsampled dataset, and Figure C.4 plots the results. We can see that the SS-ORL methods can catch up with the oracle agents even when q is small. D INFLUENCES OF THE LABELLED AND UNLABELLED DATA SIZE Figure D.1 plots the average return of SS-DT and SS-CQL when we fix the number of unlabelled trajectories and vary the number of labelled trajectories. We found that there is a bad seed for SS-CQL when both the labelled and unlabelled trajectories are sampled from L and the size of the labelled data is 10%, so that the result there (the bottom right panel) exhibits large variance. Correspondingly, Figure D.2 plots the 95% stratified bootstrap CIs for the median, mean, interquartile mean, and the optimality gap of the return of SS-DT and SS-CQL. Similarly, Figures D.3 and D.4 plot the results when we vary the number of unlabelled trajectories, while the number of labelled ones is fixed. E TRANSITION SIZE k FOR THE MULTI-TRANSITION INVERSE DYNAMIC MODEL E.1 THEORY Let β denote the behaviour policy. When k = 0, the IDM is modeling
P(a_t | s_{t+1}, s_t) = P(a_t, s_{t+1} | s_t) / P(s_{t+1} | s_t) = P(s_{t+1} | a_t, s_t) β(a_t | s_t) / P(s_{t+1} | s_t). (2)
For the cases where k > 0, w.l.o.g., we assume k = 1. The IDM is modeling
P(a_t | s_{t+2}, s_{t+1}, s_t, s_{t-1}) = P(a_t, s_{t+2}, . . . , s_{t-1}) / P(s_{t+2}, . . . , s_{t-1}) = P(s_{t+1} | a_t, s_t, s_{t+2}, s_{t-1}) P(a_t | s_{t+2}, s_t, s_{t-1}) / P(s_{t+1} | s_{t+2}, s_t, s_{t-1}) = P(s_{t+1} | a_t, s_t) β(a_t | s_t, s_{t-1}) / P(s_{t+1} | s_t, s_{t-1}), (3)
where in the last line we used the fact that the policy β can only generate actions based on previous states, the Markovian transition property P(s_{t+1} | a_t, s_t, s_{t+2}, s_{t-1}) = P(s_{t+1} | a_t, s_t), and also the induced property P(s_{t+1} | s_t, s_{t-1}) = P(s_{t+1} | s_{t+2}, s_t, s_{t-1}). If the behaviour policy β is Markovian, we have that β(a_t | s_t) = β(a_t | s_t, s_{t-1}), and as a consequence
P(a_t | s_{t+1}, s_t) = P(a_t | s_{t+2}, s_{t+1}, s_t, s_{t-1}) · C(s_{t+1}, s_t, s_{t-1}), (4)
where C = P(s_{t+1} | s_t, s_{t-1}) / P(s_{t+1} | s_t) is independent of the action a_t. Therefore, the probabilities that the IDMs with k = 0 and k = 1 are modeling are equivalent up to a state-only dependent scaling. The cases where k ≥ 2 can be derived analogously. In practice, the offline dataset might contain trajectories generated by multiple behaviour policies, and it is unknown if any of them is Markovian. Therefore, choosing k > 0 allows us to take into account past information before timestep t. For the future, we do not need anything beyond s_{t+1} for an MDP, but our formulation is general purpose to account for POMDPs as well, where both past and future partial observations might be needed to infer the action a_t. To summarize, choosing k > 0 is more general and has been shown to be favorable in the empirical experiments presented in the next section. E.2 EMPIRICAL EXPERIMENTS We train SS-TD3BC, SS-CQL and SS-DT with 3 IDM transition sizes, k = 0, 1 and 2, on the hopper-medium-expert dataset. We use the coupled setup described in Section 4.1, with 6 different values of q. Table E.1 reports the performance of those agents for each case.
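To make the multi-transition inputs compared in this appendix concrete, the following sketch assembles the clamped state windows s_{t−k:t+k+1} from a D4RL-style trajectory. The key names and array layout are assumptions, not the authors' code:

```python
# Sketch: build the 2k+2 state window around timestep t, clamped at trajectory boundaries.
import numpy as np

def state_window(states, t, k=1):
    """states: array of shape (T, state_dim); returns a flat vector of (2k+2)*state_dim."""
    T = len(states)
    idx = np.clip(np.arange(t - k, t + k + 2), 0, T - 1)   # indices of s_{t-k}, ..., s_{t+k+1}
    return states[idx].reshape(-1)

def idm_dataset(trajectory, k=1):
    """Pairs every action a_t with its clamped state window (assumed D4RL-style dict keys)."""
    states, actions = trajectory["observations"], trajectory["actions"]
    windows = np.stack([state_window(states, t, k) for t in range(len(actions))])
    return windows, actions
```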
In addition to the interquartile mean considered in Section 4.3, we also consider 3 other statistics of the return across all the setups: the mean, the median and the optimality gap. Figure E.1 plots the 95% stratified bootstrap confidence intervals for all four statistics, generated by 50000 bootstrap replications.

Table E.1: The return (average and standard deviation) of SS-ORL agents trained on the hopper-medium-expert dataset under the coupled setup, where the IDM is trained with 3 different values of k: 0, 1 and 2. Results aggregated over 5 training instances.

method | k | q = 10 | q = 30 | q = 50 | q = 70 | q = 90 | q = 100 | Average
SS-TD3BC | 0 | 0.81 ± 0.12 | 0.89 ± 0.05 | 0.93 ± 0.05 | 1.05 ± 0.04 | 1.03 ± 0.06 | 1.01 ± 0.04 | 0.95
SS-TD3BC | 1 | 0.93 ± 0.07 | 1.01 ± 0.05 | 0.86 ± 0.06 | 0.98 ± 0.06 | 1.03 ± 0.06 | 1.03 ± 0.04 | 0.98
SS-TD3BC | 2 | 0.80 ± 0.12 | 0.91 ± 0.03 | 0.93 ± 0.05 | 0.95 ± 0.08 | 1.01 ± 0.06 | 1.04 ± 0.02 | 0.94
SS-CQL | 0 | 0.69 ± 0.17 | 0.69 ± 0.15 | 0.88 ± 0.15 | 1.04 ± 0.04 | 1.11 ± 0.01 | 1.10 ± 0.03 | 0.92
SS-CQL | 1 | 0.69 ± 0.15 | 0.90 ± 0.05 | 0.89 ± 0.13 | 1.03 ± 0.07 | 1.07 ± 0.08 | 1.11 ± 0.01 | 0.95
SS-CQL | 2 | 0.90 ± 0.11 | 0.90 ± 0.09 | 0.86 ± 0.11 | 1.08 ± 0.05 | 1.10 ± 0.01 | 1.11 ± 0.01 | 0.99
SS-DT | 0 | 0.72 ± 0.17 | 0.75 ± 0.20 | 0.90 ± 0.14 | 1.06 ± 0.04 | 1.11 ± 0.00 | 1.11 ± 0.01 | 0.94
SS-DT | 1 | 0.69 ± 0.20 | 0.94 ± 0.07 | 0.99 ± 0.05 | 1.05 ± 0.04 | 1.11 ± 0.00 | 1.11 ± 0.00 | 0.98
SS-DT | 2 | 0.78 ± 0.07 | 0.89 ± 0.08 | 0.85 ± 0.15 | 1.05 ± 0.02 | 1.11 ± 0.00 | 1.11 ± 0.00 | 0.97

Figure E.1: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by SS-ORL agents, with different values of k.

F DATA AUGMENTATION STRATEGY Following Lakshminarayanan et al. (2017), we train an ensemble of 3 independent IDMs on T_labelled. Each individual IDM models the action as a diagonal Gaussian distribution (see Equation (1)) N(μ_i, Σ_i), i = 1, 2, 3. The ensemble models the action using an equally weighted Gaussian mixture of these three distributions. We predict the action by the mixture's mean and the uncertainty by the mixture's variance; both can be written in closed form. We conduct experiments for SS-DT and SS-TD3BC, where we only add proxy-labelled data whose uncertainties are in the bottom p% to the final RL training dataset. Specifically, we test 4 values of p: 25, 50, 75 and 95. We compare the results with standard SS-DT and SS-TD3BC where all the proxy-labelled data are added into the final RL training dataset. We consider both the hopper-medium-expert and walker-medium-expert datasets. We use the coupled setup described in Section 4.1, where we consider 4 different values of q: 10, 30, 68, 75 for hopper-medium-expert and 10, 30, 54, 60 for walker-medium-expert. Table F.1 reports the average return and standard deviation obtained by SS-DT and SS-TD3BC under different data augmentation strategies, when trained on the hopper-medium-expert dataset. The results on the walker-medium-expert dataset are reported in Table F.2. It is easy to see that uncertainty based data augmentation degrades the performance compared with adding all the proxy-labelled data without filtering. Overall, the latter performs consistently well across different setups. Figure F.1 plots the 95% stratified bootstrap CIs for these experiments.
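For completeness, a sketch of the closed-form mixture prediction and the bottom-p% filtering described above. The array shapes are assumptions, not the authors' implementation; the moment formulas are the standard identities for an equally weighted Gaussian mixture:

```python
# Sketch: ensemble-IDM prediction and uncertainty-based filtering.
#   mu_mix  = (1/3) * sum_i mu_i
#   var_mix = (1/3) * sum_i (var_i + mu_i^2) - mu_mix^2     (per action dimension)
import numpy as np

def mixture_prediction(means, variances):
    """means, variances: arrays of shape (n_models, n_samples, action_dim)."""
    mu_mix = means.mean(axis=0)
    var_mix = (variances + means ** 2).mean(axis=0) - mu_mix ** 2
    return mu_mix, var_mix

def keep_low_uncertainty(var_mix, p=75.0):
    uncertainty = var_mix.sum(axis=-1)                     # one scalar per sample
    return uncertainty <= np.percentile(uncertainty, p)    # boolean keep-mask
```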
Table F.1: The return (average and standard deviation) of SS-ORL agents trained on the hopper-medium-expert dataset under the coupled setup, using different data augmentation strategies. Results aggregated over 5 training instances.

hopper-medium-expert | q = 10 | q = 30 | q = 68 | q = 75 | Average
SS-TD3BC, below 25% | 0.60 ± 0.03 | 0.62 ± 0.02 | 0.71 ± 0.04 | 0.86 ± 0.02 | 0.70
SS-TD3BC, below 50% | 0.62 ± 0.02 | 0.66 ± 0.06 | 0.76 ± 0.04 | 0.86 ± 0.09 | 0.72
SS-TD3BC, below 75% | 0.70 ± 0.06 | 0.74 ± 0.07 | 0.84 ± 0.06 | 0.94 ± 0.08 | 0.80
SS-TD3BC, below 95% | 0.82 ± 0.05 | 0.82 ± 0.09 | 0.90 ± 0.09 | 0.96 ± 0.06 | 0.88
SS-TD3BC, no filtering | 0.80 ± 0.07 | 0.92 ± 0.04 | 0.91 ± 0.06 | 0.94 ± 0.10 | 0.89
SS-DT, below 25% | 0.61 ± 0.12 | 0.62 ± 0.05 | 0.70 ± 0.01 | 0.95 ± 0.13 | 0.72
SS-DT, below 50% | 0.60 ± 0.14 | 0.65 ± 0.04 | 0.69 ± 0.02 | 1.04 ± 0.07 | 0.75
SS-DT, below 75% | 0.42 ± 0.04 | 0.63 ± 0.15 | 0.75 ± 0.06 | 1.04 ± 0.04 | 0.71
SS-DT, below 95% | 0.51 ± 0.16 | 0.82 ± 0.12 | 0.85 ± 0.05 | 1.06 ± 0.03 | 0.81
SS-DT, no filtering | 0.47 ± 0.14 | 0.71 ± 0.14 | 0.83 ± 0.07 | 1.06 ± 0.03 | 0.77

Table F.2: The return (average and standard deviation) of SS-ORL agents trained on the walker-medium-expert dataset under the coupled setup, using different data augmentation strategies. Results aggregated over 5 training instances.

walker-medium-expert | q = 10 | q = 30 | q = 54 | q = 60 | Average
SS-TD3BC, below 25% | 0.82 ± 0.02 | 0.82 ± 0.01 | 0.80 ± 0.06 | 1.04 ± 0.06 | 0.87
SS-TD3BC, below 50% | 0.83 ± 0.03 | 0.84 ± 0.02 | 0.84 ± 0.01 | 1.02 ± 0.09 | 0.88
SS-TD3BC, below 75% | 0.74 ± 0.11 | 0.86 ± 0.01 | 0.85 ± 0.01 | 1.04 ± 0.07 | 0.87
SS-TD3BC, below 95% | 0.86 ± 0.04 | 0.88 ± 0.01 | 0.87 ± 0.01 | 1.10 ± 0.01 | 0.93
SS-TD3BC, no filtering | 0.86 ± 0.05 | 0.86 ± 0.03 | 0.87 ± 0.01 | 1.10 ± 0.01 | 0.92
SS-DT, below 25% | 0.69 ± 0.04 | 0.74 ± 0.02 | 0.70 ± 0.03 | 0.84 ± 0.17 | 0.74
SS-DT, below 50% | 0.67 ± 0.03 | 0.72 ± 0.02 | 0.73 ± 0.03 | 0.95 ± 0.15 | 0.77
SS-DT, below 75% | 0.71 ± 0.03 | 0.60 ± 0.13 | 0.73 ± 0.03 | 0.95 ± 0.14 | 0.74
SS-DT, below 95% | 0.73 ± 0.08 | 0.52 ± 0.11 | 0.58 ± 0.15 | 0.98 ± 0.10 | 0.70
SS-DT, no filtering | 0.79 ± 0.05 | 0.55 ± 0.13 | 0.69 ± 0.08 | 0.91 ± 0.15 | 0.74

Figure F.1: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by SS-ORL agents, when combined with different data augmentation strategies.

G COMPARISON WITH GATO UNDER THE COUPLED SETUP Inspired by the multi-task and multi-modal generalist agent proposed by Reed et al. (2022), we consider a GATO-type variant of DT that can incorporate the unlabelled data into policy training. GATO is trained on the labelled and unlabelled data simultaneously. The implementation details are:
• We form the same input sequence as DT, where we fill in zeros for the missing actions of unlabelled trajectories.
• For the labelled trajectories, GATO predicts the actions, states and rewards; for the unlabelled ones, GATO only predicts the states and rewards.
• We use the stochastic policy as in the online decision transformer (Zheng et al., 2022) to predict the actions.
• We use deterministic predictors for the states and rewards, which are single linear layers built on top of the Transformer outputs.
Let g_t = Σ_{t'=t}^{|τ|} r_{t'} be the return-to-go of a trajectory τ at timestep t. Let H_θ^{P_labelled} denote the policy entropy induced on the labelled data distribution. For simplicity, we assume the context length of GATO is 1. We refer the readers to Zheng et al. (2022) for the formulation with a general context length and more details. Equation (5) shows the training objective of GATO.
min_θ E_{(a_t, s_t, r_t, g_t) ∼ P_labelled}[ −log π(a_t | s_t, g_t, θ) + λ_s ‖s_t − ŝ_t(θ)‖_2^2 + λ_r ‖r_t − r̂_t(θ)‖_2^2 ] + E_{(s_t, r_t, g_t) ∼ P_unlabelled}[ λ_s ‖s_t − ŝ_t(θ)‖_2^2 + λ_r ‖r_t − r̂_t(θ)‖_2^2 ], subject to H_θ^{P_labelled}[a | s, g] ≥ ν. (5)

Figure G.1: The performance of SS-ORL and GATO on the hopper-medium-expert dataset. For GATO, we use λ_s = 0.01 and λ_r = 1.0. (L) SS-DT significantly outperforms GATO, where GATO only slightly improves upon the baseline. (R) SS-CQL, SS-DT and SS-TD3BC all outperform GATO.

Figure G.2: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by GATO agents, with different combinations of regularization parameters.

The constants ν, λ_s and λ_r are prespecified hyper-parameters, where ν is the target policy entropy, and λ_s and λ_r are regularization parameters used to balance the losses for actions, states, and rewards. We use ν = −dim(A) as for DT (see Appendix A). To choose the regularization parameters λ_s and λ_r for GATO, we test 16 combinations where λ_s and λ_r each take one of the values 1.0, 0.1, 0.01 and 0.001. We run experiments as in Section 4.1 for q = 10, 30, 50, 70, 90, 100, and compute the confidence intervals for the aggregated results. Figure G.2 shows that λ_s = 0.01 and λ_r = 0.1 yield the best performance. Figure G.1 compares the performance of GATO (with λ_s = 0.01 and λ_r = 0.1) and the SS-ORL agents. It is clear that the SS-ORL agents outperform GATO. H PERFORMANCE GAP OF SS-ORL AGENTS For a chosen offline RL method, the relative performance gap between the corresponding SS-ORL and oracle agents illustrates how sensitive this offline RL method is to missing actions: (Oracle-ORL − SS-ORL) / Oracle-ORL. (6) We consider the coupled setup as in Section 4.1. For each of the 9 datasets (hopper, walker, halfcheetah with the medium-expert, medium, and medium-replay datasets), we compute the relative performance gap for SS-CQL, SS-DT and SS-TD3BC, trained with 6 different values of q: 10, 30, 50, 70, 90 and 100. Table H.1 reports the aggregate results over 5 seeds. On average, SS-CQL and SS-TD3BC have smaller relative performance gaps, suggesting that CQL and TD3BC are less sensitive to the missing actions.

Table H.1: The relative performance gap of SS-CQL, SS-TD3BC, and SS-DT.

method | hopper-me | walker2d-me | hc-me | hopper-m | walker2d-m | hc-m | hopper-mr | walker2d-mr | hc-mr | Average
SS-CQL | 0.147 | 0.114 | 0.062 | 0.078 | 0.077 | 0.003 | 0.388 | 0.379 | 0.106 | 0.150
SS-TD3BC | 0.046 | 0.094 | 0.104 | 0 | 0.065 | 0.001 | 0.327 | 0.412 | 0.057 | 0.123
SS-DT | 0.119 | 0.167 | 0.0002 | 0.016 | 0.039 | 0.003 | 0.399 | 0.554 | 0.109 | 0.156
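As a small worked illustration of Equation (6), the gap is simply the normalized difference between the oracle and SS-ORL returns; the numbers below are placeholders, not results from the paper:

```python
# Sketch: relative performance gap of Equation (6) with placeholder values.
def relative_gap(oracle_return, ss_return):
    return (oracle_return - ss_return) / oracle_return

print(round(relative_gap(oracle_return=1.00, ss_return=0.85), 3))  # 0.15
```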
1. What is the focus and contribution of the paper regarding semi-supervised learning in offline RL? 2. What are the strengths and weaknesses of the proposed method, particularly concerning its technical novelty and design choices? 3. Do you have any concerns about the extension of IDM in the fully unsupervised case? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any questions regarding the paper that the reviewer did not address explicitly?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper introduces a method for semi-supervised learning in the offline RL setting where the unlabelled part of the dataset consists of action-free state trajectories and the labelled part consists of the full trajectories. They use an inverse dynamics model to learn actions that give rise to state transitions and use the learned model to inject labels for unlabelled data and perform offline learning using classic model-free offline RL algorithms such as CQL. Strengths And Weaknesses Weaknesses: My main concern is with the technical contribution of this paper. Considering the inverse dynamics model as the crucial technical novelty, I am not sure the method given in Eq. 1 is the best, and at the very least, some further evaluations and arguments for its design choice are needed. For one, the covariate matrix with k>0 would seem to break the Markov property of the RL setting. Furthermore, it is not justified why this choice was made beyond the empirical validation, which is not enough either in my opinion. I am not convinced that this extension of IDM significantly contributes to the overall goal of taking advantage of semi-supervision, as compared to the fully unsupervised case. In particular, it would have been interesting to see whether semi-supervision in this regime would help when the data split is within trajectories and not between trajectories. Along the lines of the above, I think an analysis of why only a single labelling round was used instead of the conventional self-training paradigm of retraining per round could have been included to make the paper stronger. In particular, I assume it could be possible to provide a more detailed analysis of how well the inverse dynamics model learns when compared to the ground truth data on the log-likelihood of the multivariate Gaussians used to estimate them, since you have access to those data labels. As it stands, this paper's main contributions are a careful study of the different considerations one should take when doing semi-supervised offline RL. It has a good experimental validation of how performant value-based and BC methods are. However, it does not make enough of a technical contribution to take advantage of the specific literature in the semi-supervised learning setting, nor does it justify its design choices well. Clarity, Quality, Novelty And Reproducibility This paper is clearly written, with some minor typos throughout. Overall, my main concern is with the novelty and quality of the proposed algorithm. The results should be reproducible.
ICLR
Title Semi-Supervised Offline Reinforcement Learning with Action-Free Trajectories Abstract Natural agents can effectively learn from multiple data sources that differ in size, quality, and types of measurements. We study this heterogeneity in the context of offline reinforcement learning (RL) by introducing a new, practically motivated semi-supervised setting. Here, an agent has access to two sets of trajectories: labelled trajectories containing state, action, reward triplets at every timestep, along with unlabelled trajectories that contain only state and reward information. For this setting, we develop a simple meta-algorithmic pipeline that learns an inversedynamics model on the labelled data to obtain proxy-labels for the unlabelled data, followed by the use of any offline RL algorithm on the true and proxy-labelled trajectories. Empirically, we find this simple pipeline to be highly successful — on several D4RL benchmarks (Fu et al., 2020), certain offline RL algorithms can match the performance of variants trained on a fully labelled dataset even when we label only 10% trajectories from the low return regime. Finally, we perform a large-scale controlled empirical study investigating the interplay of data-centric properties of the labelled and unlabelled datasets, with algorithmic design choices (e.g., inverse dynamics, offline RL algorithm) to identify general trends and best practices for training RL agents on semi-supervised offline datasets. 1 INTRODUCTION One of the key challenges with deploying reinforcement learning (RL) agents is its prohibitive sample complexity for real-world applications. Offline reinforcement learning (RL) can significantly reduce the sample complexity by exploiting logged demonstrations from auxiliary data sources (Levine et al., 2020). However, contrary to curated benchmarks in use today, the nature of offline demonstrations in the real world can be highly varied. For example, the demonstrations could be misaligned due to frequency mismatch (Burns et al., 2022), use of different sensors, actuators, or dynamics (Reed et al., 2022; Lee et al., 2022), or lacking partial state (Ghosh et al., 2022; Rafailov et al., 2021; Mazoure et al., 2021), or reward information (Yu et al., 2022). Successful offline RL in the real world requires embracing these heterogeneous aspects for maximal data efficiency, similar to learning in humans. In this work, we propose a new semi-supervised setup for offline RL. Standard offline RL assumes trajectories to be sequences of observations, actions, and rewards. However, many data sources, such as videos or third-person demonstrations lack direct access to actions. Hence, we propose a semi-supervised setup, where an agent’s offline dataset also consists of action-unlabelled trajectories in addition to the aforementioned (action-labelled) trajectories. Standard offline RL algorithms, such as Conservative Q Learning (CQL; Kumar et al. (2020)) or Decision Transformer (DT; Chen et al. (2021)), cannot directly operate on such unlabelled trajectories. At the same time, naively throwing out the unlabelled trajectories can be wasteful, especially when they have high returns. Our goal in this work is to enable compute and data efficient learning with additional action-unlabelled trajectory logs. Unlike traditional semi-supervised learning, our setup has a few key differences. First, we do not assume that the distribution of the labelled and unlabelled trajectories are necessarily identical. 
In realistic scenarios, we expect these to be different with unlabelled data having higher returns than labelled data e.g., videos of a human professional are easier to obtain than installing actuators for continuous control tasks. We replicate such varied data quality setups in some of our experiments; Figure 1.1 shows an illustration of the difference in returns between the labelled and unlabelled dataset splits for the hopper-medium-expert D4RL dataset. Second, our end goal goes beyond labeling the actions in the unlabelled trajectories, but rather we intend to use the unlabelled data for learning a downstream policy that is better than the behavioral policies used for generating the offline datasets. Hence, there are two kinds of generalization challenges: generalizing from the labelled to the unlabelled data distribution and then going beyond the offline data distributions to get closer to the expert distribution. Regular offline RL is concerned only with the latter. Finally, we are mainly interested in the case where a significant majority of the trajectories in the offline dataset are unlabelled. One motivating example for this setup is learning from videos or third-person demos. There are tremendous amounts of internet videos that can be potentially used to train RL agents, yet they are without action labels and are of varying quality. Our paper seeks to answer the following questions: 1. How can we utilize the unlabelled data for improving the performance of offline RL algorithms? 2. How does our performance vary as a function of data-centric properties, such as the size and return distributions of labelled and unlabelled datasets? 3. How do offline RL algorithms compare in this setup? To answer these questions, we propose a meta-algorithmic pipeline to train policies in the semisupervised setup described above. We call our pipeline Semi-Supervised Offline Reinforcement Learning (SS-ORL). SS-ORL contains three simple and scalable steps: (1) train a multi-transition inverse dynamics model on labelled data, which predicts actions based on transition sequences, (2) fill in proxy-actions for unlabelled data, and finally (3) train an offline RL agent on the combined dataset. Empirically, we instantiate SS-ORL with CQL (Kumar et al., 2020), DT (Chen et al., 2021), and TD3BC (Fujimoto & Gu, 2021) as the underlying offline RL algorithms respectively, and conduct experiments on the D4RL datasets (Fu et al., 2020). We highlight a few predominant trends from our experimental findings below: 1. Given low-quality labelled data, SS-ORL agents can exploit unlabelled data that contains highquality trajectories and thus improve performance. The absolute performance of SS-ORL is close to or even matches that of the oracle agents, which have access to complete action information. 2. When the labelled data quality is high, utilizing unlabelled data does not bring significant benefits. 3. The choice of value vs. behavior cloning based methods can significantly affect performance in the semi-supervised setup. In our experiments, CQL and TD3BC are less sensitive to the missing actions compared to DT. They enjoy better absolute performance when the labelled data is of low quality, and their performance gap relative to the oracle agent is also smaller. See Appendix H for more details. 2 RELATED WORK Offline RL The goal of offline RL is to learn effective policies from fixed datasets which are generated by unknown behavior policies. 
There are two main categories of model-free offline RL methods: value-based methods and behavior cloning (BC) based methods. Value-based methods attempt to learn the value functions based on temporal difference (TD) updates. There is a line of work that aims to port existing off-policy value-based online RL methods to the offline setting, with various types of additional regularization components that encourage the learned policy to stay close to the behavior policy. Several representive techniques include specifically tailored policy parameterizations (Fujimoto et al., 2019; Ghasemipour et al., 2021), divergence-based regularization on the learned policy (Wu et al., 2019; Jaques et al., 2019; Kumar et al., 2019), and regularized value function estimation (Nachum et al., 2019; Kumar et al., 2020; Kostrikov et al., 2021a; Fujimoto & Gu, 2021; Kostrikov et al., 2021b). Recently, a growing body of work has tried to formulate offline RL as a supervised learning problem (Chen et al., 2021; Janner et al., 2021; Emmons et al., 2021). Compared with the value-based methods, these methods enjoy several appealing properties including algorithmic simplicity and training stability. Generally speaking, these approaches can be viewed as conditional behavior cloning methods (Bain & Sammut, 1995), where the conditioning parameters are related information such as goals or rewards. Similar to value-based methods, these can be extended to the online setup as well (Zheng et al., 2022) and demonstrate excellent performance in hybrid setups involving both offline data and online interactions. Semi-supervised Learning Semi-supervised learning (SSL) is a sub-area of machine learning that studies approaches to train predictors from a small amount of labelled data combined with a large amount of unlabelled data. In supervised learning, predictors only learn from labelled data. However, labelled training examples often require human annotation efforts and are thus hard to obtain, whereas unlabelled data can be comparatively easy to collect. The research on semi-supervised learning spans several decades. One of the oldest SSL techniques, self-training, was originally proposed in the 1960s (Fralick, 1967). There, a predictor is first trained on the labelled data. Then, at each training round, according to certain selection criteria such as model uncertainty, a portion of the unlabelled data is annotated by the predictor and added into the training set for the next round. We refer the readers to Zhu (2005); Chapelle et al. (2006); Ouali et al. (2020); Van Engelen & Hoos (2020) for comprehensive literature surveys. Imitation Learning from Observations There have been several works in imitation learning (IL) which do not assume access to the full set of actions, such as BCO (Torabi et al., 2018a), MoBILE (Kidambi et al., 2021), GAIfO (Torabi et al., 2018b) or third-person IL approaches (Stadie et al., 2017; Sharma et al., 2019). The recent work of Baker et al. (2022) also considered a setup where a small number of labelled actions are available in addition to a large unlabelled dataset. A key difference between our work and these is that the IL setup typically assumes that all trajectories are generated by an expert, unlike our offline setup. Further, some of these methods even permit reward-free interactions with the environment which is not possible in the offline setup. 
Learning from Videos Closely related to IL from observations, several works (Schmeckpeper et al., 2020b;a) consider training agents with human video demonstrations, which are without action annotations. Distinct from our setup, in those works the offline observational data (videos) are from a different embodiment. Moreover, the agents can interact with the environment, and can even collect reward information sometimes. 3 SEMI-SUPERVISED OFFLINE REINFORCEMENT LEARNING Preliminaries We model our environment as a Markov decision process (MDP) (Bellman, 1957) denoted by ⟨S, A, p, P, R, γ⟩, where S is the state space, A is the action space, p(s_1) is the distribution of the initial state, P(s_{t+1} | s_t, a_t) is the transition probability distribution, R(s_t, a_t) is the deterministic reward function, and γ is the discount factor. At each timestep t, the agent observes a state s_t ∈ S and executes an action a_t ∈ A. In response, the environment moves the agent to the next state s_{t+1} ∼ P(· | s_t, a_t), and also returns the agent a reward r_t = R(s_t, a_t). 3.1 PROPOSED SETUP We assume the agent has access to a static offline dataset T_offline. The dataset consists of trajectories collected by certain unknown policies, which are not necessarily optimal. Let τ denote a trajectory and |τ| denote its length. We assume that all the trajectories in T_offline contain complete rewards and states. However, only a small subset of them contain action labels, while most of the trajectories are missing actions. We are interested in learning a policy by leveraging the offline dataset without interacting with the environment. This setup is analogous to semi-supervised learning, where actions serve the role of labels. Hence, we also refer to the complete trajectories as labelled data (denoted by T_labelled) and the action-free trajectories as unlabelled data (denoted by T_unlabelled). Further, we assume the labelled data are sampled from a distribution P_labelled and the unlabelled data are sampled from P_unlabelled. In general, the two distributions can be different. Practically, one case we are particularly interested in is when P_labelled generates low-to-moderate quality trajectories, whereas P_unlabelled generates trajectories of diverse qualities including ones with high returns. Our setup shares some similarities with state-only imitation learning (Ijspeert et al., 2002; Bentivegna et al., 2002; Torabi et al., 2019) in the use of action-unlabelled trajectories. However, there are also some key differences. In state-only IL, the unlabelled demonstrations are from the same distribution as the labelled demonstrations and correspond to a near-optimal expert policy. In our setting, both P_labelled and P_unlabelled can be different from each other and also from the expert policy.

Algorithm 1: Semi-supervised offline RL (SS-ORL)
1 Input: trajectories T_labelled and T_unlabelled, IDM transition size k, offline RL method ORL
  // train a stochastic multi-transition IDM using the labelled data
2 θ̂ ← argmin_θ E_{a_t, s_{t−k:t+k+1} ∼ T_labelled}[−log ϕ_θ(a_t | s_{t−k:t+k+1})]
  // fill in the proxy actions for the unlabelled data
3 T_proxy ← ∅
4 for each trajectory τ ∈ T_unlabelled do
5   â_t ← mean of N(μ_θ̂(s_{t−k:t+k+1}), Σ_θ̂(s_{t−k:t+k+1})), t = 1, . . . , |τ|
6   τ_proxy ← τ with proxy actions {â_t}_{t=1}^{|τ|} filled in
7   T_proxy ← T_proxy ∪ {τ_proxy}
  // train an offline RL agent using the combined data
8 π ← policy obtained by training ORL using dataset T_labelled ∪ T_proxy
9 Output: π

Further, many state-only imitation learning algorithms (e.g., Gupta et al.
(2017); Torabi et al. (2018a;b); Liu et al. (2018); Sermanet et al. (2018)), similar to their original counterparts (e.g., Ho & Ermon (2016); Kim et al. (2020)), permit (reward-free) interactions with the environments. This is not possible in our proposed offline semi-supervised setup, where the agents are only provided with T_labelled and T_unlabelled. 3.2 TRAINING PIPELINE RL policies trained on low to moderate quality offline trajectories are often sub-optimal, as many of the trajectories might not have high return and only cover a limited part of the state space. Our goal is to find a way to combine the action-labelled trajectories and the unlabelled action-free trajectories, so that the offline agent can exploit structures in the unlabelled data to improve performance. One natural strategy is to fill in proxy actions for those unlabelled trajectories, and use the annotated data together with the labelled data as a whole to train an offline RL agent. Since we assume both the labelled and unlabelled trajectories contain the states, we can train an inverse dynamics model (IDM) ϕ that predicts actions using the states. Once we obtain the IDM, we use it to generate the proxy actions for the unlabelled trajectories. Finally, we combine those proxy-labelled trajectories with the labelled trajectories, and train an agent using the offline RL algorithm of choice. In particular, we propose a stochastic multi-transition IDM (see Section 3.3), which is favored by our experiments. Our meta-algorithmic pipeline is summarized in Algorithm 1. Remarks. The annotation process, which involves training an IDM on the labelled data and generating proxy actions for the unlabelled trajectories, is similar to one step of self-training (Fralick, 1967). A key difference is that in self-training, the predictor is trained in multiple rounds. Once an initial predictor is trained, it is used for obtaining annotations on the unlabelled dataset. Then, a subset of annotated data is selected according to certain criteria, and added into the training set for the next round. As opposed to self-training, we do not retrain the IDM but directly move to the next stage, where we train the agent using the combined data. There are a few reasons that we do not employ self-training for the IDM. First, it is computationally expensive to execute multiple rounds of training. More importantly, our end goal is to obtain a downstream policy with improved performance via utilizing the proxy-labelled data. One commonly used data selection criterion for self-training is based on the model uncertainty. There, one adds the proxy-labelled data with sufficiently low predictive uncertainty into the training set for the next round. However, we empirically found that such an uncertainty based augmentation strategy did not improve the performance of SS-ORL agents. See Section 4.3 and Appendix F for the experiment details. 3.3 STOCHASTIC MULTI-TRANSITION INVERSE DYNAMIC MODEL In past work (Pathak et al., 2017), the IDM typically maps two subsequent states (s_t, s_{t+1}) to a_t. We introduce a multi-transition IDM that predicts a_t using both transitions before and after timestep t, which we found works better empirically. More precisely, our inverse dynamics model predicts a_t using 2k + 1 transitions, including the current transition (s_t, s_{t+1}), the previous k transitions that lead to s_t, and the next k transitions starting from s_{t+1}. We call k the transition size parameter. Let s_{t−k:t+k+1} denote the sequence s_{max(1, t−k)}, . . . , s_t, s_{t+1}, . . .
, s_{min(|τ|, t+k+1)}. Specifically, we model the distribution of a_t as a multivariate Gaussian distribution with a diagonal covariance matrix: a_t ∼ N(μ_θ(s_{t−k:t+k+1}), Σ_θ(s_{t−k:t+k+1})). (1) Let ϕ_θ(a_t | s_{t−k:t+k+1}) be the probability density function of N(μ_θ(s_{t−k:t+k+1}), Σ_θ(s_{t−k:t+k+1})). Given the labelled trajectories T_labelled, we minimize the negative log-likelihood loss E_{a_t, s_{t−k:t+k+1} ∼ T_labelled}[−log ϕ_θ(a_t | s_{t−k:t+k+1})]. Note that the standard IDM, which predicts a_t from (s_t, s_{t+1}) under the ℓ2 loss, is a special case subsumed by our model: it is equivalent to the case where k = 0 and the diagonal entries of Σ_θ (i.e., the variances of each action dimension) are all the same. Choosing k > 0 allows us to account for non-Markovian behaviour policies and partially observable MDPs (POMDPs), see Appendix E.1. For all the experiments in this paper, we use k = 1. We ablate this design choice in Section 4.3. 4 EXPERIMENTS Our experiments aim to answer three primary questions: 1. Can SS-ORL closely track or even match the performance of fully supervised offline reinforcement learning, when only a small subset of trajectories are labelled? 2. How does the performance of SS-ORL vary as a function of the size and quality of the labelled and unlabelled datasets? 3. Do different offline RL methods respond differently under varying setups of data size and quality? To answer these questions, we focus on three Gym locomotion tasks, hopper, walker, and halfcheetah, and we use the v2 medium-expert, medium and medium-replay datasets from the D4RL benchmark (Fu et al., 2020). We address the first question in Section 4.1 and the other two in Section 4.2, respectively. Finally, we discuss the design choices for SS-ORL in Section 4.3. 4.1 BENCHMARKING Data Setup For a given offline dataset, we subsample 10% of the total trajectories from the dataset, whose returns are from the bottom q%, 10 ≤ q ≤ 100. We keep the actions for those trajectories, and discard the actions for the rest. We call this setup the coupled setup, since P_labelled and P_unlabelled change simultaneously when we vary the value of q. When q = 100, we are uniformly sampling the trajectories and we have P_labelled = P_unlabelled. Under this setup, we always have 10% of the trajectories labelled and 90% unlabelled, and the total amount of data used later for training the offline RL agent is the original offline dataset size. This allows us to easily compare our results with results under the standard, fully labelled setup. In Section 4.2, we shall decouple the distributions P_labelled and P_unlabelled for a thorough understanding of their individual influences. Inverse Dynamic Model We train an IDM as described in Section 3 with parameter k = 1. In other words, the IDM predicts a_t using 3 consecutive transitions: (s_{t−1}, s_t, s_{t+1}, s_{t+2}). The mean and the covariance matrix are predicted by two independent multilayer perceptrons (MLPs), each containing two hidden layers with 1024 hidden units per layer. To prevent overfitting, we randomly sample 10% of the labelled trajectories as the validation set, and use the IDM that yields the best validation error within 100k training iterations. Offline RL Methods We instantiate Algorithm 1 with DT, CQL and TD3BC (Fujimoto & Gu, 2021) and test their performances.
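A minimal PyTorch sketch of the stochastic multi-transition IDM of Equation (1). The ReLU activations and the log-variance parameterization are our assumptions, not details taken from the paper; the two separate 2x1024 MLPs for the mean and the diagonal covariance follow the description above:

```python
# Sketch: stochastic multi-transition IDM trained with the negative log-likelihood.
import torch
import torch.nn as nn

class MultiTransitionIDM(nn.Module):
    def __init__(self, state_dim, action_dim, k=1, hidden=1024):
        super().__init__()
        in_dim = (2 * k + 2) * state_dim          # window s_{t-k}, ..., s_{t+k+1}
        def mlp():
            return nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim),
            )
        self.mean_net, self.logvar_net = mlp(), mlp()

    def forward(self, state_window):               # (batch, (2k+2) * state_dim)
        mean = self.mean_net(state_window)
        std = self.logvar_net(state_window).mul(0.5).exp()   # diagonal covariance
        return torch.distributions.Normal(mean, std)

def idm_loss(model, state_window, action):
    dist = model(state_window)
    return -dist.log_prob(action).sum(-1).mean()   # negative log-likelihood
```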
Among these methods, DT is a recently proposed conditional behavior cloning (BC) method that uses sequence modeling tools to model the trajectories; CQL is a representative value-based offline RL method; and TD3BC is a hybrid method which adds a BC term to regularize the Q-learning updates. We refer to those instantiations as SS-DT, SS-CQL and SS-TD3BC, respectively. We defer the implementation details to Appendix A. Results We compare the performance of those SS-ORL agents with corresponding baseline and oracle agents. The baseline agents are trained on the labelled trajectories only, and the oracle agents are trained on the full offline dataset with action labels. Intuitively, the performances of the baseline and the oracle agents can be considered as the (estimated) lower and upper bounds for the performance of the SS-ORL agents. For each method, we train 5 instances under different seeds, and 1Due to the space limit, the results on medium and medium-replay datasets are deferred to Appendix C. for each instance we run 30 evaluation trajectories. We report the average return and the standard deviation after 200k iterations. Figure 4.1 plots the results on medium-expert datasets. For all the three environments and all the three offline RL methods, the SS-ORL agents improve upon the baselines. Remarkably, even when the labelled data quality is low, the SS-ORL agents are able to obtain decent returns. For example, when q “ 10, i.e., the labelled trajectories are the bottom 10% trajectories, the average return obtained by SS-TD3BC is 0.93, 0.91 and 0.79 for hopper, walker and halfcheetah. On average, this is 87.4% relative to the oracle performance (1.02, 1.1 and 0.89). As the value q increases, the labelled data quality increases and the distributions Plabelled and Punlabelled are getting closer. The performance of the SS-ORL agents also keeps increasing and finally matches the performance of the oracle agents. Similar observations can be found in the results of medium and medium-replay datasets, see Figure C.1 and C.2. We found relatively suboptimal results for DT on halfcheetah in all cases, consistent with prior results in Zheng et al. (2022). 4.2 ABLATION STUDY We conduct experiments to understand the semi-supervised approach from the perspective of both datasets and learning algorithms. For a systematic study, we depart from the coupled setup in Section 4.1 and consider a decoupling of the labelled data distributions Plabelled and the unlabelled data distribution Punlabelled. We first vary the quality of the labelled and unlabelled trajectories, and examine how the final performance of those SS-ORL agents changes. Next, we vary the size of the labelled and unlabelled trajectories and investigate their influences. To understand how the value-based methods and the BC methods will potentially react differently under these data setups. we report the results of SS-CQL and SS-DT for the aforementioned setups. Last, we ablate the design choice of the transition size k for the proposed IDM. For all the experiments we present, the results are aggregated over 5 instances with different seeds. 
Quality of Unlabelled Data We divide the trajectories from the hopper-medium-expert dataset into 3 groups, which consist of trajectories whose returns are the bottom 0% to 33%, 33% to 67%, and 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | labelled~High baseline (labeled data only) unlabelled~Low oracle unlabelled~Med oracle unlabelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | labelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | labelled~Low 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | labelled~High baseline (labeled data only) unlabelled~Low oracle unlabelled~Med oracle unlabelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | labelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | labelled~Low (a) We fix the labelled data quality and vary the unlabelled data quality. When the labelled data quality is low or moderate, SS-ORL can significantly improve the performance upon the baselines by utilizing high quality unlabelled data. 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | unlabelled~High labelled~Low oracle labelled~Med oracle labelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | unlabelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-DT | unlabelled~Low 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | unlabelled~High labelled~Low oracle labelled~Med oracle labelled~High oracle 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | unlabelled~Med 5×104 105 1.5×105 2×105 iteration 0.4 0.6 0.8 1.0 no rm al ize d re tu rn SS-CQL | unlabelled~Low (b) We fix the unlabelled data quality and vary the labelled data quality. The performance of SS-ORL improves as the labelled data quality increases. Figure 4.2: The return (average and standard deviation) of SS-DT and SS-CQL agents trained on the hopper-medium-expert dataset, when the qualities of the labelled and unlabelled data vary. Both the sizes of the labelled and unlabelled data are 10% of the offline dataset size. 67% to 100%, respectively. We refer to them Low, Medium, and High quality groups. In particular, the High group contains trajectories generated by the expert agents (Fu et al., 2020). As before, we report the performance of DT and CQL agents trained on the labelled data only as the baselines. We also report the results under the oracle mode, where we fill in the unlabelled trajectories with the true actions, and combine them with the labelled trajectories to train offline RL agents. We first report the performance of SS-DT when the labelled data is sampled from the High group, and the unlabelled data are sampled from Low, Med, and High groups, respectively. Both the size of the labelled and unlabelled trajectories are 10% of the total offline dataset size. The top left panel of Figure 4.2a plots the results. Clearly, when the labelled data quality is high, training on the labelled data only is sufficient to achieve the expert performance, and adding unlabelled data does not bring extra benefits. We repeat the same experiment when the labelled data is sampled from Medium and Low, see the top middle and top right panels of Figure 4.2a. 
For those cases, adding unlabelled data with the higher or the same quality2 improves the performance, whereas the lower quality unlabelled 2When the labelled and unlabelled data are sampled from the same quality group, we are simply adding more data from the same distribution. data is not significantly helpful. The performance of SS-CQL follows the same trends, see the bottom panels of Figure 4.2a. To summarize, the experiments provide strong evidence that when the labelled data is of low or moderate quality, SS-ORL is capable to exploit the high quality unlabelled data and remarkably boosts the performance compared with the baselines. The resulting performance is close to that of the oracle agent, and is often optimal (at least 1) or near-optimal (close to 1). Quality of Labelled Data Similarly, we fixed the unlabelled data quality and vary the quality of the labelled data. Figure 4.2b shows the results. For both SS-DT and SS-CQL, increasing the labelled data quality raises the performance for all the cases. Size of Labelled Data We train SS-ORL agents where we fix the number of unlabelled trajectories to be 10% of the total number of offline trajectories, and vary the number of labelled trajectories as 10%, 25%, and 50% of the total size. Similar to the above experiments, we consider four data quality setups, where the labelled and unlabelled trajectories are sampled from the bottom half (denoted by L) and top half (denoted by T) trajectories, respectively. We consider both hopper-medium-expert and walker-medium-expert datasets. To take account of different environments and data setups, we report the 95% stratified bootstrap confidence intervals (CIs) of the interquartile mean3 of the return for all these cases and training instances (Agarwal et al., 2021). We use 50000 bootstrap replications to generate the CIs. Compared with some other statistics like the mean or the median, the IQM is robust to outliers and also a good representative of the overall performance. The stratified bootstrapping is a handy tool to obtain CIs with descent coverage rate, even if one only have a small number of training instances per setup. We refer the readers to Agarwal et al. (2021) for the complete introduction. Figure 4.3a plots the confidence intervals when we consider all four quality setups, or when the labelled data quality is low or high, respectively. We found that SS-DT and SS-CQL respond slightly differently. Overall, SS-CQL is almost immune to changes in the size of the labelled data, as is SS-DT when the labelled data quality is high. However, SS-DT’s performance moderately increases as the labelled size grows when the labelled data quality is low. More detailed results, including the plots of the evaluation curves, CIs of the mean and the median, can be found in Appendix D. Size of Unlabelled Data As before, we vary the number of labelled data size with the unlabelled data size fixed, and report the 95% stratified bootstrap CIs in Figure 4.3b. Similarly, SS-CQL is almost insensitive whereas SS-DT is sensitive when the labelled quality is low. 3The interquartile mean of a list of sorted numbers is the mean of the middle 50% numbers. Value-based vs. Conditional BC As discussed above, SS-CQL is insensitive to the data size changes, whereas SS-DT is more responsive when the labelled data quality is low. Regarding the data quality, we are mostly interested in the scenarios where the labelled data quality is low or moderate, see the red and blue curves of Figure 4.2b. 
In that regime, if the unlabelled data quality is high (the left column), the distribution shift from the labelled data to the unlabelled data is challenging to handle, and the proxy-actions predicted by the IDM will be less accurate. There, the absolute performance of SS-CQL is slightly better than that of SS-DT, with smaller performance gaps compared to the oracle agents. If the unlabelled data quality is moderate or low (the middle and right columns), SS-CQL clearly outperforms SS-DT. Both observations suggest that SS-CQL is less sensitive to the action quality.

4.3 DESIGN CHOICES

Transition Size k of the IDM We train SS-TD3BC, SS-CQL and SS-DT agents on the hopper-medium-expert dataset, under the coupled setup as in Section 4.1. We consider 6 different values of q: 10, 30, 50, 70, 90 and 100. For all the 18 setups (3 SS-ORL agents and 6 different q values), we train the agents using the multi-transition IDM with k = 0, 1, and 2, respectively. As in the previous section, Figure E.1 plots the 95% stratified bootstrapped CIs for the IQM return across all the setups and training instances, which are generated by 50000 bootstrap replications. The results favor the choice k = 1. See Appendix E for more experiment details, such as the average return for each setup and the CIs for the return mean.

Data Augmentation Strategy As discussed in Section 3.2, we consider variants of SS-TD3BC and SS-DT using uncertainty based data augmentation. Following Lakshminarayanan et al. (2017), we train an ensemble of 3 independent IDMs on T_labelled. We generate the proxy actions for the unlabelled trajectories using the combined model, and also estimate the predictive uncertainties. We then only add proxy-labelled data whose uncertainties are below the p-th percentile to the final RL training dataset. Specifically, we test 4 values of p: 25, 50, 75 and 95. We compare the results with standard SS-TD3BC and SS-DT, where all the proxy-labelled data are added into the final RL training dataset. Again, we consider both the hopper-medium-expert and walker-medium-expert datasets and use the coupled setup with 4 different q values: 10, 30, 68, 75 for hopper-medium-expert and 10, 30, 54, 60 for walker-medium-expert. Figure 4.5 plots the 95% stratified bootstrap CIs of the IQM return across all the setups. Adding all the proxy-labelled data without filtering outperforms uncertainty based data augmentation; see Appendix F for more details. Intuitively, to make use of the unlabelled data, most SSL pipelines would assume P_labelled and P_unlabelled are similar or even the same (Chapelle et al., 2006). This is not the case in our setup, where P_labelled only generates low return trajectories, and all the high return ones come from P_unlabelled. It remains an open question whether self-training with the uncertainty based selection rule can help us generalize to high return trajectories.

5 DISCUSSION

We proposed a novel setup for offline RL where the trajectories do not have all of the action information, for which we have introduced a semi-supervised meta-algorithmic pipeline. Our experiments identified key properties that enable the agents to learn from unlabelled data and show that near-optimal learning can be done with only 10% of the actions labelled for low-to-moderate quality trajectories. It would be interesting to study other heterogeneous data setups for offline RL in the future, including reward-free or pure state-only settings.
This work is a step towards a broader goal of empowering robotic systems with the ability to extract meaningful knowledge from copious and ever-growing amounts of unlabelled demonstration data. Beyond simply not having the action labels, many trajectories may be from different robotic systems or tasks and therefore are not directly transferable to the system and task at hand. As we continue building robotic systems to leverage these forms of auxiliary knowledge, we expect that weakly-supervised learning paradigms such as the one explored in this work will be useful.

A EXPERIMENT DETAILS

In this section, we provide more details about our experiments. For all the offline RL methods we consider, we use our own implementations adapted from the following codebases:
DT https://github.com/facebookresearch/online-dt
TD3BC https://github.com/sfujim/TD3_BC
CQL https://github.com/scottemmons/youngs-cql
We use the stochastic DT proposed by Zheng et al. (2022). For offline RL, its performance is similar to the deterministic DT (Chen et al., 2021). The policy parameter is optimized by the LAMB optimizer (You et al., 2019) with ε = 10^-8. The log-temperature parameter is optimized by the Adam optimizer (Kingma & Ba, 2014). The architecture and other hyperparameters are listed in Table A.1. For TD3BC, we optimize both the critic and actor parameters by the Adam optimizer. The complete hyperparameters are listed in Table A.2. For CQL, we also use the Adam optimizer to optimize the critic, actor and log-temperature parameters. The architecture of the critic and actor networks and the other hyperparameters are listed in Table A.3. We use batch size 256 and context length 20 for DT, where each batch contains 5120 states. Correspondingly, we use batch size 5120 for CQL and TD3BC.

B THE RETURN DISTRIBUTIONS OF THE D4RL DATASETS

C ADDITIONAL EXPERIMENTS UNDER THE COUPLED SETUP

We conduct experiments on the medium and medium-replay datasets of the D4RL benchmark, using the same setup as in Section 4.1. Figures C.1 and C.2 report the results. The general trend is the same as that in Figure 4.1. We note that the results on the halfcheetah-medium dataset are less informative than the others. This is because the return distribution of halfcheetah-medium is very concentrated, similar to a Gaussian distribution with small variance; see Figure B.1. In such a case, varying the value of q does not drastically change the labelled data distribution. One may notice that for the hopper-medium-replay and walker-medium-replay datasets, SS-ORL does not catch up with the oracle as quickly as on the other datasets as q increases. Our intuition is that the return distributions of these two datasets concentrate on extremely low values, as shown in Figure B.1. In our experiments, the labelled trajectories for those two datasets have average return smaller than 0.1 even when q = 70. In contrast, the return distributions of the other datasets concentrate on larger values. For the halfcheetah-medium-replay and all the medium and medium-expert datasets, increasing the value of q will greatly change the returns of the labelled trajectories; see Table C.1. To demonstrate the performance of SS-ORL on a dataset with a wider return distribution, we consider a subsampled dataset for the walker environment, generated as follows.
1. Combine the walker-medium-replay and walker-medium datasets.
2. Let Rmin and Rmax denote the minimum and maximum return in the dataset.
We divide the trajectories into 40 bins, where the maximum returns within each bin are linearly spaced between Rmin and Rmax. Let ni be the number of trajectories in bin i.
3. We randomly sample 1000 trajectories. To sample a trajectory, we first sample a bin i ∈ {1, . . . , 40} with weights proportional to 1/ni, then sample a trajectory uniformly at random from the sampled bin.
Figure C.3 plots the return distribution of the subsampled dataset. It is wide and has 3 modes. We run the same experiments as before on this subsampled dataset, and Figure C.4 plots the results. We can see that the SS-ORL methods can catch up with the oracle agents even when q is small.

D INFLUENCES OF THE LABELLED AND UNLABELLED DATA SIZE

Figure D.1 plots the average return of SS-DT and SS-CQL when we fix the number of unlabelled trajectories and vary the number of labelled trajectories. We found that there is a bad seed for SS-CQL when both labelled and unlabelled trajectories are sampled from L and the size of labelled data is 10%, so the result there (the bottom right panel) exhibits large variance. Correspondingly, Figure D.2 plots the 95% stratified bootstrap CIs for the median, mean, interquartile mean, and the optimality gap of the return of SS-DT and SS-CQL. Similarly, Figures D.3 and D.4 plot the results when we vary the number of unlabelled trajectories, while the number of labelled ones is fixed.

E TRANSITION SIZE k FOR THE MULTI-TRANSITION INVERSE DYNAMICS MODEL

E.1 THEORY

Let β denote the behaviour policy. When k = 0, the IDM is modeling
P(a_t | s_{t+1}, s_t) = P(a_t, s_{t+1} | s_t) / P(s_{t+1} | s_t) = P(s_{t+1} | a_t, s_t) β(a_t | s_t) / P(s_{t+1} | s_t). (2)
For the cases where k > 0, w.l.o.g., we assume k = 1. The IDM is modeling
P(a_t | s_{t+2}, s_{t+1}, s_t, s_{t-1}) = P(a_t, s_{t+2}, . . . , s_{t-1}) / P(s_{t+2}, . . . , s_{t-1})
= P(s_{t+1} | a_t, s_t, s_{t+2}, s_{t-1}) P(a_t | s_{t+2}, s_t, s_{t-1}) / P(s_{t+1} | s_{t+2}, s_t, s_{t-1})
= P(s_{t+1} | a_t, s_t) β(a_t | s_t, s_{t-1}) / P(s_{t+1} | s_t, s_{t-1}), (3)
where in the last line we used the fact that the policy β can only generate actions based on previous states, the Markovian transition property P(s_{t+1} | a_t, s_t, s_{t+2}, s_{t-1}) = P(s_{t+1} | a_t, s_t), and also the induced property P(s_{t+1} | s_t, s_{t-1}) = P(s_{t+1} | s_{t+2}, s_t, s_{t-1}). If the behaviour policy β is Markovian, we have that β(a_t | s_t) = β(a_t | s_t, s_{t-1}), and as a consequence
P(a_t | s_{t+1}, s_t) = P(a_t | s_{t+2}, s_{t+1}, s_t, s_{t-1}) · C(s_{t+1}, s_t, s_{t-1}), (4)
where C = P(s_{t+1} | s_t, s_{t-1}) / P(s_{t+1} | s_t) is independent of the action a_t. Therefore, the probabilities that the IDMs with k = 0 and k = 1 are modeling are equivalent up to a state-only dependent scaling. The cases where k ≥ 2 can be derived analogously. In practice, the offline dataset might contain trajectories generated by multiple behaviour policies, and it is unknown if any of them is Markovian. Therefore, choosing k > 0 allows us to take into account past information before timestep t. For the future, we do not need anything beyond s_{t+1} for an MDP, but our formulation is general enough to account for POMDPs as well, where both past and future partial observations might be needed to infer the action a_t. To summarize, choosing k > 0 is more general and has been shown to be favorable in the empirical experiments presented in the next section.

E.2 EMPIRICAL EXPERIMENTS

We train SS-TD3BC, SS-CQL and SS-DT with 3 IDM transition sizes, k = 0, 1 and 2, on the hopper-medium-expert dataset. We use the coupled setup described in Section 4.1, with 6 different values of q. Table E.1 reports the performance of those agents for each case.
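To make this concrete, the following is a minimal sketch of a multi-transition IDM of the kind analyzed in this appendix: it consumes the 2k + 2 states surrounding timestep t and outputs a diagonal Gaussian over a_t. The class name, hidden sizes, log-std clamping, and training loss are our own illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a multi-transition inverse dynamics model (IDM).
# Assumption: flat state/action vectors; a window of 2k + 2 states around timestep t.
import torch
import torch.nn as nn

class MultiTransitionIDM(nn.Module):
    def __init__(self, state_dim: int, action_dim: int, k: int = 1, hidden: int = 256):
        super().__init__()
        in_dim = (2 * k + 2) * state_dim  # states s_{t-k}, ..., s_{t+1+k}
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mu = nn.Linear(hidden, action_dim)        # mean of the diagonal Gaussian
        self.log_std = nn.Linear(hidden, action_dim)   # log std of the diagonal Gaussian

    def forward(self, state_window: torch.Tensor) -> torch.distributions.Normal:
        # state_window: (batch, 2k + 2, state_dim)
        h = self.net(state_window.flatten(start_dim=1))
        return torch.distributions.Normal(self.mu(h), self.log_std(h).clamp(-5, 2).exp())

def idm_loss(idm: MultiTransitionIDM, state_window: torch.Tensor, action: torch.Tensor) -> torch.Tensor:
    # Maximum-likelihood training on the labelled trajectories.
    return -idm(state_window).log_prob(action).sum(dim=-1).mean()

# Usage sketch: predict proxy actions for unlabelled transitions by the Gaussian mean.
# proxy_action = idm(state_window).mean
```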
In addition to the interquartile mean considered in Section 4.3, we also consider 3 other statistics of the return across all the setups: the mean, the median and the optimality gap. Figure E.1 plots the 95% stratified bootstrap confidence intervals for all four statistics, generated by 50000 bootstrap replications.

                 q = 10        q = 30        q = 50        q = 70        q = 90        q = 100       Average
SS-TD3BC  k = 0  0.81 ± 0.12   0.89 ± 0.05   0.93 ± 0.05   1.05 ± 0.04   1.03 ± 0.06   1.01 ± 0.04   0.95
          k = 1  0.93 ± 0.07   1.01 ± 0.05   0.86 ± 0.06   0.98 ± 0.06   1.03 ± 0.06   1.03 ± 0.04   0.98
          k = 2  0.80 ± 0.12   0.91 ± 0.03   0.93 ± 0.05   0.95 ± 0.08   1.01 ± 0.06   1.04 ± 0.02   0.94
SS-CQL    k = 0  0.69 ± 0.17   0.69 ± 0.15   0.88 ± 0.15   1.04 ± 0.04   1.11 ± 0.01   1.10 ± 0.03   0.92
          k = 1  0.69 ± 0.15   0.90 ± 0.05   0.89 ± 0.13   1.03 ± 0.07   1.07 ± 0.08   1.11 ± 0.01   0.95
          k = 2  0.90 ± 0.11   0.90 ± 0.09   0.86 ± 0.11   1.08 ± 0.05   1.10 ± 0.01   1.11 ± 0.01   0.99
SS-DT     k = 0  0.72 ± 0.17   0.75 ± 0.20   0.90 ± 0.14   1.06 ± 0.04   1.11 ± 0.00   1.11 ± 0.01   0.94
          k = 1  0.69 ± 0.20   0.94 ± 0.07   0.99 ± 0.05   1.05 ± 0.04   1.11 ± 0.00   1.11 ± 0.00   0.98
          k = 2  0.78 ± 0.07   0.89 ± 0.08   0.85 ± 0.15   1.05 ± 0.02   1.11 ± 0.00   1.11 ± 0.00   0.97

Table E.1: The return (average and standard deviation) of SS-ORL agents trained on the hopper-medium-expert dataset under the coupled setup, where the IDM is trained with 3 different values of k: 0, 1 and 2. Results aggregated over 5 training instances.

Figure E.1: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by SS-ORL agents, with different values of k.

F DATA AUGMENTATION STRATEGY

Following Lakshminarayanan et al. (2017), we train an ensemble of 3 independent IDMs on T_labelled. Each individual IDM models the action as a diagonal Gaussian distribution (see Equation (1)) N(µ_i, Σ_i), i = 1, 2, 3. The ensemble models the action using an equally weighted Gaussian mixture of these three distributions. We predict the action by the mixture's mean and predict the uncertainty by the mixture's variance; both can be written in closed form. We conduct experiments for SS-DT and SS-TD3BC, where we only add proxy-labelled data whose uncertainties are below the p-th percentile to the final RL training dataset. Specifically, we test 4 values of p: 25, 50, 75 and 95. We compare the results with standard SS-DT and SS-TD3BC, where all the proxy-labelled data are added into the final RL training dataset. We consider both the hopper-medium-expert and walker-medium-expert datasets. We use the coupled setup described in Section 4.1, where we consider 4 different values of q: 10, 30, 68, 75 for hopper-medium-expert and 10, 30, 54, 60 for walker-medium-expert. Table F.1 reports the average return and standard deviation obtained by SS-DT and SS-TD3BC under different data augmentation strategies, when trained on the hopper-medium-expert dataset. The results on the walker-medium-expert dataset are reported in Table F.2. It is easy to see that uncertainty based data augmentation degrades the performance, compared with adding all the proxy-labelled data without filtering. Overall, the latter performs consistently well across different setups. Figure F.1 plots the 95% stratified bootstrap CIs for these experiments. All the statistics favor the no-filtering strategy.
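To make the ensemble-based filtering rule concrete, the sketch below computes the mean and variance of an equally weighted mixture of three diagonal Gaussians in closed form and keeps only the proxy-labelled samples whose uncertainty falls below the p-th percentile. Summing the per-dimension variances into a single scalar uncertainty is our own simplifying assumption.

```python
# Sketch: uncertainty of an equally weighted mixture of diagonal Gaussians,
# followed by percentile-based filtering of proxy-labelled samples.
import numpy as np

def mixture_mean_var(mus: np.ndarray, sigmas: np.ndarray):
    """mus, sigmas: (n_models, n_samples, action_dim) per-model means and stds."""
    mean = mus.mean(axis=0)
    # Law of total variance for an equally weighted mixture.
    var = (sigmas ** 2 + mus ** 2).mean(axis=0) - mean ** 2
    return mean, var

def filter_by_uncertainty(mus: np.ndarray, sigmas: np.ndarray, p: float = 75.0):
    mean, var = mixture_mean_var(mus, sigmas)
    uncertainty = var.sum(axis=-1)                 # scalar uncertainty per sample (assumption)
    keep = uncertainty <= np.percentile(uncertainty, p)
    return mean[keep], keep                        # kept proxy actions and their mask
```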
hopper-medium-expert        q = 10        q = 30        q = 68        q = 75        Average
SS-TD3BC  below 25%         0.60 ± 0.03   0.62 ± 0.02   0.71 ± 0.04   0.86 ± 0.02   0.70
          below 50%         0.62 ± 0.02   0.66 ± 0.06   0.76 ± 0.04   0.86 ± 0.09   0.72
          below 75%         0.70 ± 0.06   0.74 ± 0.07   0.84 ± 0.06   0.94 ± 0.08   0.80
          below 95%         0.82 ± 0.05   0.82 ± 0.09   0.90 ± 0.09   0.96 ± 0.06   0.88
          no filtering      0.80 ± 0.07   0.92 ± 0.04   0.91 ± 0.06   0.94 ± 0.10   0.89
SS-DT     below 25%         0.61 ± 0.12   0.62 ± 0.05   0.70 ± 0.01   0.95 ± 0.13   0.72
          below 50%         0.60 ± 0.14   0.65 ± 0.04   0.69 ± 0.02   1.04 ± 0.07   0.75
          below 75%         0.42 ± 0.04   0.63 ± 0.15   0.75 ± 0.06   1.04 ± 0.04   0.71
          below 95%         0.51 ± 0.16   0.82 ± 0.12   0.85 ± 0.05   1.06 ± 0.03   0.81
          no filtering      0.47 ± 0.14   0.71 ± 0.14   0.83 ± 0.07   1.06 ± 0.03   0.77

Table F.1: The return (average and standard deviation) of SS-ORL agents trained on the hopper-medium-expert dataset under the coupled setup, using different data augmentation strategies. Results aggregated over 5 training instances.

walker-medium-expert        q = 10        q = 30        q = 54        q = 60        Average
SS-TD3BC  below 25%         0.82 ± 0.02   0.82 ± 0.01   0.80 ± 0.06   1.04 ± 0.06   0.87
          below 50%         0.83 ± 0.03   0.84 ± 0.02   0.84 ± 0.01   1.02 ± 0.09   0.88
          below 75%         0.74 ± 0.11   0.86 ± 0.01   0.85 ± 0.01   1.04 ± 0.07   0.87
          below 95%         0.86 ± 0.04   0.88 ± 0.01   0.87 ± 0.01   1.10 ± 0.01   0.93
          no filtering      0.86 ± 0.05   0.86 ± 0.03   0.87 ± 0.01   1.10 ± 0.01   0.92
SS-DT     below 25%         0.69 ± 0.04   0.74 ± 0.02   0.70 ± 0.03   0.84 ± 0.17   0.74
          below 50%         0.67 ± 0.03   0.72 ± 0.02   0.73 ± 0.03   0.95 ± 0.15   0.77
          below 75%         0.71 ± 0.03   0.60 ± 0.13   0.73 ± 0.03   0.95 ± 0.14   0.74
          below 95%         0.73 ± 0.08   0.52 ± 0.11   0.58 ± 0.15   0.98 ± 0.10   0.70
          no filtering      0.79 ± 0.05   0.55 ± 0.13   0.69 ± 0.08   0.91 ± 0.15   0.74

Table F.2: The return (average and standard deviation) of SS-ORL agents trained on the walker-medium-expert dataset under the coupled setup, using different data augmentation strategies. Results aggregated over 5 training instances.

Figure F.1: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by SS-ORL agents, when combined with different data augmentation strategies.

G COMPARISON WITH GATO UNDER THE COUPLED SETUP

Inspired by the multi-task and multi-modal generalist agent proposed by Reed et al. (2022), we consider a GATO-type variant of DT that can incorporate the unlabelled data into policy training. GATO is trained on the labelled and unlabelled data simultaneously. The implementation details are:
• We form the same input sequence as DT, where we fill in zeros for the missing actions of the unlabelled trajectories.
• For the labelled trajectories, GATO predicts the actions, states and rewards; for the unlabelled ones, GATO only predicts the states and rewards.
• We use the stochastic policy as in online decision transformer (Zheng et al., 2022) to predict the actions.
• We use deterministic predictors for the states and rewards, which are single linear layers built on top of the Transformer outputs.
Let g_t = Σ_{t′=t}^{|τ|} r_{t′} be the return-to-go of a trajectory τ at timestep t. Let H^{P_labelled}_θ denote the policy entropy induced on the labelled data distribution. For simplicity, we assume the context length of GATO is 1. We refer the readers to Zheng et al. (2022) for the formulation with a general context length and more details. Equation (5) shows the training objective of GATO.
min_θ  E_{(a_t, s_t, r_t, g_t) ∼ P_labelled} [ −log π(a_t | s_t, g_t, θ) + λ_s ‖s_t − ŝ_t(θ)‖₂² + λ_r ‖r_t − r̂_t(θ)‖₂² ]
       + E_{(s_t, r_t, g_t) ∼ P_unlabelled} [ λ_s ‖s_t − ŝ_t(θ)‖₂² + λ_r ‖r_t − r̂_t(θ)‖₂² ]
s.t.  H^{P_labelled}_θ [a | s, g] ≥ ν.   (5)

Figure G.1: The performance of SS-ORL and GATO on the hopper-medium-expert dataset. For GATO, we use λ_s = 0.01 and λ_r = 1.0. (L) SS-DT significantly outperforms GATO, where GATO only slightly improves upon the baseline. (R) SS-CQL, SS-DT and SS-TD3BC all outperform GATO.

Figure G.2: The 95% stratified bootstrap CIs of four statistics (the median, mean, interquartile mean, and the optimality gap) of the returns obtained by GATO agents, with different combinations of regularization parameters.

The constants ν, λ_s and λ_r are pre-specified hyper-parameters, where ν is the target policy entropy, and λ_s and λ_r are regularization parameters used to balance the losses for actions, states, and rewards. We use ν = −dim(A) as for DT (see Appendix A). To choose the regularization parameters λ_s and λ_r for GATO, we test 16 combinations where λ_s and λ_r each take the values 1.0, 0.1, 0.01 and 0.001. We run experiments as in Section 4.1 for q = 10, 30, 50, 70, 90, 100, and compute the confidence intervals for the aggregated results. Figure G.2 shows that λ_s = 0.01 and λ_r = 0.1 yield the best performance. Figure G.1 compares the performance of GATO (with λ_s = 0.01 and λ_r = 0.1) and the SS-ORL agents. It is clear that the SS-ORL agents outperform GATO.

H PERFORMANCE GAP OF SS-ORL AGENTS

For a chosen offline RL method, the relative performance gap between the corresponding SS-ORL and oracle agents illustrates how sensitive this offline RL method is to missing actions:
(Oracle-ORL − SS-ORL) / Oracle-ORL. (6)
We consider the coupled setup as in Section 4.1. For each of the 9 datasets (hopper, walker, halfcheetah with the medium-expert, medium, and medium-replay datasets), we compute the relative performance gap for SS-CQL, SS-DT and SS-TD3BC, trained with 6 different values of q: 10, 30, 50, 70, 90 and 100. Table H.1 reports the aggregate results over 5 seeds. On average, SS-CQL and SS-TD3BC have smaller relative performance gaps, suggesting that CQL and TD3BC are less sensitive to the missing actions.

method      hopper-me  walker2d-me  hc-me    hopper-m  walker2d-m  hc-m    hopper-mr  walker2d-mr  hc-mr   Average
SS-CQL      0.147      0.114        0.062    0.078     0.077       0.003   0.388      0.379        0.106   0.150
SS-TD3BC    0.046      0.094        0.104    0         0.065       0.001   0.327      0.412        0.057   0.123
SS-DT       0.119      0.167        0.0002   0.016     0.039       0.003   0.399      0.554        0.109   0.156

Table H.1: The relative performance gap of SS-CQL, SS-TD3BC, and SS-DT.
1. What is the focus of the paper regarding offline reinforcement learning? 2. What are the strengths and weaknesses of the proposed semi-supervised method? 3. Do you have any concerns or questions regarding the experimental results and their generalizability? 4. Are there any minor issues or suggestions you have for improving the paper's clarity or grammar?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper provides an empirical study of a setting where a portion of the offline RL dataset doesn't include actions. To exploit action-missing data, this work proposes to learn an inverse dynamics model on data with actions to generate proxy actions from state transitions. A set of empirical studies on d4rl gym-locomotion control is provided to give insights into how such semi-supervised learning helps the performance of the final policy. The ablation studies show the proposed semi-supervised method is particularly helpful when the action-missing data is of high quality and the labelled data is of lower quality.

Strengths And Weaknesses
Strengths: The setting is well-motivated and has decent potential for real-world applications. The empirical studies are thorough and, in particular, the ablation over the quality of data is well done.
Weaknesses: The proposed method is quite standard in online settings. The claimed novelty (multiple transitions as input to the inverse model) is more of a technical detail, and the reason why it is helpful in the Markovian setting is not explained properly. Most of the experiment results in the paper are based on the medium-expert datasets of d4rl gym-locomotion. This raises questions about whether the conclusions can be generalised to settings with more diverse data. For medium-expert, the behaviour policy is basically a mixture of two policies (medium level and expert level). If the trajectories are split into multiple groups according to the returns, it's likely that the root cause of the varied returns is the initial states rather than the quality of the policy. One clue is that the experiments on medium-replay data in Figure C.2 show that, for hopper and walker2d, the quality of the unlabelled data plays a much more important role than it does in the medium and medium-expert cases.
Minor issues: Is "label" a proper word to replace action? My impression is that unlabelled data in offline RL usually refers to reward-missing data. The intuition behind that is that humans can label trajectories with rewards easily because a reward is a scalar, but "labelling" actions seems very difficult. There are some obvious grammatical errors in the paper, e.g. "we are mainly interested in the case only a significant majority of the trajectories in the offline trajectories are unlabeled." and "How can we utilize the unlabelled data for improving the performance offline RL algorithms?"

Clarity, Quality, Novelty And Reproducibility
The paper is easy to follow in general and the writing is clear. The quality of the empirical evaluations is fine. The methodological originality is low, but the empirical study on the offline RL setting is novel.
ICLR
Title
Analyzing Inverse Problems with Invertible Neural Networks

Abstract
For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements. Often, the forward process from parameter- to measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement. To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost. Due to invertibility, a model of the corresponding inverse process is learned implicitly. Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space. We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters.

1 Introduction

When analyzing complex physical systems, a common problem is that the system parameters of interest cannot be measured directly. For many of these systems, scientists have developed sophisticated theories on how measurable quantities y arise from the hidden parameters x. We will call such mappings the forward process. However, the inverse process is required to infer the hidden states of a system from measurements. Unfortunately, the inverse is often both intractable and ill-posed, since crucial information is lost in the forward process. To fully assess the diversity of possible inverse solutions for a given measurement, an inverse solver should be able to estimate the complete posterior of the parameters, conditioned on an observation. This makes it possible to quantify uncertainty, reveal multi-modal distributions, and identify degenerate and unrecoverable parameters – all highly relevant for applications in natural science. In this paper, we ask if invertible neural networks (INNs) are a suitable model class for this task. INNs are characterized by three properties: (i) The mapping from inputs to outputs is bijective, i.e. its inverse exists, (ii) both forward and inverse mapping are efficiently computable, and (iii) both mappings have a tractable Jacobian, which allows explicit computation of posterior probabilities. Networks that are invertible by construction offer a unique opportunity: We can train them on the well-understood forward process x → y and get the inverse y → x for free by running them backwards at prediction time. To counteract the inherent information loss of the forward process, we introduce additional latent output variables z, which capture the information about x that is not contained in y. Thus, our INN learns to associate hidden parameter values x with unique pairs [y, z] of measurements and latent variables. Forward training optimizes the mapping [y, z] = f(x) and implicitly determines its inverse x = f−1(y, z) = g(y, z). Additionally, we make sure that the density p(z) of the latent variables is shaped as a Gaussian distribution.
Thus, the INN represents the desired posterior p(x |y) by a deterministic function x = g(y, z) that transforms (“pushes”) the known distribution p(z) to x-space, conditional on y. Compared to standard approaches (see Fig. 1, left), INNs circumvent a fundamental difficulty of learning inverse problems: Defining a sensible supervised loss for direct posterior learning is problematic since it requires prior knowledge about that posterior's behavior, constituting a kind of chicken-and-egg problem. If the loss does not match the possibly complicated (e.g. multimodal) shape of the posterior, learning will converge to incorrect or misleading solutions. Since the forward process is usually much simpler and better understood, forward training diminishes this difficulty. Specifically, we make the following contributions:
• We show that the full posterior of an inverse problem can be estimated with invertible networks, both theoretically in the asymptotic limit of zero loss, and practically on synthetic and real-world data from astrophysics and medicine.
• The architectural restrictions imposed by invertibility do not seem to have detrimental effects on our network's representational power.
• While forward training is sufficient in the asymptotic limit, we find that a combination with unsupervised backward training improves results on finite training sets.
• In our applications, our approach to learning the posterior compares favourably to approximate Bayesian computation (ABC) and conditional VAEs. This enables identifying unrecoverable parameters, parameter correlations and multimodalities.

2 Related work

Modeling the conditional posterior of an inverse process is a classical statistical task that can in principle be solved by Bayesian methods. Unfortunately, exact Bayesian treatment of real-world problems is usually intractable. The most common (but expensive) solution is to resort to sampling, typically by a variant of Markov Chain Monte Carlo (Robert and Casella, 2004; Gamerman and Lopes, 2006). If a model y = s(x) for the forward process is available, approximate Bayesian computation (ABC) is often preferred, which embeds the forward model in a rejection sampling scheme for the posterior p(x|y) (Sunnåker et al., 2013; Lintusaari et al., 2017; Wilkinson, 2013). Variational methods offer a more efficient alternative, approximating the posterior by an optimally chosen member of a tractable distribution family (Blei et al., 2017). Neural networks can be trained to predict accurate sufficient statistics for parametric posteriors (Papamakarios and Murray, 2016; Siddharth et al., 2017), or can be designed to learn a mean-field distribution for the network's weights via dropout variational inference (Gal and Ghahramani, 2015; Kingma et al., 2015). Both ideas can be combined (Kendall and Gal, 2017) to differentiate between data-related and model-related uncertainty. However, the restriction to limited distribution families fails if the true distribution is too complex (e.g. when it requires multiple modes to represent ambiguous or degenerate solutions) and essentially counters the ability of neural networks to act as universal approximators. Conditional GANs (cGANs; Mirza and Osindero, 2014; Isola et al., 2017) overcome this restriction in principle, but often lack satisfactory diversity in practice (Zhu et al., 2017b). For our tasks, conditional variational autoencoders (cVAEs; Sohn et al., 2015) perform better than cGANs, and are also conceptually closer to our approach (see appendix Sec.
2), and hence serve as a baseline in our experiments. Generative modeling via learning of a non-linear transformation between the data distribution and a simple prior distribution (Deco and Brauer, 1995; Hyvärinen and Pajunen, 1999) has the potential to solve these problems. Today, this approach is often formulated as a normalizing flow (Tabak et al., 2010; Tabak and Turner, 2013), which gradually transforms a normal density into the desired data density and relies on bijectivity to ensure the mapping's validity. These ideas were applied to neural networks by Deco and Brauer (1995); Rippel and Adams (2013); Rezende and Mohamed (2015) and refined by Tomczak and Welling (2016); Berg et al. (2018); Trippe and Turner (2018). Today, the most common realizations use auto-regressive flows, where the density is decomposed according to the Bayesian chain rule (Kingma et al., 2016; Huang et al., 2018; Germain et al., 2015; Papamakarios et al., 2017; Oord et al., 2016; Kolesnikov and Lampert, 2017; Salimans et al., 2017; Uria et al., 2016). These networks successfully learned unconditional generative distributions for artificial data and standard image sets (e.g. MNIST, CelebA, LSUN bedrooms), and some encouraging results for conditional modeling exist as well (Oord et al., 2016; Salimans et al., 2017; Papamakarios et al., 2017; Uria et al., 2016). These normalizing flows possess property (i) of an INN, and are usually designed to fulfill requirement (iii) as well. In other words, flow-based networks are invertible in principle, but the actual computation of their inverse is too costly to be practical, i.e. INN property (ii) is not fulfilled. This precludes the possibility of bi-directional or cyclic training, which has been shown to be very beneficial in generative adversarial nets and auto-encoders (Zhu et al., 2017a; Dumoulin et al., 2016; Donahue et al., 2017; Teng et al., 2018). In fact, optimization for cycle consistency forces such models to converge to invertible architectures, making fully invertible networks a natural choice. True INNs can be built using coupling layers, as introduced in the NICE (Dinh et al., 2014) and RealNVP (Dinh et al., 2016) architectures. Despite their simple design and training, these networks were rarely studied: Gomez et al. (2017) used a NICE-like design as a memory-efficient alternative to residual networks, Jacobsen et al. (2018) demonstrated that the lack of information reduction from input to representation does not cause overfitting, and Schirrmeister et al. (2018) trained such a network as an adversarial autoencoder. Danihelka et al. (2017) showed that minimization of an adversarial loss is superior to maximum likelihood training in RealNVPs, whereas the Flow-GAN of Grover et al. (2017) performs even better using bidirectional training, a combination of maximum likelihood and adversarial loss. The Glow architecture by Kingma and Dhariwal (2018) incorporates invertible 1x1 convolutions into RealNVPs to achieve impressive image manipulations. This line of research inspired us to extend RealNVPs for the task of computing posteriors in real-world inverse problems from natural and life sciences.

3 Methods

3.1 Problem specification

We consider a common scenario in natural and life sciences: Researchers are interested in a set of variables x ∈ RD describing some phenomenon of interest, but only variables y ∈ RM can actually be observed, for which the theory of the respective research field provides a model y = s(x) for the forward process.
Since the transformation from x to y incurs an information loss, the intrinsic dimension m of y is in general smaller than D, even if the nominal dimensions satisfy M > D. Hence we want to express the inverse model as a conditional probability p(x |y), but its mathematical derivation from the forward model is intractable in the applications we are going to address. We aim at approximating p(x |y) by a tractable model q(x |y), taking advantage of the possibility to create an arbitrary amount of training data {(xi, yi)}_{i=1}^N from the known forward model s(x) and a suitable prior p(x). While this would allow for training of a standard regression model, we want to approximate the full posterior probability. To this end, we introduce a latent random variable z ∈ RK drawn from a multi-variate standard normal distribution and reparametrize q(x |y) in terms of a deterministic function g of y and z, represented by a neural network with parameters θ:
x = g(y, z; θ) with z ∼ p(z) = N(z; 0, I_K). (1)
Note that we distinguish between hidden parameters x representing unobservable real-world properties and latent variables z carrying information intrinsic to our model. Choosing a Gaussian prior for z poses no additional limitation, as proven by the theory of non-linear independent component analysis (Hyvärinen and Pajunen, 1999). In contrast to standard methodology, we propose to learn the model g(y, z; θ) of the inverse process jointly with a model f(x; θ) approximating the known forward process s(x):
[y, z] = f(x; θ) = [fy(x; θ), fz(x; θ)] = g−1(x; θ) with fy(x; θ) ≈ s(x). (2)
Functions f and g share the same parameters θ and are implemented by a single invertible neural network. Our experiments show that joint bi-directional training of f and g avoids many complications arising in e.g. cVAEs or Bayesian neural networks, which have to learn the forward process implicitly. The relation f = g−1 is enforced by the invertible network architecture, provided that the nominal and intrinsic dimensions of both sides match. When m ≤ M denotes the intrinsic dimension of y, the latent variable z must have dimension K = D − m, assuming that the intrinsic dimension of x equals its nominal dimension D. If the resulting nominal output dimension M + K exceeds D, we augment the input with a vector x0 ∈ R^{M+K−D} of zeros and replace x with the concatenation [x, x0] everywhere. Combining these definitions, our network expresses q(x |y) as
q(x = g(y, z; θ) | y) = p(z) |Jx|^{−1}, with Jx = det( ∂g(y, z; θ) / ∂[y, z] |_{[y, fz(x)]} ) (3)
with Jacobian determinant Jx. When using coupling layers, according to Dinh et al. (2016), computation of Jx is simple, as each transformation has a triangular Jacobian matrix.

3.2 Invertible architecture

To create a fully invertible neural network, we follow the architecture proposed by Dinh et al. (2016): The basic unit of this network is a reversible block consisting of two complementary affine coupling layers. Hereby, the block's input vector u is split into two halves, u1 and u2, which are transformed by an affine function with coefficients exp(si) and ti (i ∈ {1, 2}), using element-wise multiplication (⊙) and addition:
v1 = u1 ⊙ exp(s2(u2)) + t2(u2), v2 = u2 ⊙ exp(s1(v1)) + t1(v1). (4)
Given the output v = [v1, v2], these expressions are trivially invertible:
u2 = (v2 − t1(v1)) ⊙ exp(−s1(v1)), u1 = (v1 − t2(u2)) ⊙ exp(−s2(u2)). (5)
Importantly, the mappings si and ti can be arbitrarily complicated functions of v1 and u2 and need not themselves be invertible.
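To make Eqs. (4) and (5) concrete, here is a minimal sketch of one reversible block in PyTorch; the subnetwork architecture and sizes are illustrative assumptions on our part, not the exact configuration used in the paper.

```python
# Sketch of a single affine coupling block (Eqs. 4 and 5), forward and inverse.
import torch
import torch.nn as nn

def subnet(dim_in: int, dim_out: int, hidden: int = 128) -> nn.Module:
    # s_i and t_i need not be invertible; a small MLP suffices (assumption: 2 hidden layers).
    return nn.Sequential(nn.Linear(dim_in, hidden), nn.LeakyReLU(),
                         nn.Linear(hidden, hidden), nn.LeakyReLU(),
                         nn.Linear(hidden, dim_out))

class CouplingBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.d1 = dim // 2
        self.d2 = dim - self.d1
        self.s1, self.t1 = subnet(self.d1, self.d2), subnet(self.d1, self.d2)
        self.s2, self.t2 = subnet(self.d2, self.d1), subnet(self.d2, self.d1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        u1, u2 = u[:, :self.d1], u[:, self.d1:]
        v1 = u1 * torch.exp(self.s2(u2)) + self.t2(u2)      # Eq. (4), first half
        v2 = u2 * torch.exp(self.s1(v1)) + self.t1(v1)      # Eq. (4), second half
        return torch.cat([v1, v2], dim=1)

    def inverse(self, v: torch.Tensor) -> torch.Tensor:
        v1, v2 = v[:, :self.d1], v[:, self.d1:]
        u2 = (v2 - self.t1(v1)) * torch.exp(-self.s1(v1))   # Eq. (5)
        u1 = (v1 - self.t2(u2)) * torch.exp(-self.s2(u2))
        return torch.cat([u1, u2], dim=1)
```

Stacking several such blocks, with fixed random permutations in between, yields the deep invertible network described next.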
In our implementation, they are realized by a succession of several fully connected layers with leaky ReLU activations. A deep invertible network is composed of a sequence of these reversible blocks. To increase model capacity, we apply a few simple extensions to this basic architecture. Firstly, if the dimension D is small, but a complex transformation has to be learned, we find it advantageous to pad both the in- and output of the network with an equal number of zeros. This does not change the intrinsic dimensions of in- and output, but enables the network's interior layers to embed the data into a larger representation space in a more flexible manner. Secondly, we insert permutation layers between reversible blocks, which shuffle the elements of the subsequent layer's input in a randomized, but fixed, way. This causes the splits u = [u1, u2] to vary between layers and enhances interaction among the individual variables. Kingma and Dhariwal (2018) use a similar architecture with learned permutations.

3.3 Bi-directional training

Invertible networks offer the opportunity to simultaneously optimize for losses on both the in- and output domains (Grover et al., 2017), which allows for more effective training. Hereby, we perform forward and backward iterations in an alternating fashion, accumulating gradients from both directions before performing a parameter update. For the forward iteration, we penalize deviations between simulation outcomes yi = s(xi) and network predictions fy(xi) with a loss Ly(yi, fy(xi)). Depending on the problem, Ly can be any supervised loss, e.g. squared loss for regression or cross-entropy for classification. The loss for latent variables penalizes the mismatch between the joint distribution of network outputs q(y = fy(x), z = fz(x)) = p(x)/|Jyz| and the product of marginal distributions of simulation outcomes p(y = s(x)) = p(x)/|Js| and latents p(z) as Lz(q(y, z), p(y) p(z)). We block the gradients of Lz with respect to y to ensure the resulting updates only affect the predictions of z and do not worsen the predictions of y. Thus, Lz enforces two things: firstly, the generated z must follow the desired normal distribution p(z); secondly, y and z must be independent upon convergence (i.e. p(z |y) = p(z)), and not encode the same information twice. As Lz is implemented by Maximum Mean Discrepancy D (Sec. 3.4), which only requires samples from the distributions to be compared, the Jacobian determinants Jyz and Js do not have to be known explicitly. In appendix Sec. 1, we prove the following theorem:
Theorem: If an INN f(x) = [y, z] is trained as proposed, and both the supervised loss Ly = E[(y − fy(x))²] and the unsupervised loss Lz = D(q(y, z), p(y) p(z)) reach zero, sampling according to Eq. 1 with g = f−1 returns the true posterior p(x |y∗) for any measurement y∗.
Although Ly and Lz are sufficient asymptotically, a small amount of residual dependency between y and z remains after a finite amount of training. This causes q(x |y) to deviate from the true posterior p(x |y). To speed up convergence, we also define a loss Lx on the input side, implemented again by MMD. It matches the distribution of backward predictions q(x) = p(y = fy(x)) p(z = fz(x)) / |Jx| against the prior data distribution p(x) through Lx(p(x), q(x)). In the appendix, Sec. 1, we prove that Lx is guaranteed to be zero when the forward losses Ly and Lz have converged to zero. Thus, incorporating Lx does not alter the optimum, but improves convergence in practice.
Finally, if we use padding on either network side, loss terms are needed to ensure no information is encoded in the additional dimensions. We a) use a squared loss to keep those values close to zero and b) in an additional inverse training pass, overwrite the padding dimensions with noise of the same amplitude and minimize a reconstruction loss, which forces these dimensions to be ignored.

3.4 Maximum mean discrepancy

Maximum Mean Discrepancy (MMD) is a kernel-based method for comparison of two probability distributions that are only accessible through samples (Gretton et al., 2012). While a trainable discriminator loss is often preferred for this task in high-dimensional problems, especially in GAN-based image generation, MMD also works well, is easier to use and much cheaper, and leads to more stable training (Tolstikhin et al., 2017). The method requires a kernel function as a design parameter, and we found that kernels with heavier tails than Gaussian are needed to get meaningful gradients for outliers. We achieved best results with the Inverse Multiquadratic k(x, x′) = 1/(1 + ‖(x − x′)/h‖₂²), reconfirming the suggestion from Tolstikhin et al. (2017). Since the magnitude of the MMD depends on the kernel choice, the relative weights of the losses Lx, Ly, Lz are adjusted as hyperparameters, such that their effect is about equal.

Figure 2: Viability of INN for a basic inverse problem (panels: Ground truth; INN, all losses; INN, only Ly + Lz; INN, only Lx). The task is to produce the correct (multi-modal) distribution of 2D points x, given only the color label y∗. When trained with all loss terms from Sec. 3.3, the INN output matches ground truth almost exactly (2nd image). The ablations (3rd and 4th image) show that we need Ly and Lz to learn the conditioning correctly, whereas Lx helps us remain faithful to the prior.

4 Experiments

We first demonstrate the capabilities of INNs on two well-behaved synthetic problems and then show results for two real-world applications from the fields of medicine and astrophysics. Additional details on the datasets and network architectures are provided in the appendix.

4.1 Artificial data

Gaussian mixture model: To test basic viability of INNs for inverse problems, we train them on a standard 8-component Gaussian mixture model p(x). The forward process is very simple: The first four mixture components (clockwise) are assigned label y = red, the next two get label y = blue, and the final two are labeled y = green and y = purple (Fig. 2). The true inverse posteriors p(x |y∗) consist of the mixture components corresponding to the given one-hot-encoded label y∗. We train the INN to directly regress one-hot vectors y using a squared loss Ly, so that we can provide plain one-hot vectors y∗ to the inverse network when sampling p(x |y∗). We observe the following: (i) The INN learns very accurate approximations of the posteriors and does not suffer from mode collapse. (ii) The coupling block architecture does not reduce the network's representational power – results are similar to standard networks of comparable size (see appendix Sec. 2). (iii) Bidirectional training works best, whereas forward training alone (using only Ly and Lz) captures the conditional relationships properly, but places too much mass in unpopulated regions of x-space. Conversely, pure inverse training (just Lx) learns the correct x-distribution, but loses all conditioning information.
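Before moving on, for reference, here is a minimal sketch of the (biased) MMD estimate used for Lz and Lx with the inverse multiquadratic kernel from Sec. 3.4; the batch estimator and the single fixed bandwidth h are simplifying assumptions on our part.

```python
# Sketch: (biased) MMD estimate between two sample batches with the
# inverse multiquadratic kernel k(x, x') = 1 / (1 + ||(x - x') / h||^2).
import torch

def imq_kernel(a: torch.Tensor, b: torch.Tensor, h: float) -> torch.Tensor:
    d2 = torch.cdist(a, b) ** 2               # pairwise squared Euclidean distances
    return 1.0 / (1.0 + d2 / h ** 2)

def mmd(x: torch.Tensor, y: torch.Tensor, h: float = 0.2) -> torch.Tensor:
    # x: (n, dim) samples from one distribution, y: (m, dim) samples from the other.
    return (imq_kernel(x, x, h).mean()
            + imq_kernel(y, y, h).mean()
            - 2.0 * imq_kernel(x, y, h).mean())
```

In practice, heavier-tailed kernels such as this one keep the gradients from vanishing for outlier samples, which is the reason given above for preferring it over a Gaussian kernel.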
Inverse kinematics: For a task with a more complex and continuous forward process, we simulate a simple inverse kinematics problem in 2D space: An articulated arm moves vertically along a rail and rotates at three joints. These four degrees of freedom constitute the parameters x. Their priors are given by a normal distribution, which favors a pose with 180◦ angles and centered origin. The forward process is to calculate the coordinates of the end point y, given a configuration x. The inverse problem asks for the posterior distribution over all possible inputs x that place the arm’s end point at a given y position. An example for a fixed y∗ is shown in Fig. 3, where we compare our INN to a conditional VAE (see appendix Fig. 7 for conceptual comparison of architectures). Adding Inverse Autoregressive Flow (IAF, Kingma et al., 2016) does not improve cVAE performance in this case (see appendix, Table 2). The y∗ chosen in Fig. 3 is a hard example, as it is unlikely under the prior p(x) (Fig. 3, right) and has a strongly bi-modal posterior p(x |y∗). In this case, due to the computationally cheap forward process, we can use approximate Bayesian computation (ABC, see appendix Sec. 7) to sample from the ground truth posterior. Compared to ground truth, we find that both INN and cVAE recover the two symmetric modes well. However, the true end points of x-samples produced by the cVAE tend to miss the target y∗ by a wider margin. This is because the forward process x→ y is only learned implicitly during cVAE training. See appendix for quantitative analysis and details. 4.2 Real-world applications After demonstrating the viability on synthetic data, we apply our method to two real world problems from medicine and astronomy. While we focus on the medical task in the following, the astronomy application is shown in Fig. 5. In medical science, the functional state of biological tissue is of interest for many applications. Tumors, for example, are expected to show changes in oxygen saturation sO2 (Hanahan and Weinberg, 2011). Such changes cannot be measured directly, but influence the reflectance of the tissue, which can be measured by multispectral cameras (Lu and Fei, 2014). Since ground truth data can not be obtained from living tissue, we create training data by simulating observed spectra y from a tissue model x involving sO2 , blood volume fraction vhb, scattering magnitude amie, anisotropy g and tissue layer thickness d (Wirkert et al., 2016). This model constitutes the forward process, and traditional methods to learn point estimates of the inverse (Wirkert et al., 2016; 2017; Claridge and Hidovic-Rowe, 2013) are already sufficiently reliable to be used in clinical trials. However, these methods can not adequately express uncertainty and ambiguity, which may be vital for an accurate diagnosis. Competitors. We train an INN for this problem, along with two ablations (as in Fig. 2), as well as a cVAE with and without IAF (Kingma et al., 2016) and a network using the method of Kendall and Gal (2017), with dropout sampling and additional aleatoric error terms for each parameter. The latter also provides a point-estimate baseline (classical NN) when used without dropout and error terms, which matches the current state-of-the-art results in Wirkert et al. (2017). Finally, we compare to ABC, approximating p(x |y∗) with the 256 samples closest to y∗. Note that with enough samples ABC would produce the true posterior. 
We performed 50 000 simulations to generate samples for ABC at test time, taking one week on a GPU, but still measured inconsistencies in the posteriors. The learning-based methods are trained within minutes, on a training set of 15 000 samples generated offline.
Error measures. We are interested in both the accuracy (point estimates), and the shape of the posterior distributions. For point estimates x̂, i.e. MAP estimates, we compute the deviation from ground-truth values x∗ in terms of the RMSE over test set observations y∗, RMSE = √Ey∗[‖x̂ − x∗‖²]. The scores are reported both for the main parameter of interest sO2, and the parameter subspace of sO2, vhb, amie, which we found to be the only recoverable parameters. Furthermore, we check the re-simulation error: We apply the simulation s(x̂) to the point estimate, and compare the simulation outcome to the conditioning y∗. To evaluate the shape of the posteriors, we compute the calibration error for the sampling-based methods, based on the fraction of ground truth inliers αinl. for the corresponding α-confidence-regions of the marginal posteriors of x. The reported error is the median of |αinl. − α| over all α. All values are computed over 5000 test-set observations y∗, or 1000 observations in the case of re-simulation error. Each posterior uses 4096 samples, or 256 for ABC; all MAP estimates are found using the mean-shift algorithm.
Quantitative results. Evaluation results for all methods are presented in Table 1. The INN matches or outperforms other methods in terms of point estimate error. Its accuracy deteriorates slightly when trained without Lx, and entirely when trained without the conditioning losses Ly and Lz, just as in Fig. 2. For our purpose, the calibration error is the most important metric, as it summarizes the correctness of the whole posterior distribution in one number (see appendix Fig. 11). Here, the INN has a big lead over cVAE(-IAF) and Dropout, and even over ABC due to the low ABC sample count.
Qualitative results. Fig. 4 shows generated parameter distributions for one fixed measurement y∗, comparing the INN to cVAE-IAF, Dropout sampling and ABC. The three former methods use a sample count of 160 000 to produce smooth curves. Due to the sparse posteriors of 256 samples in the case of ABC, kernel density estimation was applied to its results, with a bandwidth of σ = 0.1. The results produced by the INN provide relevant insights: First, we find that the posteriors for layer thickness d and anisotropy g match the shape of their priors, i.e. y∗ holds no information about these parameters – they are unrecoverable. This finding is supported by the ABC results, whereas the other two methods misleadingly suggest a roughly Gaussian posterior. Second, we find that the sampled distributions for the blood volume fraction vhb and scattering amplitude amie are strongly correlated (rightmost plot). This phenomenon is not an analysis artifact, but has a sound physical explanation: As blood volume fraction increases, more light is absorbed inside the tissue. For the sensor to record the same intensities y∗ as before, scattering must be increased accordingly. In Fig. 10 in the appendix, we show how the INN is applied to real multispectral images.

5 Conclusion

We have shown that the full posterior of an inverse problem can be estimated with invertible networks, both theoretically and practically on problems from medicine and astrophysics.
We share the excitement of the application experts to develop INNs as a generic tool, helping them to better interpret their data and models, and to improve experimental setups. As a side effect, our results confirm the findings of others that the restriction to coupling layers does not noticeably reduce the expressive power of the network. In summary, we see the following fundamental advantages of our INN-based method compared to alternative approaches: Firstly, one can learn the forward process and obtain the (more complicated) inverse process ‘for free’, as opposed to e.g. cGANs, which focus on the inverse and learn the forward process only implicitly. Secondly, the learned posteriors are not restricted to a particular parametric form, in contrast to classical variational methods. Lastly, in comparison to ABC and related Bayesian methods, the generation of the INN posteriors is computationally very cheap. In future work, we plan to systematically analyze the properties of different invertible architectures, as well as more flexible models utilizing cycle losses, in the context of representative inverse problems. We are also interested in how our method can be scaled up to higher dimensionalities, where MMD becomes less effective.

Acknowledgments

LA received funding by the Federal Ministry of Education and Research of Germany, project ‘High Performance Deep Learning Framework’ (No 01IH17002). JK, CR and UK received financial support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 647769). SW and LMH received funding from the European Research Council (ERC) starting grant COMBIOSCOPY (637960). EWP, DR, and RSK acknowledge support by Collaborative Research Centre (SFB 881) ‘The Milky Way System’ (subprojects B1, B2 and B8), the Priority Program SPP 1573 ‘Physics of the Interstellar Medium’ (grant numbers KL 1358/18.1, KL 1358/19.2 and GL 668/2-1) and the European Research Council in the ERC Advanced Grant STARLIGHT (project no. 339177).

1 Proof of correctness of generated posteriors

Lemma: If some bijective function f : x → z transforms a probability density pX(x) to pZ(z), then the inverse function f−1 transforms pZ(z) back to pX(x).
Proof: We denote the probability density obtained through the reverse transformation as p∗X(x). Therefore, we have to show that p∗X(x) = pX(x). For the forward direction, via the change-of-variables formula, we have
pZ(z) = pX(x = f−1(z)) |det[∂z(f−1)]| (6)
with the Jacobian ∂z f−1 ≡ ∂f−1_i / ∂z_j. For the reverse transformation, we have
p∗X(x) = pZ(z = f(x)) |det[∂x f]|. (7)
We can substitute pZ from Eq. 6 and obtain
p∗X(x) = pX(x = f−1(f(x))) |det[(∂z(f−1))(∂x f)]| (8)
= pX(x) |det[(∂z f−1)(∂x f)]| (9)
= pX(x) |det[I]| = pX(x). (10)
In Eq. 9, the Jacobians cancel out due to the inverse function theorem, i.e. the Jacobian ∂z(f−1) is the matrix inverse of ∂x f.
Theorem: If an INN f(x) = [y, z] is trained as proposed, and both the supervised loss Ly = E[(y − fy(x))²] and the unsupervised loss Lz = D(q(y, z), p(y) p(z)) reach zero, sampling according to Eq. 1 with g = f−1 returns the true posterior p(x |y∗) for any measurement y∗.
Proof: We denote the chosen latent distribution as pZ(z), the distribution of observations as pY (y), and the joint distribution of network outputs as q(y, z). As shown by Gretton et al.
(2012), if the MMD loss converges to 0, the network outputs follow the prescribed distribution: Lz = 0 ⇐⇒ q(y, z) = pY (y) pZ(z) (11) Suppose we take a posterior conditioned on a fixed y∗, i.e. p(x |y∗), and transform it using the forward pass of our perfectly converged INN. From this we obtain an output distribution q∗(y, z). Because Ly = 0, we know that the output distribution of y (marginalized over z) must be q∗(y) = δ(y−y∗). Also, because of the independence between z and y in the output, the distribution of z-outputs is still q∗(z) = pZ(z). So the joint distribution of outputs is q∗(y, z) = δ(y − y∗) pZ(z) (12) When we invert the network, and repeatedly input y∗ while sampling z ∼ pZ(z), this is the same as sampling [y, z] from the q∗(y, z) above. Using the Lemma from above, we know that the inverted network will output samples from p(x |y∗). Corollary: If the conditions of the theorem above are fulfilled, the unsupervised reverse loss Lx = D ( q(x), pX(x) ) between the marginalized outputs of the inverted network, q(x), and the prior data distribution, pX(x), will also be 0. This justifies using the loss on the prior to speed up convergence, without altering the final results. Proof: Due to the theorem, the estimated posteriors generated by the INN are correct, i.e. q(x |y∗) = p(x |y∗). If they are marginalized over observations y∗ from the training data, then q(x) will be equal to pX(x) by definition. As shown by Gretton et al. (2012), this is equivalent to Lx = 0. 2 Artificial data – Gaussian mixture In Sec. 4.1, we demonstrate that the proposed INN can approximate the true posteriors very well and is not hindered by the required coupling block architecture. Here we show how some existing methods do on the same task, using neural networks of similar size as the INN. cGAN Training a conditional GAN of network size comparable to the INN (counting only the generator) and only two noise dimensions turned out to be challenging. Even with additional pre-training to avoid mode collapse, the individual modes belonging to one label are reduced to nearly one-dimensional structures. Larger cGAN In order to match the results of the INN, we trained a more complex cGAN with 2M parameters instead of the previous 10K, and a latent dimension of 128, instead of 2. To prevent mode collapse, we introduced an additional regularization: an extra loss term forces the variance of generator outputs to match the variance of the training data prior. With these changes, the cGAN can be seen to recover the posteriors reasonably well. Generator + MMD Another option is to keep the cGAN generator the same size as our INN, but replace the discriminator with an MMD loss (cf. Sec. 3.4). This loss receives a concatenation of the generator output x and the label y it was supplied with, and compares these batch-wise with the concatenation of ground truth (x,y)-pairs. Note that in contrast to this, the corresponding MMD loss of the INN only receives x, and no information about y. For this small toy problem, we find that the hand-crafted MMD loss dramatically improves results compared to the smaller learned discriminator. cVAE We also compare to a conditional Variational Autoencoder of same total size as the INN. There is some similarity between the training setup of our method (Fig. 7, right) and that of cVAE (Fig. 7, left), as the forward and inverse pass of an INN can also be seen as an encoder-decoder pair. 
The main differences are that the cVAE learns the relationship x → y only indirectly, since there is no explicit loss for it, and that the INN requires no reconstruction loss, since it is bijective by construction.

cVAE-IAF We adapt the cVAE to use Inverse Autoregressive Flow (Kingma et al., 2016) between the encoder and decoder. On the Gaussian mixture toy problem, the trained cVAE-IAF generates correct posteriors on par with our INN (see Fig. 6).

Dropout sampling The method of dropout sampling with learned error terms is by construction not able to produce multi-modal outputs, and therefore fails on this task.

2.1 Latent space analysis
To analyze how the latent space of our INN is structured for this task, we choose a fixed label y∗ and sample z from a dense grid. For each z, we compute x through our inverse network and colorize this point in latent (z) space according to the distance from the closest mode in x-space. We can see that our network learns to shape the latent space such that each mode receives the expected fraction of samples (Fig. 8).

3 Artificial data – inverse kinematics
A short video demonstrating the structure of our INN’s latent space can be found under https://gfycat.com/SoggyCleanHog, for a slightly different arm setup. The dataset is constructed using Gaussian priors xi ∼ N(0, σi), with σ1 = 0.25 and σ2 = σ3 = σ4 = 0.5 (corresponding to 28.65°). The forward process is given by

y1 = x1 + l1 sin(x2) + l2 sin(x3 − x2) + l3 sin(x4 − x2 − x3)   (13)
y2 = l1 cos(x2) + l2 cos(x3 − x2) + l3 cos(x4 − x2 − x3)   (14)

with the arm lengths l1 = 0.5, l2 = 0.5, l3 = 1.0. To judge the quality of posteriors, we quantify both the re-simulation error and the calibration error over the test set, as in Sec. 4.2 of the paper. Because of the cheap simulation, we average the re-simulation error over the whole posterior, and not only the MAP estimate. In Table 2, we find that the INN has a clear advantage in both metrics, confirming the observations from Fig. 3.

4 Multispectral measurements of biological tissue
The following figure shows the results when the INN trained in Sec. 4.2 is applied pixel-wise to multispectral endoscopic footage. In addition to estimating the oxygenation sO2, we measure the uncertainty in the form of the 68% confidence interval.
[Figure: a) median sO2, b) estimated uncertainty (68% confidence interval), c) RGB image.]

5 Star cluster spectral data
Results for one specific y are shown in Fig. 5. Note that our network recovers a decidedly multimodal distribution of x that visibly deviates from the prior p(x). Note also the strong correlations in the system. For example, the measurements y∗ investigated may correspond to a young cluster with large expansion velocity, or to an older system that expands slowly. Finding these ambiguities in p(x | y∗) and identifying degeneracies in the underlying model are pivotal aspects of astrophysical research, and a method to effectively approximate full posterior distributions has the potential to lead to a major breakthrough in this field.

6 Calibration curve for tissue parameter estimation
In Sec. 4.2, we report the median calibration error for each method. The following figure plots the calibration error, q_inliers − q, against the level of confidence q. Negative values mean that a model is overconfident, while positive values say the opposite.
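To make this calibration measure concrete, the following is a minimal sketch of how the inlier fraction and the resulting calibration curve can be computed from posterior samples. It is an illustrative reconstruction rather than the code used for the paper; the array shapes and variable names are assumptions.

```python
import numpy as np

def calibration_curve(posterior_samples, x_true, q_levels=np.linspace(0.01, 0.99, 99)):
    """posterior_samples: (n_observations, n_samples, n_params) array of posterior draws,
    x_true: (n_observations, n_params) ground-truth parameters."""
    q_inliers = []
    for q in q_levels:
        # central q-confidence interval of each marginal posterior
        lo = np.quantile(posterior_samples, 0.5 - q / 2, axis=1)   # (n_obs, n_params)
        hi = np.quantile(posterior_samples, 0.5 + q / 2, axis=1)
        inside = (x_true >= lo) & (x_true <= hi)
        q_inliers.append(inside.mean())                            # fraction of inliers
    q_inliers = np.array(q_inliers)
    calib_err = q_inliers - q_levels       # negative values -> overconfident model
    return q_levels, calib_err, np.median(np.abs(calib_err))
```

The median of |q_inliers − q| over the confidence levels gives the scalar calibration error reported for each method.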
7 Approximate Bayesian computation (ABC)
While there is a whole field of research concerned with ABC approaches and their efficiency-accuracy tradeoffs, our use of the method here is limited to the essential principle of rejection sampling. When we require N samples of x from the posterior p(x | y∗) conditioned on some y∗, there are two basic ways to obtain them:

Threshold: We set an acceptance threshold ε, repeatedly draw x-samples from the prior, compute the corresponding y-values (via simulation) and keep those where dist(y, y∗) < ε, until we have accepted N samples. The smaller we want ε, the more simulations have to be run, which is why we use this approach only for the experiment in Sec. 4.1, where we can afford to run the forward process millions or even billions of times.

Quantile: Alternatively, we choose what quantile q of samples shall be accepted, and then run exactly N/q simulations. All sampled pairs (x, y) are sorted by dist(y, y∗) and the N closest to y∗ form the posterior. This allows for a more predictable runtime when the simulations are costly, as in the medical application in Sec. 4.2 where q = 0.005. (A minimal code sketch of both variants is given after the architecture details below.)

8 Details of datasets and network architectures
Table 3 summarizes the datasets used throughout the paper. The architecture details are given in the following.

8.1 Artificial data – Gaussian mixture
INN: 3 invertible blocks, 3 fully connected layers per affine coefficient function with ReLU activation functions in the intermediate layers, zero padding to a nominal dimension of 16, Adam optimizer, decaying learning rate from 10⁻³ to 10⁻⁵, batch size 200. The inverse multiquadratic kernel was used for MMD, with h = 0.2 in both x- and z-space.
Dropout sampling: 6 fully connected layers with ReLU activations, Adam optimizer, learning rate decay from 10⁻³ to 10⁻⁵, batch size 200, dropout probability p = 0.2.
cGAN: 6 fully connected layers for the generator and 8 for the discriminator, all with leaky ReLU activations. Adam was used for the generator, SGD for the discriminator, learning rates decaying from 2·10⁻³ to 2·10⁻⁶, batch size 256. Initially 100 iterations of training with L = (1/N) Σᵢ ‖g(zᵢ, yᵢ) − xᵢ‖₂², to separate the differently labeled modes, followed by pure GAN training.
Larger cGAN: 2 fully connected layers with 1024 neurons each for discriminator and generator, batch size 512, Adam optimizer with learning rate 8·10⁻⁴ for the generator, SGD with learning rate 1.2·10⁻³ and momentum 0.05 for the discriminator, 1.6·10⁻³ weight decay for both, 0.25 dropout probability for the generator at training and test time. Equal weighting of discriminator loss and penalty of output variance L = (Varᵢ[g(zᵢ, yᵢ)] − Varᵢ[xᵢ])².
Generator with MMD: 8 fully connected layers with leaky ReLU activations, Adam optimizer, decaying learning rate from 10⁻³ to 10⁻⁶, batch size 256. Inverse multiquadratic kernel, h = 0.5.
cVAE: 3 fully connected layers each for encoder and decoder, ReLU activations, learning rate 2·10⁻², decay to 2.5·10⁻⁵, Adam optimizer, batch size 25, reconstruction loss weighted 50:1 versus KL divergence loss.

8.2 Artificial data – inverse kinematics
INN: 6 affine coupling blocks with 3 fully connected layers each and leaky ReLU activations. Adam optimizer, decaying learning rate from 10⁻² to 10⁻⁴, multiquadratic kernel with h = 1.2.
cVAE: 4 fully connected layers each for encoder and decoder, ReLU activations, learning rate 5·10⁻³, decay to 1.6·10⁻⁵, Adam optimizer, batch size 250, reconstruction loss weighted 15:1 versus KL divergence loss.
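As a concrete illustration of the two ABC variants from Sec. 7 above, a minimal rejection-sampling sketch could look as follows. It is illustrative only; `sample_prior` and `simulate` stand for the prior p(x) and the forward model s(x) and are assumed to be provided by the application.

```python
import numpy as np

def abc_threshold(sample_prior, simulate, y_star, eps, n_samples):
    """Threshold variant: accept prior draws whose simulated y lands within eps of y_star."""
    accepted = []
    while len(accepted) < n_samples:
        x = sample_prior()
        y = simulate(x)
        if np.linalg.norm(y - y_star) < eps:
            accepted.append(x)
    return np.stack(accepted)

def abc_quantile(sample_prior, simulate, y_star, n_samples, q=0.005):
    """Quantile variant: run n_samples / q simulations and keep the n_samples closest to y_star."""
    n_sim = int(np.ceil(n_samples / q))
    xs = np.stack([sample_prior() for _ in range(n_sim)])
    ys = np.stack([simulate(x) for x in xs])
    dist = np.linalg.norm(ys - y_star, axis=1)
    return xs[np.argsort(dist)[:n_samples]]
```

As described above, the threshold variant is only affordable when the simulation is extremely cheap, whereas the quantile variant fixes the simulation budget in advance.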
8.3 Functional parameter estimation from multispectral tissue images INN: 3 invertible blocks, 4 fully connected layers per affine coefficient function with leaky ReLUs in the intermediate layers, zero padding to double the original width. Adam optimizer, learning rate decay from 2 · 10−3 to 2 · 10−5, batch size 200. Inverse multiquadratic kernel with h = 1, weighted MMD terms by observation distance with decaying γ = 0.2 to 0. Dropout sampling/point estimate: 8 fully connected layers, ReLU activations, Adam with decaying learning rate from 10−2 to 10−5, batch size 100, dropout probability p = 0.2. cVAE: 4 fully connected layers each for encoder and decoder, ReLU activations, learning rate 10−3, decay to 3.2 · 10−6, Adam optimizer, batch size 25, reconstruction loss weighted 103:1 versus KL divergence loss. 8.4 Impact of star clusters on the dynamical evolution of the galactic gas INN: 5 invertible blocks, 4 fully connected layers per affine coefficient function with leaky ReLUs in the intermediate layers, no additional zero padding. Adam optimizer with decaying learning rate from 2 · 10−3 to 1.5 · 10−6, batch size 500. Kernel for latent space: k(z, z′) = exp(−‖(z − z′)/h‖2) with h = 7.1. Kernel for x-space: k(x,x′) = −‖x − x′‖1/41/2. Due to the complex nature of the prior distributions, this was the kernel found to capture the details correctly, whereas the peak of the inverse multiquadratic kernel was too broad for this purpose.
1. What is the main contribution of the paper regarding invertible networks for ambiguous inverse problems? 2. What are the strengths and weaknesses of the proposed method, particularly in its experimental application? 3. Do you have any concerns about the theoretical and technical aspects of the work? 4. How does the reviewer assess the significance and limitations of the paper's content?
Review
Review

1) Summary
The authors propose to use invertible networks to solve ambiguous inverse problems. This is done by training one group of Real-NVP output variables in a supervised fashion while training the other group via maximum likelihood under a Gaussian prior, as done in the standard Real-NVP. Further, the authors suggest to not only train the forward model, but also the inverse model with an MMD critic, similar to previous works that used a more flexible GAN critic [1].

2) Clarity
The paper is easy to understand and the main idea is well-motivated.

3) Significance
The main contribution of this work is of a conceptual nature and illustrates how invertible networks are a promising framework for many inverse problems. I really like the main idea and think it is inspiring. However, the experiments and technical contributions are rather limited.

Theoretical / ML contribution: Using an MMD to factorize groups of latent variables is well-known, and combining flow-based maximum likelihood training in the forward model with GAN-like objectives in the inverse model has been done before as well.

Experimental contribution: I am not fully convinced by the experiments. The inverse kinematics experiment shows that the posterior collapses from large uncertainty to almost a point for the right-most joint. This seems like a negative result to me. The medical experiment also seems rather limited, because if I understand correctly the tissue data is artificial and the proposed INN only outperforms the competitors (apart from ABC) on two out of three measurements. Further, the authors should have explained the experimental setup of the tissue experiment better, as it is not a standard task in the field. In the astronomy experiment, figure 4 shows strong correlations between some of the z variables; the authors claim that this is a feature of their method, but I argue that they should not be present if training with the factorial prior was successful. It would be good to show the correlation between y and z variables as well; if they show high dependencies, learning was not very successful. Simply eyeballing the shape of the posterior is not enough to conclude independence.

In summary, even though interesting, the significance of the experimental results is hard to judge, and I am a bit worried about how well the proposed model will perform on challenging realistic problems if it is already making some strange mistakes on artificial toy data.

4) Main Concerns
The authors claim that specifying a prior/posterior distribution in density modeling is complicated and typically the chosen distributions are too simplistic. This argument is, of course, valid, but they also have the same problem and specify z to be factorial Gaussian. So the same "hen-and-egg" problem applies here. The authors also seem to suggest that they are the first to train flow-based models in forward and inverse direction, but this has already been done in the flow-GAN paper [1]. MMD does not easily scale to high-dimensional problems; this is not a problem here, as all artificial problems considered are very low-dimensional. But when applying the proposed algorithm in realistic settings, one will likely need extensions of MMD, like those used in MMD GANs, which would introduce min/max games on both sides of the network. This will likely be hard to train and constitutes a fundamental limitation of the approach that needs to be discussed.

5) Minor Concerns
- Some basic citations on normalizing flows seem to be missing, e.g. [2,3].
- How does one guarantee that padded regions are actually zero on output when padding the input with zeros? Small variance in those dimensions could potentially code important information. Is this considered as part of y or z?
- The authors require the existence of an inverse and set this equal to bijectivity, but injectivity would be sufficient.
- The authors mention that z is conditioned on y, but in their notation, the conditional density p(z|y) never shows up explicitly. It should be made clear that p(z) = p(z|y) is a consequence of their additional MMD penalty and only holds at convergence.

[1] Grover et al., "Flow-GAN: Combining Maximum Likelihood and Adversarial Learning in Generative Models"
[2] Tabak and Turner, "Density estimation by dual ascent of the log-likelihood"
[3] Deco and Brauer, "Nonlinear higher-order statistical decorrelation by volume-conserving neural architectures"
ICLR
Title Analyzing Inverse Problems with Invertible Neural Networks Abstract For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements. Often, the forward process from parameterto measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement. To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost. Due to invertibility, a model of the corresponding inverse process is learned implicitly. Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space. We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters. 1 Introduction When analyzing complex physical systems, a common problem is that the system parameters of interest cannot be measured directly. For many of these systems, scientists have developed sophisticated theories on how measurable quantities y arise from the hidden parameters x. We will call such mappings the forward process. However, the inverse process is required to infer the hidden states of a system from measurements. Unfortunately, the inverse is often both intractable and ill-posed, since crucial information is lost in the forward process. To fully assess the diversity of possible inverse solutions for a given measurement, an inverse solver should be able to estimate the complete posterior of the parameters, conditioned on an observation. This makes it possible to quantify uncertainty, reveal multi-modal distributions, and identify degenerate and unrecoverable parameters – all highly relevant for applications in natural science. In this paper, we ask if invertible neural networks (INNs) are a suitable model class for this task. INNs are characterized by three properties: (i) The mapping from inputs to outputs is bijective, i.e. its inverse exists, (ii) both forward and inverse mapping are efficiently computable, and (iii) both mappings have a tractable Jacobian, which allows explicit computation of posterior probabilities. Networks that are invertible by construction offer a unique opportunity: We can train them on the well-understood forward process x→ y and get the inverse y→ x for free by running them backwards at prediction time. To counteract the inherent information loss of the forward process, we introduce additional latent output variables z, which capture the information about x that is not contained in y. Thus, our INN learns to associate hidden parameter values x with unique pairs [y, z] of measurements and latent variables. Forward training optimizes the mapping [y, z] = f(x) and implicitly determines its inverse x = f−1(y, z) = g(y, z). Additionally, we make sure that the density p(z) of the latent variables is shaped as a Gaussian distribution. 
Thus, the INN represents the desired posterior p(x |y) by a deterministic function x = g(y, z) that transforms (“pushes”) the known distribution p(z) to x-space, conditional on y. Compared to standard approaches (see Fig. 1, left), INNs circumvent a fundamental difficulty of learning inverse problems: Defining a sensible supervised loss for direct posterior learning is problematic since it requires prior knowledge about that posterior’s behavior, constituting a kind of hen-end-egg problem. If the loss does not match the possibly complicated (e.g. multimodal) shape of the posterior, learning will converge to incorrect or misleading solutions. Since the forward process is usually much simpler and better understood, forward training diminishes this difficulty. Specifically, we make the following contributions: • We show that the full posterior of an inverse problem can be estimated with invertible networks, both theoretically in the asymptotic limit of zero loss, and practically on synthetic and real-world data from astrophysics and medicine. • The architectural restrictions imposed by invertibility do not seem to have detrimental effects on our network’s representational power. • While forward training is sufficient in the asymptotic limit, we find that a combination with unsupervised backward training improves results on finite training sets. • In our applications, our approach to learning the posterior compares favourably to approximate Bayesian computation (ABC) and conditional VAEs. This enables identifying unrecoverable parameters, parameter correlations and multimodalities. 2 Related work Modeling the conditional posterior of an inverse process is a classical statistical task that can in principle be solved by Bayesian methods. Unfortunately, exact Bayesian treatment of real-world problems is usually intractable. The most common (but expensive) solution is to resort to sampling, typically by a variant of Markov Chain Monte Carlo (Robert and Casella, 2004; Gamerman and Lopes, 2006). If a model y = s(x) for the forward process is available, approximate Bayesian computation (ABC) is often preferred, which embeds the forward model in a rejection sampling scheme for the posterior p(x|y) (Sunnåker et al., 2013; Lintusaari et al., 2017; Wilkinson, 2013). Variational methods offer a more efficient alternative, approximating the posterior by an optimally chosen member of a tractable distribution family (Blei et al., 2017). Neural networks can be trained to predict accurate sufficient statistics for parametric posteriors (Papamakarios and Murray, 2016; Siddharth et al., 2017), or can be designed to learn a mean-field distribution for the network’s weights via dropout variational inference (Gal and Ghahramani, 2015; Kingma et al., 2015). Both ideas can be combined (Kendall and Gal, 2017) to differentiate between data-related and model-related uncertainty. However, the restriction to limited distribution families fails if the true distribution is too complex (e.g. when it requires multiple modes to represent ambiguous or degenerate solutions) and essentially counters the ability of neural networks to act as universal approximators. Conditional GANs (cGANs; Mirza and Osindero, 2014; Isola et al., 2017) overcome this restriction in principle, but often lack satisfactory diversity in practice (Zhu et al., 2017b). For our tasks, conditional variational autoencoders (cVAEs; Sohn et al., 2015) perform better than cGANs, and are also conceptually closer to our approach (see appendix Sec. 
2), and hence serve as a baseline in our experiments. Generative modeling via learning of a non-linear transformation between the data distribution and a simple prior distribution (Deco and Brauer, 1995; Hyvärinen and Pajunen, 1999) has the potential to solve these problems. Today, this approach is often formulated as a normalizing flow (Tabak et al., 2010; Tabak and Turner, 2013), which gradually transforms a normal density into the desired data density and relies on bijectivity to ensure the mapping’s validity. These ideas were applied to neural networks by Deco and Brauer (1995); Rippel and Adams (2013); Rezende and Mohamed (2015) and refined by Tomczak and Welling (2016); Berg et al. (2018); Trippe and Turner (2018). Today, the most common realizations use auto-regressive flows, where the density is decomposed according to the Bayesian chain rule (Kingma et al., 2016; Huang et al., 2018; Germain et al., 2015; Papamakarios et al., 2017; Oord et al., 2016; Kolesnikov and Lampert, 2017; Salimans et al., 2017; Uria et al., 2016). These networks successfully learned unconditional generative distributions for artificial data and standard image sets (e.g. MNIST, CelebA, LSUN bedrooms), and some encouraging results for conditional modeling exist as well (Oord et al., 2016; Salimans et al., 2017; Papamakarios et al., 2017; Uria et al., 2016). These normalizing flows possess property (i) of an INN, and are usually designed to fulfill requirement (iii) as well. In other words, flow-based networks are invertible in principle, but the actual computation of their inverse is too costly to be practical, i.e. INN property (ii) is not fulfilled. This precludes the possibility of bi-directional or cyclic training, which has been shown to be very beneficial in generative adversarial nets and auto-encoders (Zhu et al., 2017a; Dumoulin et al., 2016; Donahue et al., 2017; Teng et al., 2018). In fact, optimization for cycle consistency forces such models to converge to invertible architectures, making fully invertible networks a natural choice. True INNs can be built using coupling layers, as introduced in the NICE (Dinh et al., 2014) and RealNVP (Dinh et al., 2016) architectures. Despite their simple design and training, these networks were rarely studied: Gomez et al. (2017) used a NICE-like design as a memory-efficient alternative to residual networks, Jacobsen et al. (2018) demonstrated that the lack of information reduction from input to representation does not cause overfitting, and Schirrmeister et al. (2018) trained such a network as an adverserial autoencoder. Danihelka et al. (2017) showed that minimization of an adversarial loss is superior to maximum likelihood training in RealNVPs, whereas the Flow-GAN of Grover et al. (2017) performs even better using bidirectional training, a combination of maximum likelihood and adverserial loss. The Glow architecture by Kingma and Dhariwal (2018) incorporates invertible 1x1 convolutions into RealNVPs to achieve impressive image manipulations. This line of research inspired us to extend RealNVPs for the task of computing posteriors in real-world inverse problems from natural and life sciences. 3 Methods 3.1 Problem specification We consider a common scenario in natural and life sciences: Researchers are interested in a set of variables x ∈ RD describing some phenomenon of interest, but only variables y ∈ RM can actually be observed, for which the theory of the respective research field provides a model y = s(x) for the forward process. 
Since the transformation from x to y incurs an information loss, the intrinsic dimension m of y is in general smaller than D, even if the nominal dimensions satisfy M > D. Hence we want to express the inverse model as a conditional probability p(x | y), but its mathematical derivation from the forward model is intractable in the applications we are going to address. We aim at approximating p(x | y) by a tractable model q(x | y), taking advantage of the possibility to create an arbitrary amount of training data {(xi, yi)}, i = 1…N, from the known forward model s(x) and a suitable prior p(x). While this would allow for training of a standard regression model, we want to approximate the full posterior probability. To this end, we introduce a latent random variable z ∈ R^K drawn from a multi-variate standard normal distribution and reparametrize q(x | y) in terms of a deterministic function g of y and z, represented by a neural network with parameters θ:

x = g(y, z; θ) with z ∼ p(z) = N(z; 0, I_K).   (1)

Note that we distinguish between hidden parameters x representing unobservable real-world properties and latent variables z carrying information intrinsic to our model. Choosing a Gaussian prior for z poses no additional limitation, as proven by the theory of non-linear independent component analysis (Hyvärinen and Pajunen, 1999). In contrast to standard methodology, we propose to learn the model g(y, z; θ) of the inverse process jointly with a model f(x; θ) approximating the known forward process s(x):

[y, z] = f(x; θ) = [fy(x; θ), fz(x; θ)] = g⁻¹(x; θ) with fy(x; θ) ≈ s(x).   (2)

Functions f and g share the same parameters θ and are implemented by a single invertible neural network. Our experiments show that joint bi-directional training of f and g avoids many complications arising in e.g. cVAEs or Bayesian neural networks, which have to learn the forward process implicitly. The relation f = g⁻¹ is enforced by the invertible network architecture, provided that the nominal and intrinsic dimensions of both sides match. When m ≤ M denotes the intrinsic dimension of y, the latent variable z must have dimension K = D − m, assuming that the intrinsic dimension of x equals its nominal dimension D. If the resulting nominal output dimension M + K exceeds D, we augment the input with a vector x0 ∈ R^(M+K−D) of zeros and replace x with the concatenation [x, x0] everywhere. Combining these definitions, our network expresses q(x | y) as

q(x = g(y, z; θ) | y) = p(z) · |Jx|⁻¹,   Jx = det( ∂g(y, z; θ) / ∂[y, z] |_{y, fz(x)} )   (3)

with Jacobian determinant Jx. When using coupling layers, according to Dinh et al. (2016), computation of Jx is simple, as each transformation has a triangular Jacobian matrix.

3.2 Invertible architecture
To create a fully invertible neural network, we follow the architecture proposed by Dinh et al. (2016): The basic unit of this network is a reversible block consisting of two complementary affine coupling layers. Hereby, the block’s input vector u is split into two halves, u1 and u2, which are transformed by an affine function with coefficients exp(si) and ti (i ∈ {1, 2}), using element-wise multiplication (⊙) and addition:

v1 = u1 ⊙ exp(s2(u2)) + t2(u2),   v2 = u2 ⊙ exp(s1(v1)) + t1(v1).   (4)

Given the output v = [v1, v2], these expressions are trivially invertible:

u2 = (v2 − t1(v1)) ⊙ exp(−s1(v1)),   u1 = (v1 − t2(u2)) ⊙ exp(−s2(u2)).   (5)

Importantly, the mappings si and ti can be arbitrarily complicated functions of v1 and u2 and need not themselves be invertible.
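A minimal sketch of such a reversible block, directly following Eqs. (4) and (5), could look as follows in PyTorch. This is an illustrative reconstruction, not the reference implementation; the sub-network sizes are placeholders.

```python
import torch
import torch.nn as nn

def subnet(dim_in, dim_out, hidden=128):
    # coefficient functions s_i, t_i: small fully connected nets (need not be invertible)
    return nn.Sequential(nn.Linear(dim_in, hidden), nn.LeakyReLU(),
                         nn.Linear(hidden, dim_out))

class AffineCouplingBlock(nn.Module):
    """One reversible block: two complementary affine coupling layers, Eqs. (4)/(5)."""
    def __init__(self, dim):
        super().__init__()
        self.d1 = dim // 2
        self.d2 = dim - self.d1
        self.s1, self.t1 = subnet(self.d1, self.d2), subnet(self.d1, self.d2)
        self.s2, self.t2 = subnet(self.d2, self.d1), subnet(self.d2, self.d1)

    def forward(self, u):
        u1, u2 = u[:, :self.d1], u[:, self.d1:]
        v1 = u1 * torch.exp(self.s2(u2)) + self.t2(u2)      # Eq. (4), first half
        v2 = u2 * torch.exp(self.s1(v1)) + self.t1(v1)      # Eq. (4), second half
        return torch.cat([v1, v2], dim=1)

    def inverse(self, v):
        v1, v2 = v[:, :self.d1], v[:, self.d1:]
        u2 = (v2 - self.t1(v1)) * torch.exp(-self.s1(v1))   # Eq. (5)
        u1 = (v1 - self.t2(u2)) * torch.exp(-self.s2(u2))
        return torch.cat([u1, u2], dim=1)
```

The block is invertible regardless of how the sub-networks realizing si and ti are chosen, which is exactly the point made above.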
In our implementation, they are realized by a succession of several fully connected layers with leaky ReLU activations. A deep invertible network is composed of a sequence of these reversible blocks. To increase model capacity, we apply a few simple extensions to this basic architecture. Firstly, if the dimension D is small, but a complex transformation has to be learned, we find it advantageous to pad both the in- and output of the network with an equal number of zeros. This does not change the intrinsic dimensions of in- and output, but enables the network’s interior layers to embed the data into a larger representation space in a more flexible manner. Secondly, we insert permutation layers between reversible blocks, which shuffle the elements of the subsequent layer’s input in a randomized, but fixed, way. This causes the splits u = [u1,u2] to vary between layers and enhances interaction among the individual variables. Kingma and Dhariwal (2018) use a similar architecture with learned permutations. 3.3 Bi-directional training Invertible networks offer the opportunity to simultaneously optimize for losses on both the inand output domains (Grover et al., 2017), which allows for more effective training. Hereby, we perform forward and backward iterations in an alternating fashion, accumulating gradients from both directions before performing a parameter update. For the forward iteration, we penalize deviations between simulation outcomes yi = s(xi) and network predictions fy(xi) with a loss Ly ( yi, fy(xi) ) . Depending on the problem, Ly can be any supervised loss, e.g. squared loss for regression or cross-entropy for classification. The loss for latent variables penalizes the mismatch between the joint distribution of network outputs q ( y = fy(x), z = fz(x) ) = p(x)/|Jyz| and the product of marginal distributions of simulation outcomes p ( y = s(x) ) = p(x)/|Js| and latents p(z) as Lz ( q(y, z), p(y) p(z) ) . We block the gradients of Lz with respect to y to ensure the resulting updates only affect the predictions of z and do not worsen the predictions of y. Thus, Lz enforces two things: firstly, the generated z must follow the desired normal distribution p(z); secondly, y and z must be independent upon convergence (i.e. p(z |y) = p(z)), and not encode the same information twice. As Lz is implemented by Maximum Mean Discrepancy D (Sec. 3.4), which only requires samples from the distributions to be compared, the Jacobian determinants Jyz and Js do not have to be known explicitly. In appendix Sec. 1, we prove the following theorem: Theorem: If an INN f(x) = [y, z] is trained as proposed, and both the supervised loss Ly=E[(y−fy(x))2] and the unsupervised loss Lz=D ( q(y, z), p(y) p(z) ) reach zero, sampling according to Eq. 1 with g=f−1 returns the true posterior p(x |y∗) for any measurement y∗. Although Ly and Lz are sufficient asymptotically, a small amount of residual dependency between y and z remains after a finite amount of training. This causes q(x |y) to deviate from the true posterior p(x |y). To speed up convergence, we also define a loss Lx on the input side, implemented again by MMD. It matches the distribution of backward predictions q(x) = p ( y = fy(x) ) p ( z = fz(x) ) /|Jx| against the prior data distribution p(x) through Lx ( p(x), q(x) ) . In the appendix, Sec. 1, we prove that Lx is guaranteed to be zero when the forward losses Ly and Lz have converged to zero. Thus, incorporating Lx does not alter the optimum, but improves convergence in practice. 
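Both Lz and Lx are realized as MMD terms. As a point of reference, a batch-wise MMD estimate with the inverse multiquadratic kernel recommended in Sec. 3.4 below could be sketched as follows; this is illustrative only, with simplified bandwidth handling, and is not the code used for the paper.

```python
import torch

def inverse_multiquadratic(a, b, h=0.2):
    # k(x, x') = 1 / (1 + ||(x - x') / h||^2), evaluated for all pairs of rows
    d2 = torch.cdist(a, b) ** 2
    return 1.0 / (1.0 + d2 / h ** 2)

def mmd(p_samples, q_samples, h=0.2):
    """Biased batch estimate of MMD^2 between two sets of samples (rows)."""
    k_pp = inverse_multiquadratic(p_samples, p_samples, h).mean()
    k_qq = inverse_multiquadratic(q_samples, q_samples, h).mean()
    k_pq = inverse_multiquadratic(p_samples, q_samples, h).mean()
    return k_pp + k_qq - 2 * k_pq
```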
Finally, if we use padding on either network side, loss terms are needed to ensure no information is encoded in the additional dimensions. We a) use a squared loss to keep those values close to zero and b) in an additional inverse training pass, overwrite the padding dimensions with noise of the same amplitude and minimize a reconstruction loss, which forces these dimensions to be ignored.

3.4 Maximum mean discrepancy
Maximum Mean Discrepancy (MMD) is a kernel-based method for comparison of two probability distributions that are only accessible through samples (Gretton et al., 2012). While a trainable discriminator loss is often preferred for this task in high-dimensional problems, especially in GAN-based image generation, MMD also works well, is easier to use and much cheaper, and leads to more stable training (Tolstikhin et al., 2017). The method requires a kernel function as a design parameter, and we found that kernels with heavier tails than Gaussian are needed to get meaningful gradients for outliers. We achieved best results with the Inverse Multiquadratic k(x, x′) = 1/(1 + ‖(x − x′)/h‖₂²), reconfirming the suggestion from Tolstikhin et al. (2017). Since the magnitude of the MMD depends on the kernel choice, the relative weights of the losses Lx, Ly, Lz are adjusted as hyperparameters, such that their effect is about equal.

[Figure 2: Viability of INN for a basic inverse problem. Panels: ground truth; INN, all losses; INN, only Ly + Lz; INN, only Lx. The task is to produce the correct (multi-modal) distribution of 2D points x, given only the color label y∗. When trained with all loss terms from Sec. 3.3, the INN output matches ground truth almost exactly (2nd image). The ablations (3rd and 4th image) show that we need Ly and Lz to learn the conditioning correctly, whereas Lx helps us remain faithful to the prior.]

4 Experiments
We first demonstrate the capabilities of INNs on two well-behaved synthetic problems and then show results for two real-world applications from the fields of medicine and astrophysics. Additional details on the datasets and network architectures are provided in the appendix.

4.1 Artificial data
Gaussian mixture model: To test basic viability of INNs for inverse problems, we train them on a standard 8-component Gaussian mixture model p(x). The forward process is very simple: The first four mixture components (clockwise) are assigned label y = red, the next two get label y = blue, and the final two are labeled y = green and y = purple (Fig. 2). The true inverse posteriors p(x | y∗) consist of the mixture components corresponding to the given one-hot-encoded label y∗. We train the INN to directly regress one-hot vectors y using a squared loss Ly, so that we can provide plain one-hot vectors y∗ to the inverse network when sampling p(x | y∗). We observe the following: (i) The INN learns very accurate approximations of the posteriors and does not suffer from mode collapse. (ii) The coupling block architecture does not reduce the network’s representational power – results are similar to standard networks of comparable size (see appendix Sec. 2). (iii) Bidirectional training works best, whereas forward training alone (using only Ly and Lz) captures the conditional relationships properly, but places too much mass in unpopulated regions of x-space. Conversely, pure inverse training (just Lx) learns the correct x-distribution, but loses all conditioning information.
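Putting Secs. 3.3 and 3.4 together, a single bi-directional training step might look like the following sketch. It is illustrative only and not the authors' code: `inn` is assumed to expose a forward call and an `inverse` method (e.g. a stack of coupling blocks), `mmd` is an estimate such as the one sketched above, and the loss weights are hyperparameters.

```python
import torch

def train_step(inn, optimizer, x, y_obs, dim_y, w_y=1.0, w_z=1.0, w_x=1.0):
    """One bi-directional step: gradients from the forward and backward passes
    are accumulated before a single optimizer update."""
    optimizer.zero_grad()

    # forward iteration: x -> [y, z]
    out = inn(x)
    y_pred, z_pred = out[:, :dim_y], out[:, dim_y:]
    L_y = ((y_pred - y_obs) ** 2).mean()                      # supervised loss on y
    # block gradients of L_z w.r.t. y so it only shapes the z predictions
    yz = torch.cat([y_pred.detach(), z_pred], dim=1)
    yz_target = torch.cat([y_obs, torch.randn_like(z_pred)], dim=1)
    L_z = mmd(yz, yz_target)                                  # match q(y, z) to p(y) p(z)
    (w_y * L_y + w_z * L_z).backward()

    # backward iteration: [y, z] -> x
    yz_sample = torch.cat([y_obs, torch.randn_like(z_pred)], dim=1)
    x_gen = inn.inverse(yz_sample)
    L_x = mmd(x_gen, x)                                       # match q(x) to the prior p(x)
    (w_x * L_x).backward()

    optimizer.step()
    return L_y.item(), L_z.item(), L_x.item()
```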
Inverse kinematics: For a task with a more complex and continuous forward process, we simulate a simple inverse kinematics problem in 2D space: An articulated arm moves vertically along a rail and rotates at three joints. These four degrees of freedom constitute the parameters x. Their priors are given by a normal distribution, which favors a pose with 180◦ angles and centered origin. The forward process is to calculate the coordinates of the end point y, given a configuration x. The inverse problem asks for the posterior distribution over all possible inputs x that place the arm’s end point at a given y position. An example for a fixed y∗ is shown in Fig. 3, where we compare our INN to a conditional VAE (see appendix Fig. 7 for conceptual comparison of architectures). Adding Inverse Autoregressive Flow (IAF, Kingma et al., 2016) does not improve cVAE performance in this case (see appendix, Table 2). The y∗ chosen in Fig. 3 is a hard example, as it is unlikely under the prior p(x) (Fig. 3, right) and has a strongly bi-modal posterior p(x |y∗). In this case, due to the computationally cheap forward process, we can use approximate Bayesian computation (ABC, see appendix Sec. 7) to sample from the ground truth posterior. Compared to ground truth, we find that both INN and cVAE recover the two symmetric modes well. However, the true end points of x-samples produced by the cVAE tend to miss the target y∗ by a wider margin. This is because the forward process x→ y is only learned implicitly during cVAE training. See appendix for quantitative analysis and details. 4.2 Real-world applications After demonstrating the viability on synthetic data, we apply our method to two real world problems from medicine and astronomy. While we focus on the medical task in the following, the astronomy application is shown in Fig. 5. In medical science, the functional state of biological tissue is of interest for many applications. Tumors, for example, are expected to show changes in oxygen saturation sO2 (Hanahan and Weinberg, 2011). Such changes cannot be measured directly, but influence the reflectance of the tissue, which can be measured by multispectral cameras (Lu and Fei, 2014). Since ground truth data can not be obtained from living tissue, we create training data by simulating observed spectra y from a tissue model x involving sO2 , blood volume fraction vhb, scattering magnitude amie, anisotropy g and tissue layer thickness d (Wirkert et al., 2016). This model constitutes the forward process, and traditional methods to learn point estimates of the inverse (Wirkert et al., 2016; 2017; Claridge and Hidovic-Rowe, 2013) are already sufficiently reliable to be used in clinical trials. However, these methods can not adequately express uncertainty and ambiguity, which may be vital for an accurate diagnosis. Competitors. We train an INN for this problem, along with two ablations (as in Fig. 2), as well as a cVAE with and without IAF (Kingma et al., 2016) and a network using the method of Kendall and Gal (2017), with dropout sampling and additional aleatoric error terms for each parameter. The latter also provides a point-estimate baseline (classical NN) when used without dropout and error terms, which matches the current state-of-the-art results in Wirkert et al. (2017). Finally, we compare to ABC, approximating p(x |y∗) with the 256 samples closest to y∗. Note that with enough samples ABC would produce the true posterior. 
We performed 50 000 simulations to generate samples for ABC at test time, taking one week on a GPU, but still measure inconsistencies in the posteriors. The learning-based methods are trained within minutes, on a training set of 15 000 samples generated offline. Error measures. We are interested in both the accuracy (point estimates), and the shape of the posterior distributions. For point estimates x̂, i.e. MAP estimates, we compute the deviation from ground-truth values x∗ in terms of the RMSE over test set observations y∗, RMSE = √ Ey∗[‖x̂− x∗‖2]. The scores are reported both for the main parameter of interest sO2 , and the parameter subspace of sO2 , vhb, amie, which we found to be the only recoverable parameters. Furthermore, we check the re-simulation error: We apply the simulation s(x̂) to the point estimate, and compare the simulation outcome to the conditioning y∗. To evaluate the shape of the posteriors, we compute the calibration error for the sampling-based methods, based on the fraction of ground truth inliers αinl. for corresponding α-confidence-region of the marginal posteriors of x. The reported error is the median of |αinl. − α| over all α. All values are computed over 5000 test-set observations y∗, or 1000 observations in the case of re-simulation error. Each posterior uses 4096 samples, or 256 for ABC; all MAP estimates are found using the mean-shift algorithm. Quantitative results. Evaluation results for all methods are presented in Table 1. The INN matches or outperforms other methods in terms of point estimate error. Its accuracy deteriorates slightly when trained without Lx, and entirely when trained without the conditioning losses Ly and Lz, just as in Fig. 2. For our purpose, the calibration error is the most important metric, as it summarizes the correctness of the whole posterior distribution in one number (see appendix Fig. 11). Here, the INN has a big lead over cVAE(-IAF) and Dropout, and even over ABC due to the low ABC sample count. Qualitative results. Fig. 4 shows generated parameter distributions for one fixed measurement y∗, comparing the INN to cVAE-IAF, Dropout sampling and ABC. The three former methods use a sample count of 160 000 to produce smooth curves. Due to the sparse posteri- ors of 256 samples in the case of ABC, kernel density estimation was applied to its results, with a bandwidth of σ = 0.1. The results produced by the INN provide relevant insights: First, we find that the posteriors for layer thickness d and anisotropy g match the shape of their priors, i.e. y∗ holds no information about these parameters – they are unrecoverable. This finding is supported by the ABC results, whereas the other two methods misleadingly suggest a roughly Gaussian posterior. Second, we find that the sampled distributions for the blood volume fraction vhb and scattering amplitude amie are strongly correlated (rightmost plot). This phenomenon is not an analysis artifact, but has a sound physical explanation: As blood volume fraction increases, more light is absorbed inside the tissue. For the sensor to record the same intensities y∗ as before, scattering must be increased accordingly. In Fig. 10 in the appendix, we show how the INN is applied to real multispectral images. 5 Conclusion We have shown that the full posterior of an inverse problem can be estimated with invertible networks, both theoretically and practically on problems from medicine and astrophysics. 
We share the excitement of the application experts to develop INNs as a generic tool, helping them to better interpret their data and models, and to improve experimental setups. As a side effect, our results confirm the findings of others that the restriction to coupling layers does not noticeably reduce the expressive power of the network. In summary, we see the following fundamental advantages of our INN-based method compared to alternative approaches: Firstly, one can learn the forward process and obtain the (more complicated) inverse process ‘for free’, as opposed to e.g. cGANs, which focus on the inverse and learn the forward process only implicitly. Secondly, the learned posteriors are not restricted to a particular parametric form, in contrast to classical variational methods. Lastly, in comparison to ABC and related Bayesian methods, the generation of the INN posteriors is computationally very cheap. In future work, we plan to systematically analyze the properties of different invertible architectures, as well as more flexible models utilizing cycle losses, in the context of representative inverse problem. We are also interested in how our method can be scaled up to higher dimensionalities, where MMD becomes less effective. Acknowledgments LA received funding by the Federal Ministry of Education and Research of Germany, project ‘High Performance Deep Learning Framework’ (No 01IH17002). JK, CR and UK received financial support from the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No 647769). SW and LMH received funding from the European Research Council (ERC) starting grant COMBIOSCOPY (637960). EWP, DR, and RSK acknowledge support by Collaborative Research Centre (SFB 881) ‘The Milky Way System’ (subprojects B1, B2 and B8), the Priority Program SPP 1573 ‘Physics of the Interstellar Medium’ (grant numbers KL 1358/18.1, KL 1358/19.2 and GL 668/2-1) and the European Research Council in the ERC Advanced Grant STARLIGHT (project no. 339177) 1 Proof of correctness of generated posteriors Lemma: If some bijective function f : x→ z transforms a probability density pX(x) to pZ(z), then the inverse function f−1 transforms pZ(z) back to pX(x). Proof: We denote the probability density obtained through the reverse transformation as p∗X(x). Therefore, we have to show that p ∗ X(x) = pX(x). For the forward direction, via the change-of-variables formula, we have pZ(z) = pX ( x = f−1(z) ) ∣∣det[∂z(f−1)]∣∣ (6) with the Jacobian ∂zf−1 ≡ ∂f−1i /∂zj . For the reverse transformation, we have p∗X(x) = pZ ( z = f(x) ) |det[∂xf ]| . (7) We can substitute pZ from Eq. 6 and obtain p∗X(x) = pX ( x = f−1(f(x)) ) ∣∣∣det [(∂z(f−1))(∂xf)]∣∣∣ (8) = pX(x) ∣∣∣det [(∂zf−1)(∂xf)]∣∣∣ (9) = pX(x) |det[I]| = pX(x). (10) In Eq. 9, the Jacobians cancel out due to the inverse function theorem, i.e. the Jacobian ∂z(f −1) is the matrix inverse of ∂xf . Theorem: If an INN f(x) = [y, z] is trained as proposed, and both the supervised loss Ly=E[(y−fy(x))2] and the unsupervised loss Lz=D ( q(y, z), p(y) p(z) ) reach zero, sampling according to Eq. 1 with g=f−1 returns the true posterior p(x |y∗) for any measurement y∗. Proof: We denote the chosen latent distribution as pZ(z), the distribution of observations as pY (y), and the joint distribution of network outputs as q(y, z). As shown by Gretton et al. 
(2012), if the MMD loss converges to 0, the network outputs follow the prescribed distribution: Lz = 0 ⇐⇒ q(y, z) = pY (y) pZ(z) (11) Suppose we take a posterior conditioned on a fixed y∗, i.e. p(x |y∗), and transform it using the forward pass of our perfectly converged INN. From this we obtain an output distribution q∗(y, z). Because Ly = 0, we know that the output distribution of y (marginalized over z) must be q∗(y) = δ(y−y∗). Also, because of the independence between z and y in the output, the distribution of z-outputs is still q∗(z) = pZ(z). So the joint distribution of outputs is q∗(y, z) = δ(y − y∗) pZ(z) (12) When we invert the network, and repeatedly input y∗ while sampling z ∼ pZ(z), this is the same as sampling [y, z] from the q∗(y, z) above. Using the Lemma from above, we know that the inverted network will output samples from p(x |y∗). Corollary: If the conditions of the theorem above are fulfilled, the unsupervised reverse loss Lx = D ( q(x), pX(x) ) between the marginalized outputs of the inverted network, q(x), and the prior data distribution, pX(x), will also be 0. This justifies using the loss on the prior to speed up convergence, without altering the final results. Proof: Due to the theorem, the estimated posteriors generated by the INN are correct, i.e. q(x |y∗) = p(x |y∗). If they are marginalized over observations y∗ from the training data, then q(x) will be equal to pX(x) by definition. As shown by Gretton et al. (2012), this is equivalent to Lx = 0. 2 Artificial data – Gaussian mixture In Sec. 4.1, we demonstrate that the proposed INN can approximate the true posteriors very well and is not hindered by the required coupling block architecture. Here we show how some existing methods do on the same task, using neural networks of similar size as the INN. cGAN Training a conditional GAN of network size comparable to the INN (counting only the generator) and only two noise dimensions turned out to be challenging. Even with additional pre-training to avoid mode collapse, the individual modes belonging to one label are reduced to nearly one-dimensional structures. Larger cGAN In order to match the results of the INN, we trained a more complex cGAN with 2M parameters instead of the previous 10K, and a latent dimension of 128, instead of 2. To prevent mode collapse, we introduced an additional regularization: an extra loss term forces the variance of generator outputs to match the variance of the training data prior. With these changes, the cGAN can be seen to recover the posteriors reasonably well. Generator + MMD Another option is to keep the cGAN generator the same size as our INN, but replace the discriminator with an MMD loss (cf. Sec. 3.4). This loss receives a concatenation of the generator output x and the label y it was supplied with, and compares these batch-wise with the concatenation of ground truth (x,y)-pairs. Note that in contrast to this, the corresponding MMD loss of the INN only receives x, and no information about y. For this small toy problem, we find that the hand-crafted MMD loss dramatically improves results compared to the smaller learned discriminator. cVAE We also compare to a conditional Variational Autoencoder of same total size as the INN. There is some similarity between the training setup of our method (Fig. 7, right) and that of cVAE (Fig. 7, left), as the forward and inverse pass of an INN can also be seen as an encoder-decoder pair. 
The main differences are that the cVAE learns the relationship x → y only indirectly, since there is no explicit loss for it, and that the INN requires no reconstruction loss, since it is bijective by construction. cVAE-IAF We adapt the cVAE to use Inverse Autoregressive Flow (Kingma et al., 2016) between the encoder and decoder. On the Gaussian mixture toy problem, the trained cVAE-IAF generates correct posteriors on par with our INN (see Fig. 6). Dropout sampling The method of dropout sampling with learned error terms is by construction not able to produce multi-modal outputs, and therefore fails on this task. 2.1 Latent space analysis To analyze how the latent space of our INN is structured for this task, we choose a fixed label y∗ and sample z from a dense grid. For each z, we compute x through our inverse network and colorize this point in latent (z) space according to the distance from the closest mode in x-space. We can see that our network learns to shape the latent space such that each mode receives the expected fraction of samples (Fig. 8). 3 Artificial data – inverse kinematics A short video demonstrating the structure of our INN’s latent space can be found under https://gfycat.com/SoggyCleanHog, for a slightly different arm setup. The dataset is constucted using gaussian priors xi ∼ N (0, σi), with σ1 = 0.25 and σ2 = σ3 = σ4 = 0.5 ∧ = 28.65◦. The forward process is given by y1 = x1 + l1 sin(x2) + l2 sin(x3 − x2) + l3 sin(x4 − x2 − x3) (13) y2 = l1 cos(x2) + l2 cos(x3 − x2) + l3 cos(x4 − x2 − x3) (14) with the arm lenghts l1 = 0.5, l2 = 0.5, l3 = 1.0. To judge the quality of posteriors, we quantify both the re-simulation error and the calibration error over the test set, as in Sec. 4.2 of the paper. Because of the cheap simulation, we average the re-simulation error over the whole posterior, and not only the MAP estimate. In Table 2, we find that the INN has a clear advantage in both metrics, confirming the observations from Fig. 3. 4 Multispectral measurements of biological tissue The following figure shows the results when the INN trained in Sec. 4.2 is applied pixel-wise to multispectral endoscopic footage. In addition to estimating the oxygenation sO2 , we measure the uncertainty in the form of the 68% confidence interval. a) Median sO2 0.40 0.45 0.50 0.55 0.60 0.65 0.70 b) Est. uncertainty 0.05 0.06 0.07 0.08 0.09 0.10 0.11 0.12 0.13 c) RGB image 5 Star cluster spectral data Results for one specific y are shown in Fig. 5. Note that our network recovers a decidedly multimodal distribution of x that visibly deviates from the prior p(x). Note also the strong correlations in the system. For example, the measurements y∗ investigated may correspond to a young cluster with large expansion velocity, or to an older system that expands slowly. Finding these ambiguities in p(x |y∗) and identifying degeneracies in the underlying model are pivotal aspects of astrophysical research, and a method to effectively approximate full posterior distributions has the potential to lead to a major breakthrough in this field. 6 Calibration curve for tissue parameter estimation In Sec. 4.2, we report the median calibration error for each method. The following figure plots the calibration error, qinliers− q, against the level of confidence q. Negative values mean that a model is overconfident, while positive values say the opposite. 
7 Approximate Bayesian computation (ABC) While there is a whole field of research concerned with ABC approaches and their efficiencyaccuracy tradeoffs, our use of the method here is limited to the essential principle of rejection sampling. When we require N samples of x from the posterior p(x |y∗) conditioned on some y∗, there are two basic ways to obtain them: Threshold: We set an acceptance threshold , repeatedly draw x-samples from the prior, compute the corresponding y-values (via simulation) and keep those where dist(y,y∗) < , until we have accepted N samples. The smaller we want , the more simulations have to be run, which is why we use this approach only for the experiment in Sec. 4.1, where we can afford to run the forward process millions or even billions of times. Quantile: Alternatively, we choose what quantile q of samples shall be accepted, and then run exactly N/q simulations. All sampled pairs (x,y) are sorted by dist(y,y∗) and the N closest to y∗ form the posterior. This allows for a more predictable runtime when the simulations are costly, as in the medical application in Sec. 4.2 where q = 0.005. 8 Details of datasets and network architectures Table 3 summarizes the datasets used throughout the paper. The architecture details are given in the following. 8.1 Artificial data – Gaussian mixture INN: 3 invertible blocks, 3 fully connected layers per affine coefficient function with ReLU activation functions in the intermediate layers, zero padding to a nominal dimension of 16, Adam optimizer, decaying learning rate from 10−3 to 10−5, batch size 200. The inverse multiquadratic kernel was used for MMD, with h = 0.2 in both x- and z-space. Dropout sampling: 6 fully connected layers with ReLU activations, Adam optimizer, learning rate decay from 10−3 to 10−5, batch size 200, dropout probability p = 0.2. cGAN: 6 fully connected layers for the generator and 8 for the discriminator, all with leaky ReLU activations. Adam was used for the generator, SGD for the discriminator, learning rates decaying from 2 · 10−3 to 2 · 10−6, batch size 256. Initially 100 iterations training with L = 1N ∑ i ‖g(zi, yi)− xi‖22, to separate the differently labeled modes, followed by pure GAN training. Larger cGAN: 2 fully connected layers with 1024 neurons each for discriminator and generator, batch size 512, Adam optimizer with learning rate 8 · 10−4 for the generator, SGD with learning rate 1.2 ·10−3 and momentum 0.05 for the discriminator, 1.6 ·10−3 weight decay for both, 0.25 dropout probabiliy for the generator at training and test time. Equal weighting of discriminator loss and penalty of output variance L = (Vari[g(zi, yi)]−Vari[xi])2 Generator with MMD: 8 fully connected layers with leaky ReLU activations, Adam optimizer, decaying learning rate from 10−3 to 10−6, batch size 256. Inverse multiquadratic kernel, h = 0.5. cVAE: 3 fully connected layers each for encoder and decoder, ReLU activations, learning rate 2 · 10−2, decay to 2.5 · 10−5, Adam optimizer, batch size 25, reconstruction loss weighted 50:1 versus KL divergence loss. 8.2 Artificial data – inverse kinematics INN: 6 affine coupling blocks with 3 fully connected layers each and leaky ReLU activations. Adam optimizer, decaying learning rate from 10−2 to 10−4, multiquadratic kernel with h = 1.2. cVAE: 4 fully connected layers each for encoder and decoder, ReLU activations, learning rate 5 ·10−3, decay to 1.6 ·10−5, Adam optimizer, batch size 250, reconstruction loss weighted 15:1 versus KL divergence loss. 
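For reference, the forward process of the inverse kinematics benchmark (Eqs. 13 and 14 in appendix Sec. 3) and its Gaussian prior are simple enough to state as a short sketch. This is an illustration written from the equations above, not the authors' code, and the function names are ours.

```python
import numpy as np

L1, L2, L3 = 0.5, 0.5, 1.0                     # arm segment lengths
SIGMA = np.array([0.25, 0.5, 0.5, 0.5])        # prior standard deviations for x1..x4

def sample_prior(n):
    return np.random.randn(n, 4) * SIGMA

def forward_kinematics(x):
    """Forward process of appendix Sec. 3 (Eqs. 13-14): end point of the articulated arm."""
    x1, x2, x3, x4 = x[:, 0], x[:, 1], x[:, 2], x[:, 3]
    y1 = x1 + L1 * np.sin(x2) + L2 * np.sin(x3 - x2) + L3 * np.sin(x4 - x2 - x3)
    y2 = L1 * np.cos(x2) + L2 * np.cos(x3 - x2) + L3 * np.cos(x4 - x2 - x3)
    return np.stack([y1, y2], axis=1)

# Example: a cheap approximate ground-truth posterior for one observation y_star via ABC:
# x = sample_prior(1_000_000); y = forward_kinematics(x)
# keep the samples with the smallest ||y - y_star|| as an approximate posterior.
```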
8.3 Functional parameter estimation from multispectral tissue images INN: 3 invertible blocks, 4 fully connected layers per affine coefficient function with leaky ReLUs in the intermediate layers, zero padding to double the original width. Adam optimizer, learning rate decay from 2 · 10−3 to 2 · 10−5, batch size 200. Inverse multiquadratic kernel with h = 1, weighted MMD terms by observation distance with decaying γ = 0.2 to 0. Dropout sampling/point estimate: 8 fully connected layers, ReLU activations, Adam with decaying learning rate from 10−2 to 10−5, batch size 100, dropout probability p = 0.2. cVAE: 4 fully connected layers each for encoder and decoder, ReLU activations, learning rate 10−3, decay to 3.2 · 10−6, Adam optimizer, batch size 25, reconstruction loss weighted 103:1 versus KL divergence loss. 8.4 Impact of star clusters on the dynamical evolution of the galactic gas INN: 5 invertible blocks, 4 fully connected layers per affine coefficient function with leaky ReLUs in the intermediate layers, no additional zero padding. Adam optimizer with decaying learning rate from 2 · 10−3 to 1.5 · 10−6, batch size 500. Kernel for latent space: k(z, z′) = exp(−‖(z − z′)/h‖2) with h = 7.1. Kernel for x-space: k(x,x′) = −‖x − x′‖1/41/2. Due to the complex nature of the prior distributions, this was the kernel found to capture the details correctly, whereas the peak of the inverse multiquadratic kernel was too broad for this purpose.
1. What is the main contribution of the paper regarding posterior inference? 2. How does the proposed approach differ from other methods of approximate Bayesian computation? 3. Can you explain the handling of discrete output in the Mixture of Gaussians experiment? 4. How does the method perform compared to other approaches in real-world applications? 5. What are the strengths and weaknesses of the experimental section?
Review
Review
The authors propose in this paper an approach for learning models with tractable approximate posterior inference. The paper is well motivated (fast and accurate posterior inference) and the construction of the solutions (invertible architecture, appending vectors to input and output, choice of cost function) well described. From my understanding, it seems this method is also compatible with other methods of approximate Bayesian Computation (ABC).

Concerning the experimental section:
- The Mixture of Gaussians experiment is a good illustration of how the choice of cost functions influences the solution. However, I do not understand how the *discrete* output y is handled. Is it indeed a discrete output (problem with lack of differentiability)? Softmax probability? Other modelling choice?
- The inverse kinematics is an interesting illustration of the potential advantage of this method over the conditional VAE and of how close it is to ABC, which can be reasonably computed for this problem.
- For the medical application, the INN outperforms other methods (except sometimes for ABC, which is far more expensive, or the direct predictor, which doesn’t provide uncertainty estimates) over some metrics such as the error on parameter recovery (Table 1) and the calibration error, and does indeed have an approximate posterior which seems to correspond to the ABC solution better. I’m not sure I understand what we are supposed to learn from the astrophysics experiments.

The method proposed and the general problem it aims at tackling seem interesting enough, and the toy experiments demonstrate well the advantage of the method. However, the real-world experiments are not necessarily the easiest to read.

EDIT: the concerns were mostly addressed in the revision.
ICLR
Title Analyzing Inverse Problems with Invertible Neural Networks Abstract For many applications, in particular in natural science, the task is to determine hidden system parameters from a set of measurements. Often, the forward process from parameterto measurement-space is well-defined, whereas the inverse problem is ambiguous: multiple parameter sets can result in the same measurement. To fully characterize this ambiguity, the full posterior parameter distribution, conditioned on an observed measurement, has to be determined. We argue that a particular class of neural networks is well suited for this task – so-called Invertible Neural Networks (INNs). Unlike classical neural networks, which attempt to solve the ambiguous inverse problem directly, INNs focus on learning the forward process, using additional latent output variables to capture the information otherwise lost. Due to invertibility, a model of the corresponding inverse process is learned implicitly. Given a specific measurement and the distribution of the latent variables, the inverse pass of the INN provides the full posterior over parameter space. We prove theoretically and verify experimentally, on artificial data and real-world problems from medicine and astrophysics, that INNs are a powerful analysis tool to find multi-modalities in parameter space, uncover parameter correlations, and identify unrecoverable parameters. 1 Introduction When analyzing complex physical systems, a common problem is that the system parameters of interest cannot be measured directly. For many of these systems, scientists have developed sophisticated theories on how measurable quantities y arise from the hidden parameters x. We will call such mappings the forward process. However, the inverse process is required to infer the hidden states of a system from measurements. Unfortunately, the inverse is often both intractable and ill-posed, since crucial information is lost in the forward process. To fully assess the diversity of possible inverse solutions for a given measurement, an inverse solver should be able to estimate the complete posterior of the parameters, conditioned on an observation. This makes it possible to quantify uncertainty, reveal multi-modal distributions, and identify degenerate and unrecoverable parameters – all highly relevant for applications in natural science. In this paper, we ask if invertible neural networks (INNs) are a suitable model class for this task. INNs are characterized by three properties: (i) The mapping from inputs to outputs is bijective, i.e. its inverse exists, (ii) both forward and inverse mapping are efficiently computable, and (iii) both mappings have a tractable Jacobian, which allows explicit computation of posterior probabilities. Networks that are invertible by construction offer a unique opportunity: We can train them on the well-understood forward process x→ y and get the inverse y→ x for free by running them backwards at prediction time. To counteract the inherent information loss of the forward process, we introduce additional latent output variables z, which capture the information about x that is not contained in y. Thus, our INN learns to associate hidden parameter values x with unique pairs [y, z] of measurements and latent variables. Forward training optimizes the mapping [y, z] = f(x) and implicitly determines its inverse x = f−1(y, z) = g(y, z). Additionally, we make sure that the density p(z) of the latent variables is shaped as a Gaussian distribution. 
Thus, the INN represents the desired posterior p(x |y) by a deterministic function x = g(y, z) that transforms (“pushes”) the known distribution p(z) to x-space, conditional on y. Compared to standard approaches (see Fig. 1, left), INNs circumvent a fundamental difficulty of learning inverse problems: Defining a sensible supervised loss for direct posterior learning is problematic since it requires prior knowledge about that posterior’s behavior, constituting a kind of hen-end-egg problem. If the loss does not match the possibly complicated (e.g. multimodal) shape of the posterior, learning will converge to incorrect or misleading solutions. Since the forward process is usually much simpler and better understood, forward training diminishes this difficulty. Specifically, we make the following contributions: • We show that the full posterior of an inverse problem can be estimated with invertible networks, both theoretically in the asymptotic limit of zero loss, and practically on synthetic and real-world data from astrophysics and medicine. • The architectural restrictions imposed by invertibility do not seem to have detrimental effects on our network’s representational power. • While forward training is sufficient in the asymptotic limit, we find that a combination with unsupervised backward training improves results on finite training sets. • In our applications, our approach to learning the posterior compares favourably to approximate Bayesian computation (ABC) and conditional VAEs. This enables identifying unrecoverable parameters, parameter correlations and multimodalities. 2 Related work Modeling the conditional posterior of an inverse process is a classical statistical task that can in principle be solved by Bayesian methods. Unfortunately, exact Bayesian treatment of real-world problems is usually intractable. The most common (but expensive) solution is to resort to sampling, typically by a variant of Markov Chain Monte Carlo (Robert and Casella, 2004; Gamerman and Lopes, 2006). If a model y = s(x) for the forward process is available, approximate Bayesian computation (ABC) is often preferred, which embeds the forward model in a rejection sampling scheme for the posterior p(x|y) (Sunnåker et al., 2013; Lintusaari et al., 2017; Wilkinson, 2013). Variational methods offer a more efficient alternative, approximating the posterior by an optimally chosen member of a tractable distribution family (Blei et al., 2017). Neural networks can be trained to predict accurate sufficient statistics for parametric posteriors (Papamakarios and Murray, 2016; Siddharth et al., 2017), or can be designed to learn a mean-field distribution for the network’s weights via dropout variational inference (Gal and Ghahramani, 2015; Kingma et al., 2015). Both ideas can be combined (Kendall and Gal, 2017) to differentiate between data-related and model-related uncertainty. However, the restriction to limited distribution families fails if the true distribution is too complex (e.g. when it requires multiple modes to represent ambiguous or degenerate solutions) and essentially counters the ability of neural networks to act as universal approximators. Conditional GANs (cGANs; Mirza and Osindero, 2014; Isola et al., 2017) overcome this restriction in principle, but often lack satisfactory diversity in practice (Zhu et al., 2017b). For our tasks, conditional variational autoencoders (cVAEs; Sohn et al., 2015) perform better than cGANs, and are also conceptually closer to our approach (see appendix Sec. 
2), and hence serve as a baseline in our experiments. Generative modeling via learning of a non-linear transformation between the data distribution and a simple prior distribution (Deco and Brauer, 1995; Hyvärinen and Pajunen, 1999) has the potential to solve these problems. Today, this approach is often formulated as a normalizing flow (Tabak et al., 2010; Tabak and Turner, 2013), which gradually transforms a normal density into the desired data density and relies on bijectivity to ensure the mapping’s validity. These ideas were applied to neural networks by Deco and Brauer (1995); Rippel and Adams (2013); Rezende and Mohamed (2015) and refined by Tomczak and Welling (2016); Berg et al. (2018); Trippe and Turner (2018). Today, the most common realizations use auto-regressive flows, where the density is decomposed according to the Bayesian chain rule (Kingma et al., 2016; Huang et al., 2018; Germain et al., 2015; Papamakarios et al., 2017; Oord et al., 2016; Kolesnikov and Lampert, 2017; Salimans et al., 2017; Uria et al., 2016). These networks successfully learned unconditional generative distributions for artificial data and standard image sets (e.g. MNIST, CelebA, LSUN bedrooms), and some encouraging results for conditional modeling exist as well (Oord et al., 2016; Salimans et al., 2017; Papamakarios et al., 2017; Uria et al., 2016). These normalizing flows possess property (i) of an INN, and are usually designed to fulfill requirement (iii) as well. In other words, flow-based networks are invertible in principle, but the actual computation of their inverse is too costly to be practical, i.e. INN property (ii) is not fulfilled. This precludes the possibility of bi-directional or cyclic training, which has been shown to be very beneficial in generative adversarial nets and auto-encoders (Zhu et al., 2017a; Dumoulin et al., 2016; Donahue et al., 2017; Teng et al., 2018). In fact, optimization for cycle consistency forces such models to converge to invertible architectures, making fully invertible networks a natural choice. True INNs can be built using coupling layers, as introduced in the NICE (Dinh et al., 2014) and RealNVP (Dinh et al., 2016) architectures. Despite their simple design and training, these networks were rarely studied: Gomez et al. (2017) used a NICE-like design as a memory-efficient alternative to residual networks, Jacobsen et al. (2018) demonstrated that the lack of information reduction from input to representation does not cause overfitting, and Schirrmeister et al. (2018) trained such a network as an adverserial autoencoder. Danihelka et al. (2017) showed that minimization of an adversarial loss is superior to maximum likelihood training in RealNVPs, whereas the Flow-GAN of Grover et al. (2017) performs even better using bidirectional training, a combination of maximum likelihood and adverserial loss. The Glow architecture by Kingma and Dhariwal (2018) incorporates invertible 1x1 convolutions into RealNVPs to achieve impressive image manipulations. This line of research inspired us to extend RealNVPs for the task of computing posteriors in real-world inverse problems from natural and life sciences. 3 Methods 3.1 Problem specification We consider a common scenario in natural and life sciences: Researchers are interested in a set of variables x ∈ RD describing some phenomenon of interest, but only variables y ∈ RM can actually be observed, for which the theory of the respective research field provides a model y = s(x) for the forward process. 
Since the transformation from x to y incurs an information loss, the intrinsic dimension m of y is in general smaller than D, even if the nominal dimensions satisfy M > D. Hence we want to express the inverse model as a conditional probability p(x | y), but its mathematical derivation from the forward model is intractable in the applications we are going to address. We aim at approximating p(x | y) by a tractable model q(x | y), taking advantage of the possibility to create an arbitrary amount of training data {(xi, yi)}Ni=1 from the known forward model s(x) and a suitable prior p(x). While this would allow for training of a standard regression model, we want to approximate the full posterior probability. To this end, we introduce a latent random variable z ∈ RK drawn from a multi-variate standard normal distribution and reparametrize q(x | y) in terms of a deterministic function g of y and z, represented by a neural network with parameters θ: x = g(y, z; θ) with z ∼ p(z) = N(z; 0, IK). (1) Note that we distinguish between hidden parameters x representing unobservable real-world properties and latent variables z carrying information intrinsic to our model. Choosing a Gaussian prior for z poses no additional limitation, as proven by the theory of non-linear independent component analysis (Hyvärinen and Pajunen, 1999). In contrast to standard methodology, we propose to learn the model g(y, z; θ) of the inverse process jointly with a model f(x; θ) approximating the known forward process s(x): [y, z] = f(x; θ) = [fy(x; θ), fz(x; θ)] = g−1(x; θ) with fy(x; θ) ≈ s(x). (2) Functions f and g share the same parameters θ and are implemented by a single invertible neural network. Our experiments show that joint bi-directional training of f and g avoids many complications arising in e.g. cVAEs or Bayesian neural networks, which have to learn the forward process implicitly. The relation f = g−1 is enforced by the invertible network architecture, provided that the nominal and intrinsic dimensions of both sides match. When m ≤ M denotes the intrinsic dimension of y, the latent variable z must have dimension K = D − m, assuming that the intrinsic dimension of x equals its nominal dimension D. If the resulting nominal output dimension M + K exceeds D, we augment the input with a vector x0 ∈ RM+K−D of zeros and replace x with the concatenation [x, x0] everywhere. Combining these definitions, our network expresses q(x | y) as q(x = g(y, z; θ) | y) = p(z) |Jx|−1, with Jx = det(∂g(y, z; θ)/∂[y, z]) evaluated at [y, fz(x)], (3) with Jacobian determinant Jx. When using coupling layers, according to Dinh et al. (2016), computation of Jx is simple, as each transformation has a triangular Jacobian matrix. 3.2 Invertible architecture To create a fully invertible neural network, we follow the architecture proposed by Dinh et al. (2016): The basic unit of this network is a reversible block consisting of two complementary affine coupling layers. Hereby, the block’s input vector u is split into two halves, u1 and u2, which are transformed by an affine function with coefficients exp(si) and ti (i ∈ {1, 2}), using element-wise multiplication (⊙) and addition: v1 = u1 ⊙ exp(s2(u2)) + t2(u2), v2 = u2 ⊙ exp(s1(v1)) + t1(v1). (4) Given the output v = [v1, v2], these expressions are trivially invertible: u2 = (v2 − t1(v1)) ⊙ exp(−s1(v1)), u1 = (v1 − t2(u2)) ⊙ exp(−s2(u2)). (5) Importantly, the mappings si and ti can be arbitrarily complicated functions of v1 and u2 and need not themselves be invertible.
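To make the reversible block of Eqs. (4)–(5) concrete, here is a minimal NumPy sketch (not the authors' code); the subnetworks s1, s2, t1, t2 are stand-in callables, and the dimensions and initialization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # width of each half of the block input u = [u1, u2]

def make_subnet():
    # Stand-in for the fully connected s_i / t_i subnetworks; they need not be invertible.
    W, b = rng.normal(scale=0.1, size=(d, d)), rng.normal(scale=0.1, size=d)
    return lambda x: np.tanh(x @ W + b)

s1, s2, t1, t2 = make_subnet(), make_subnet(), make_subnet(), make_subnet()

def coupling_forward(u1, u2):
    # Eq. (4): two complementary affine coupling layers.
    v1 = u1 * np.exp(s2(u2)) + t2(u2)
    v2 = u2 * np.exp(s1(v1)) + t1(v1)
    return v1, v2

def coupling_inverse(v1, v2):
    # Eq. (5): the same block evaluated backwards.
    u2 = (v2 - t1(v1)) * np.exp(-s1(v1))
    u1 = (v1 - t2(u2)) * np.exp(-s2(u2))
    return u1, u2

u1, u2 = rng.normal(size=d), rng.normal(size=d)
v1, v2 = coupling_forward(u1, u2)
u1_rec, u2_rec = coupling_inverse(v1, v2)
assert np.allclose(u1, u1_rec) and np.allclose(u2, u2_rec)  # exact invertibility
```

Because each half is only rescaled and shifted by functions of the other half, the inverse is available in closed form without ever inverting s_i or t_i, which is what makes both directions of an INN cheap to evaluate.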
In our implementation, they are realized by a succession of several fully connected layers with leaky ReLU activations. A deep invertible network is composed of a sequence of these reversible blocks. To increase model capacity, we apply a few simple extensions to this basic architecture. Firstly, if the dimension D is small, but a complex transformation has to be learned, we find it advantageous to pad both the in- and output of the network with an equal number of zeros. This does not change the intrinsic dimensions of in- and output, but enables the network’s interior layers to embed the data into a larger representation space in a more flexible manner. Secondly, we insert permutation layers between reversible blocks, which shuffle the elements of the subsequent layer’s input in a randomized, but fixed, way. This causes the splits u = [u1,u2] to vary between layers and enhances interaction among the individual variables. Kingma and Dhariwal (2018) use a similar architecture with learned permutations. 3.3 Bi-directional training Invertible networks offer the opportunity to simultaneously optimize for losses on both the inand output domains (Grover et al., 2017), which allows for more effective training. Hereby, we perform forward and backward iterations in an alternating fashion, accumulating gradients from both directions before performing a parameter update. For the forward iteration, we penalize deviations between simulation outcomes yi = s(xi) and network predictions fy(xi) with a loss Ly ( yi, fy(xi) ) . Depending on the problem, Ly can be any supervised loss, e.g. squared loss for regression or cross-entropy for classification. The loss for latent variables penalizes the mismatch between the joint distribution of network outputs q ( y = fy(x), z = fz(x) ) = p(x)/|Jyz| and the product of marginal distributions of simulation outcomes p ( y = s(x) ) = p(x)/|Js| and latents p(z) as Lz ( q(y, z), p(y) p(z) ) . We block the gradients of Lz with respect to y to ensure the resulting updates only affect the predictions of z and do not worsen the predictions of y. Thus, Lz enforces two things: firstly, the generated z must follow the desired normal distribution p(z); secondly, y and z must be independent upon convergence (i.e. p(z |y) = p(z)), and not encode the same information twice. As Lz is implemented by Maximum Mean Discrepancy D (Sec. 3.4), which only requires samples from the distributions to be compared, the Jacobian determinants Jyz and Js do not have to be known explicitly. In appendix Sec. 1, we prove the following theorem: Theorem: If an INN f(x) = [y, z] is trained as proposed, and both the supervised loss Ly=E[(y−fy(x))2] and the unsupervised loss Lz=D ( q(y, z), p(y) p(z) ) reach zero, sampling according to Eq. 1 with g=f−1 returns the true posterior p(x |y∗) for any measurement y∗. Although Ly and Lz are sufficient asymptotically, a small amount of residual dependency between y and z remains after a finite amount of training. This causes q(x |y) to deviate from the true posterior p(x |y). To speed up convergence, we also define a loss Lx on the input side, implemented again by MMD. It matches the distribution of backward predictions q(x) = p ( y = fy(x) ) p ( z = fz(x) ) /|Jx| against the prior data distribution p(x) through Lx ( p(x), q(x) ) . In the appendix, Sec. 1, we prove that Lx is guaranteed to be zero when the forward losses Ly and Lz have converged to zero. Thus, incorporating Lx does not alter the optimum, but improves convergence in practice. 
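Since Lz and Lx are both realized through MMD (the kernel choice is detailed in Sec. 3.4 below), a small sample-based sketch may be helpful; the bandwidth and the toy batches here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def imq_kernel(a, b, h=1.0):
    # Inverse multiquadratic kernel k(x, x') = 1 / (1 + ||(x - x')/h||^2), cf. Sec. 3.4.
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    return 1.0 / (1.0 + d2 / h**2)

def mmd2(x, y, h=1.0):
    # Biased sample estimate of MMD^2 between two batches x and y.
    kxx = imq_kernel(x, x, h).mean()
    kyy = imq_kernel(y, y, h).mean()
    kxy = imq_kernel(x, y, h).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(1)
z_net = rng.normal(loc=0.3, size=(256, 2))   # e.g. latent outputs f_z(x) of the network
z_prior = rng.normal(size=(256, 2))          # samples from the target prior p(z)
print(mmd2(z_net, z_prior, h=0.5))           # shrinks towards 0 as the two distributions match
```

The estimate only needs samples from the two distributions being compared, which is why the Jacobian determinants Jyz and Js never have to be evaluated explicitly.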
Finally, if we use padding on either network side, loss terms are needed to ensure no information is encoded in the additional dimensions. We a) use a squared loss to keep those values close to zero and b) in an additional inverse training pass, overwrite the padding dimensions with noise of the same amplitude and minimize a reconstruction loss, which forces these dimensions to be ignored. 3.4 Maximum mean discrepancy Maximum Mean Discrepancy (MMD) is a kernel-based method for comparison of two probability distributions that are only accessible through samples (Gretton et al., 2012). While a trainable discriminator loss is often preferred for this task in high-dimensional problems, especially in GAN-based image generation, MMD also works well, is easier to use and much cheaper, and leads to more stable training (Tolstikhin et al., 2017). The method requires a kernel function as a design parameter, and we found that kernels with heavier tails than Gaussian are needed to get meaningful gradients for outliers. We achieved best results with the inverse multiquadratic kernel k(x, x′) = 1/(1 + ‖(x − x′)/h‖22), reconfirming the suggestion from Tolstikhin et al. (2017). Since the magnitude of the MMD depends on the kernel choice, the relative weights of the losses Lx, Ly, Lz are adjusted as hyperparameters, such that their effect is about equal. Figure 2: Viability of INN for a basic inverse problem (panels: ground truth; INN, all losses; INN, only Ly + Lz; INN, only Lx). The task is to produce the correct (multi-modal) distribution of 2D points x, given only the color label y∗. When trained with all loss terms from Sec. 3.3, the INN output matches ground truth almost exactly (2nd image). The ablations (3rd and 4th image) show that we need Ly and Lz to learn the conditioning correctly, whereas Lx helps us remain faithful to the prior. 4 Experiments We first demonstrate the capabilities of INNs on two well-behaved synthetic problems and then show results for two real-world applications from the fields of medicine and astrophysics. Additional details on the datasets and network architectures are provided in the appendix. 4.1 Artificial data Gaussian mixture model: To test basic viability of INNs for inverse problems, we train them on a standard 8-component Gaussian mixture model p(x). The forward process is very simple: The first four mixture components (clockwise) are assigned label y = red, the next two get label y = blue, and the final two are labeled y = green and y = purple (Fig. 2). The true inverse posteriors p(x | y∗) consist of the mixture components corresponding to the given one-hot-encoded label y∗. We train the INN to directly regress one-hot vectors y using a squared loss Ly, so that we can provide plain one-hot vectors y∗ to the inverse network when sampling p(x | y∗). We observe the following: (i) The INN learns very accurate approximations of the posteriors and does not suffer from mode collapse. (ii) The coupling block architecture does not reduce the network’s representational power – results are similar to standard networks of comparable size (see appendix Sec. 2). (iii) Bidirectional training works best, whereas forward training alone (using only Ly and Lz) captures the conditional relationships properly, but places too much mass in unpopulated regions of x-space. Conversely, pure inverse training (just Lx) learns the correct x-distribution, but loses all conditioning information.
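For reference, training data for this Gaussian mixture experiment can be generated directly from the forward process; the component layout below (means on a circle) is an assumption made purely for illustration, since the exact coordinates are not specified here, while the label assignment follows the description above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Eight mixture components; a circular layout is assumed here for illustration.
angles = 2 * np.pi * np.arange(8) / 8
means = 3.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
# Clockwise label assignment as in Sec. 4.1: four components -> red, two -> blue, one -> green, one -> purple.
component_label = np.array([0, 0, 0, 0, 1, 1, 2, 3])

def sample_gmm(n, std=0.3):
    comp = rng.integers(0, 8, size=n)
    x = means[comp] + std * rng.normal(size=(n, 2))
    y = np.eye(4)[component_label[comp]]   # one-hot label: the "measurement" of the forward process
    return x.astype(np.float32), y.astype(np.float32)

x_train, y_train = sample_gmm(10_000)
print(x_train.shape, y_train.shape)        # (10000, 2) (10000, 4)
```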
Inverse kinematics: For a task with a more complex and continuous forward process, we simulate a simple inverse kinematics problem in 2D space: An articulated arm moves vertically along a rail and rotates at three joints. These four degrees of freedom constitute the parameters x. Their priors are given by a normal distribution, which favors a pose with 180◦ angles and centered origin. The forward process is to calculate the coordinates of the end point y, given a configuration x. The inverse problem asks for the posterior distribution over all possible inputs x that place the arm’s end point at a given y position. An example for a fixed y∗ is shown in Fig. 3, where we compare our INN to a conditional VAE (see appendix Fig. 7 for conceptual comparison of architectures). Adding Inverse Autoregressive Flow (IAF, Kingma et al., 2016) does not improve cVAE performance in this case (see appendix, Table 2). The y∗ chosen in Fig. 3 is a hard example, as it is unlikely under the prior p(x) (Fig. 3, right) and has a strongly bi-modal posterior p(x |y∗). In this case, due to the computationally cheap forward process, we can use approximate Bayesian computation (ABC, see appendix Sec. 7) to sample from the ground truth posterior. Compared to ground truth, we find that both INN and cVAE recover the two symmetric modes well. However, the true end points of x-samples produced by the cVAE tend to miss the target y∗ by a wider margin. This is because the forward process x→ y is only learned implicitly during cVAE training. See appendix for quantitative analysis and details. 4.2 Real-world applications After demonstrating the viability on synthetic data, we apply our method to two real world problems from medicine and astronomy. While we focus on the medical task in the following, the astronomy application is shown in Fig. 5. In medical science, the functional state of biological tissue is of interest for many applications. Tumors, for example, are expected to show changes in oxygen saturation sO2 (Hanahan and Weinberg, 2011). Such changes cannot be measured directly, but influence the reflectance of the tissue, which can be measured by multispectral cameras (Lu and Fei, 2014). Since ground truth data can not be obtained from living tissue, we create training data by simulating observed spectra y from a tissue model x involving sO2 , blood volume fraction vhb, scattering magnitude amie, anisotropy g and tissue layer thickness d (Wirkert et al., 2016). This model constitutes the forward process, and traditional methods to learn point estimates of the inverse (Wirkert et al., 2016; 2017; Claridge and Hidovic-Rowe, 2013) are already sufficiently reliable to be used in clinical trials. However, these methods can not adequately express uncertainty and ambiguity, which may be vital for an accurate diagnosis. Competitors. We train an INN for this problem, along with two ablations (as in Fig. 2), as well as a cVAE with and without IAF (Kingma et al., 2016) and a network using the method of Kendall and Gal (2017), with dropout sampling and additional aleatoric error terms for each parameter. The latter also provides a point-estimate baseline (classical NN) when used without dropout and error terms, which matches the current state-of-the-art results in Wirkert et al. (2017). Finally, we compare to ABC, approximating p(x |y∗) with the 256 samples closest to y∗. Note that with enough samples ABC would produce the true posterior. 
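The ABC baseline referred to here can be sketched in a few lines; sample_prior and simulate are placeholder callables standing in for the prior p(x) and the forward model s(x), and the nearest-samples (quantile-style) acceptance mirrors the description in appendix Sec. 7.

```python
import numpy as np

def abc_posterior(y_star, sample_prior, simulate, n_keep=256, n_sim=50_000, rng=None):
    """Rejection-style ABC: keep the n_keep prior samples whose simulated
    measurements fall closest to the observation y_star."""
    if rng is None:
        rng = np.random.default_rng()
    x = sample_prior(n_sim, rng)              # draw candidates from the prior p(x)
    y = simulate(x)                           # run the forward model y = s(x)
    d = np.linalg.norm(y - y_star, axis=1)    # distance of each simulation to y*
    keep = np.argsort(d)[:n_keep]             # accept the closest quantile
    return x[keep]

# Toy usage with a placeholder forward model y = [x_0 + x_1, x_0 - x_1]:
sample_prior = lambda n, rng: rng.normal(size=(n, 2))
simulate = lambda x: np.stack([x.sum(axis=1), x[:, 0] - x[:, 1]], axis=1)
post = abc_posterior(np.array([1.0, 0.0]), sample_prior, simulate)
print(post.shape)  # (256, 2) approximate posterior samples
```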
We performed 50 000 simulations to generate samples for ABC at test time, taking one week on a GPU, but still measure inconsistencies in the posteriors. The learning-based methods are trained within minutes, on a training set of 15 000 samples generated offline. Error measures. We are interested in both the accuracy (point estimates), and the shape of the posterior distributions. For point estimates x̂, i.e. MAP estimates, we compute the deviation from ground-truth values x∗ in terms of the RMSE over test set observations y∗, RMSE = √ Ey∗[‖x̂− x∗‖2]. The scores are reported both for the main parameter of interest sO2 , and the parameter subspace of sO2 , vhb, amie, which we found to be the only recoverable parameters. Furthermore, we check the re-simulation error: We apply the simulation s(x̂) to the point estimate, and compare the simulation outcome to the conditioning y∗. To evaluate the shape of the posteriors, we compute the calibration error for the sampling-based methods, based on the fraction of ground truth inliers αinl. for corresponding α-confidence-region of the marginal posteriors of x. The reported error is the median of |αinl. − α| over all α. All values are computed over 5000 test-set observations y∗, or 1000 observations in the case of re-simulation error. Each posterior uses 4096 samples, or 256 for ABC; all MAP estimates are found using the mean-shift algorithm. Quantitative results. Evaluation results for all methods are presented in Table 1. The INN matches or outperforms other methods in terms of point estimate error. Its accuracy deteriorates slightly when trained without Lx, and entirely when trained without the conditioning losses Ly and Lz, just as in Fig. 2. For our purpose, the calibration error is the most important metric, as it summarizes the correctness of the whole posterior distribution in one number (see appendix Fig. 11). Here, the INN has a big lead over cVAE(-IAF) and Dropout, and even over ABC due to the low ABC sample count. Qualitative results. Fig. 4 shows generated parameter distributions for one fixed measurement y∗, comparing the INN to cVAE-IAF, Dropout sampling and ABC. The three former methods use a sample count of 160 000 to produce smooth curves. Due to the sparse posteri- ors of 256 samples in the case of ABC, kernel density estimation was applied to its results, with a bandwidth of σ = 0.1. The results produced by the INN provide relevant insights: First, we find that the posteriors for layer thickness d and anisotropy g match the shape of their priors, i.e. y∗ holds no information about these parameters – they are unrecoverable. This finding is supported by the ABC results, whereas the other two methods misleadingly suggest a roughly Gaussian posterior. Second, we find that the sampled distributions for the blood volume fraction vhb and scattering amplitude amie are strongly correlated (rightmost plot). This phenomenon is not an analysis artifact, but has a sound physical explanation: As blood volume fraction increases, more light is absorbed inside the tissue. For the sensor to record the same intensities y∗ as before, scattering must be increased accordingly. In Fig. 10 in the appendix, we show how the INN is applied to real multispectral images. 5 Conclusion We have shown that the full posterior of an inverse problem can be estimated with invertible networks, both theoretically and practically on problems from medicine and astrophysics. 
We share the excitement of the application experts to develop INNs as a generic tool, helping them to better interpret their data and models, and to improve experimental setups. As a side effect, our results confirm the findings of others that the restriction to coupling layers does not noticeably reduce the expressive power of the network. In summary, we see the following fundamental advantages of our INN-based method compared to alternative approaches: Firstly, one can learn the forward process and obtain the (more complicated) inverse process ‘for free’, as opposed to e.g. cGANs, which focus on the inverse and learn the forward process only implicitly. Secondly, the learned posteriors are not restricted to a particular parametric form, in contrast to classical variational methods. Lastly, in comparison to ABC and related Bayesian methods, the generation of the INN posteriors is computationally very cheap. In future work, we plan to systematically analyze the properties of different invertible architectures, as well as more flexible models utilizing cycle losses, in the context of representative inverse problem. We are also interested in how our method can be scaled up to higher dimensionalities, where MMD becomes less effective. Acknowledgments LA received funding by the Federal Ministry of Education and Research of Germany, project ‘High Performance Deep Learning Framework’ (No 01IH17002). JK, CR and UK received financial support from the European Research Council (ERC) under the European Unions Horizon 2020 research and innovation program (grant agreement No 647769). SW and LMH received funding from the European Research Council (ERC) starting grant COMBIOSCOPY (637960). EWP, DR, and RSK acknowledge support by Collaborative Research Centre (SFB 881) ‘The Milky Way System’ (subprojects B1, B2 and B8), the Priority Program SPP 1573 ‘Physics of the Interstellar Medium’ (grant numbers KL 1358/18.1, KL 1358/19.2 and GL 668/2-1) and the European Research Council in the ERC Advanced Grant STARLIGHT (project no. 339177) 1 Proof of correctness of generated posteriors Lemma: If some bijective function f : x→ z transforms a probability density pX(x) to pZ(z), then the inverse function f−1 transforms pZ(z) back to pX(x). Proof: We denote the probability density obtained through the reverse transformation as p∗X(x). Therefore, we have to show that p ∗ X(x) = pX(x). For the forward direction, via the change-of-variables formula, we have pZ(z) = pX ( x = f−1(z) ) ∣∣det[∂z(f−1)]∣∣ (6) with the Jacobian ∂zf−1 ≡ ∂f−1i /∂zj . For the reverse transformation, we have p∗X(x) = pZ ( z = f(x) ) |det[∂xf ]| . (7) We can substitute pZ from Eq. 6 and obtain p∗X(x) = pX ( x = f−1(f(x)) ) ∣∣∣det [(∂z(f−1))(∂xf)]∣∣∣ (8) = pX(x) ∣∣∣det [(∂zf−1)(∂xf)]∣∣∣ (9) = pX(x) |det[I]| = pX(x). (10) In Eq. 9, the Jacobians cancel out due to the inverse function theorem, i.e. the Jacobian ∂z(f −1) is the matrix inverse of ∂xf . Theorem: If an INN f(x) = [y, z] is trained as proposed, and both the supervised loss Ly=E[(y−fy(x))2] and the unsupervised loss Lz=D ( q(y, z), p(y) p(z) ) reach zero, sampling according to Eq. 1 with g=f−1 returns the true posterior p(x |y∗) for any measurement y∗. Proof: We denote the chosen latent distribution as pZ(z), the distribution of observations as pY (y), and the joint distribution of network outputs as q(y, z). As shown by Gretton et al. 
(2012), if the MMD loss converges to 0, the network outputs follow the prescribed distribution: Lz = 0 ⇐⇒ q(y, z) = pY (y) pZ(z) (11) Suppose we take a posterior conditioned on a fixed y∗, i.e. p(x |y∗), and transform it using the forward pass of our perfectly converged INN. From this we obtain an output distribution q∗(y, z). Because Ly = 0, we know that the output distribution of y (marginalized over z) must be q∗(y) = δ(y−y∗). Also, because of the independence between z and y in the output, the distribution of z-outputs is still q∗(z) = pZ(z). So the joint distribution of outputs is q∗(y, z) = δ(y − y∗) pZ(z) (12) When we invert the network, and repeatedly input y∗ while sampling z ∼ pZ(z), this is the same as sampling [y, z] from the q∗(y, z) above. Using the Lemma from above, we know that the inverted network will output samples from p(x |y∗). Corollary: If the conditions of the theorem above are fulfilled, the unsupervised reverse loss Lx = D ( q(x), pX(x) ) between the marginalized outputs of the inverted network, q(x), and the prior data distribution, pX(x), will also be 0. This justifies using the loss on the prior to speed up convergence, without altering the final results. Proof: Due to the theorem, the estimated posteriors generated by the INN are correct, i.e. q(x |y∗) = p(x |y∗). If they are marginalized over observations y∗ from the training data, then q(x) will be equal to pX(x) by definition. As shown by Gretton et al. (2012), this is equivalent to Lx = 0. 2 Artificial data – Gaussian mixture In Sec. 4.1, we demonstrate that the proposed INN can approximate the true posteriors very well and is not hindered by the required coupling block architecture. Here we show how some existing methods do on the same task, using neural networks of similar size as the INN. cGAN Training a conditional GAN of network size comparable to the INN (counting only the generator) and only two noise dimensions turned out to be challenging. Even with additional pre-training to avoid mode collapse, the individual modes belonging to one label are reduced to nearly one-dimensional structures. Larger cGAN In order to match the results of the INN, we trained a more complex cGAN with 2M parameters instead of the previous 10K, and a latent dimension of 128, instead of 2. To prevent mode collapse, we introduced an additional regularization: an extra loss term forces the variance of generator outputs to match the variance of the training data prior. With these changes, the cGAN can be seen to recover the posteriors reasonably well. Generator + MMD Another option is to keep the cGAN generator the same size as our INN, but replace the discriminator with an MMD loss (cf. Sec. 3.4). This loss receives a concatenation of the generator output x and the label y it was supplied with, and compares these batch-wise with the concatenation of ground truth (x,y)-pairs. Note that in contrast to this, the corresponding MMD loss of the INN only receives x, and no information about y. For this small toy problem, we find that the hand-crafted MMD loss dramatically improves results compared to the smaller learned discriminator. cVAE We also compare to a conditional Variational Autoencoder of same total size as the INN. There is some similarity between the training setup of our method (Fig. 7, right) and that of cVAE (Fig. 7, left), as the forward and inverse pass of an INN can also be seen as an encoder-decoder pair. 
The main differences are that the cVAE learns the relationship x → y only indirectly, since there is no explicit loss for it, and that the INN requires no reconstruction loss, since it is bijective by construction. cVAE-IAF We adapt the cVAE to use Inverse Autoregressive Flow (Kingma et al., 2016) between the encoder and decoder. On the Gaussian mixture toy problem, the trained cVAE-IAF generates correct posteriors on par with our INN (see Fig. 6). Dropout sampling The method of dropout sampling with learned error terms is by construction not able to produce multi-modal outputs, and therefore fails on this task. 2.1 Latent space analysis To analyze how the latent space of our INN is structured for this task, we choose a fixed label y∗ and sample z from a dense grid. For each z, we compute x through our inverse network and colorize this point in latent (z) space according to the distance from the closest mode in x-space. We can see that our network learns to shape the latent space such that each mode receives the expected fraction of samples (Fig. 8). 3 Artificial data – inverse kinematics A short video demonstrating the structure of our INN’s latent space can be found under https://gfycat.com/SoggyCleanHog, for a slightly different arm setup. The dataset is constructed using Gaussian priors xi ∼ N(0, σi), with σ1 = 0.25 and σ2 = σ3 = σ4 = 0.5 (≙ 28.65◦). The forward process is given by y1 = x1 + l1 sin(x2) + l2 sin(x3 − x2) + l3 sin(x4 − x2 − x3) (13) and y2 = l1 cos(x2) + l2 cos(x3 − x2) + l3 cos(x4 − x2 − x3) (14) with the arm lengths l1 = 0.5, l2 = 0.5, l3 = 1.0. To judge the quality of posteriors, we quantify both the re-simulation error and the calibration error over the test set, as in Sec. 4.2 of the paper. Because of the cheap simulation, we average the re-simulation error over the whole posterior, and not only the MAP estimate. In Table 2, we find that the INN has a clear advantage in both metrics, confirming the observations from Fig. 3. 4 Multispectral measurements of biological tissue The following figure shows the results when the INN trained in Sec. 4.2 is applied pixel-wise to multispectral endoscopic footage. In addition to estimating the oxygenation sO2, we measure the uncertainty in the form of the 68% confidence interval. Panels: a) median sO2 (color scale 0.40–0.70), b) estimated uncertainty (color scale 0.05–0.13), c) RGB image. 5 Star cluster spectral data Results for one specific y are shown in Fig. 5. Note that our network recovers a decidedly multimodal distribution of x that visibly deviates from the prior p(x). Note also the strong correlations in the system. For example, the measurements y∗ investigated may correspond to a young cluster with large expansion velocity, or to an older system that expands slowly. Finding these ambiguities in p(x | y∗) and identifying degeneracies in the underlying model are pivotal aspects of astrophysical research, and a method to effectively approximate full posterior distributions has the potential to lead to a major breakthrough in this field. 6 Calibration curve for tissue parameter estimation In Sec. 4.2, we report the median calibration error for each method. The following figure plots the calibration error, qinliers − q, against the level of confidence q. Negative values mean that a model is overconfident, while positive values say the opposite.
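A sketch of how such a calibration curve can be computed from posterior samples is given below; it assumes central credible intervals on each marginal posterior, which is one reasonable reading of the α-confidence regions described in Sec. 4.2 rather than the authors' exact implementation.

```python
import numpy as np

def calibration_curve(post_samples, x_true, qs=np.linspace(0.05, 0.95, 19)):
    """post_samples: (n_obs, n_samples, n_params) posterior draws per observation y*.
    x_true: (n_obs, n_params) ground-truth parameters.
    Returns q_inliers - q for each confidence level q (negative = overconfident)."""
    errs = []
    for q in qs:
        lo = np.quantile(post_samples, 0.5 - q / 2, axis=1)
        hi = np.quantile(post_samples, 0.5 + q / 2, axis=1)
        inside = (x_true >= lo) & (x_true <= hi)      # per-parameter inlier check
        errs.append(inside.mean() - q)                # fraction of inliers minus q
    return qs, np.array(errs)

def median_calibration_error(post_samples, x_true):
    # Median of |q_inliers - q| over all q, as reported in Table 1.
    _, errs = calibration_curve(post_samples, x_true)
    return np.median(np.abs(errs))

# Toy check: ground truth drawn from the same distribution as the posterior samples
# should yield a calibration error close to 0.
rng = np.random.default_rng(3)
mu = rng.normal(size=(500, 1, 3))                      # per-observation posterior mean
truth = (mu + rng.normal(size=(500, 1, 3)))[:, 0, :]   # ground truth drawn from the posterior
samples = mu + rng.normal(size=(500, 4096, 3))         # posterior draws from the same distribution
print(median_calibration_error(samples, truth))
```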
7 Approximate Bayesian computation (ABC) While there is a whole field of research concerned with ABC approaches and their efficiencyaccuracy tradeoffs, our use of the method here is limited to the essential principle of rejection sampling. When we require N samples of x from the posterior p(x |y∗) conditioned on some y∗, there are two basic ways to obtain them: Threshold: We set an acceptance threshold , repeatedly draw x-samples from the prior, compute the corresponding y-values (via simulation) and keep those where dist(y,y∗) < , until we have accepted N samples. The smaller we want , the more simulations have to be run, which is why we use this approach only for the experiment in Sec. 4.1, where we can afford to run the forward process millions or even billions of times. Quantile: Alternatively, we choose what quantile q of samples shall be accepted, and then run exactly N/q simulations. All sampled pairs (x,y) are sorted by dist(y,y∗) and the N closest to y∗ form the posterior. This allows for a more predictable runtime when the simulations are costly, as in the medical application in Sec. 4.2 where q = 0.005. 8 Details of datasets and network architectures Table 3 summarizes the datasets used throughout the paper. The architecture details are given in the following. 8.1 Artificial data – Gaussian mixture INN: 3 invertible blocks, 3 fully connected layers per affine coefficient function with ReLU activation functions in the intermediate layers, zero padding to a nominal dimension of 16, Adam optimizer, decaying learning rate from 10−3 to 10−5, batch size 200. The inverse multiquadratic kernel was used for MMD, with h = 0.2 in both x- and z-space. Dropout sampling: 6 fully connected layers with ReLU activations, Adam optimizer, learning rate decay from 10−3 to 10−5, batch size 200, dropout probability p = 0.2. cGAN: 6 fully connected layers for the generator and 8 for the discriminator, all with leaky ReLU activations. Adam was used for the generator, SGD for the discriminator, learning rates decaying from 2 · 10−3 to 2 · 10−6, batch size 256. Initially 100 iterations training with L = 1N ∑ i ‖g(zi, yi)− xi‖22, to separate the differently labeled modes, followed by pure GAN training. Larger cGAN: 2 fully connected layers with 1024 neurons each for discriminator and generator, batch size 512, Adam optimizer with learning rate 8 · 10−4 for the generator, SGD with learning rate 1.2 ·10−3 and momentum 0.05 for the discriminator, 1.6 ·10−3 weight decay for both, 0.25 dropout probabiliy for the generator at training and test time. Equal weighting of discriminator loss and penalty of output variance L = (Vari[g(zi, yi)]−Vari[xi])2 Generator with MMD: 8 fully connected layers with leaky ReLU activations, Adam optimizer, decaying learning rate from 10−3 to 10−6, batch size 256. Inverse multiquadratic kernel, h = 0.5. cVAE: 3 fully connected layers each for encoder and decoder, ReLU activations, learning rate 2 · 10−2, decay to 2.5 · 10−5, Adam optimizer, batch size 25, reconstruction loss weighted 50:1 versus KL divergence loss. 8.2 Artificial data – inverse kinematics INN: 6 affine coupling blocks with 3 fully connected layers each and leaky ReLU activations. Adam optimizer, decaying learning rate from 10−2 to 10−4, multiquadratic kernel with h = 1.2. cVAE: 4 fully connected layers each for encoder and decoder, ReLU activations, learning rate 5 ·10−3, decay to 1.6 ·10−5, Adam optimizer, batch size 250, reconstruction loss weighted 15:1 versus KL divergence loss. 
8.3 Functional parameter estimation from multispectral tissue images INN: 3 invertible blocks, 4 fully connected layers per affine coefficient function with leaky ReLUs in the intermediate layers, zero padding to double the original width. Adam optimizer, learning rate decay from 2 · 10−3 to 2 · 10−5, batch size 200. Inverse multiquadratic kernel with h = 1, weighted MMD terms by observation distance with decaying γ = 0.2 to 0. Dropout sampling/point estimate: 8 fully connected layers, ReLU activations, Adam with decaying learning rate from 10−2 to 10−5, batch size 100, dropout probability p = 0.2. cVAE: 4 fully connected layers each for encoder and decoder, ReLU activations, learning rate 10−3, decay to 3.2 · 10−6, Adam optimizer, batch size 25, reconstruction loss weighted 103:1 versus KL divergence loss. 8.4 Impact of star clusters on the dynamical evolution of the galactic gas INN: 5 invertible blocks, 4 fully connected layers per affine coefficient function with leaky ReLUs in the intermediate layers, no additional zero padding. Adam optimizer with decaying learning rate from 2 · 10−3 to 1.5 · 10−6, batch size 500. Kernel for latent space: k(z, z′) = exp(−‖(z − z′)/h‖2) with h = 7.1. Kernel for x-space: k(x,x′) = −‖x − x′‖1/41/2. Due to the complex nature of the prior distributions, this was the kernel found to capture the details correctly, whereas the peak of the inverse multiquadratic kernel was too broad for this purpose.
1. What are the strengths and advantages of the proposed INN method compared to other generative methods such as GAN and VAE? 2. How does the bidirectional training work, and what are the choices and considerations behind the loss ratios and iteration numbers? 3. Can you elaborate on the purpose and effectiveness of padding the input and output of the network with zeros, and how does it impact the propagation of information among variables? 4. Are there any limitations or challenges when applying INN to high-dimensional data, such as images, and how might they be addressed?
Review
Review While the invertible model structure itself is essentially the same as Real-NVP, the use of observation variables in the framework, together with theoretically sound bidirectional training, allows the seemingly naïve inclusion of y to be used safely (i.e., y and z can be made independent). Its ability to model the posterior distributions of the inputs is supported by both quantitative and qualitative experiments. The demonstration on practical examples is a plus. The advantage of INN, however, is not crystal clear to me versus other generative methods such as GAN and VAE. This is an interesting paper overall, so I am looking forward to further discussion. Pros: 1. Extensive analyses of the possibility of modeling posterior distributions with an INN have been shown. Detailed experiment setups are provided in the appendix. 2. The theoretical guarantee (with some assumptions) of recovering the true posterior might be beneficial in practice for relatively low-dimensional or less complex tasks. Comments/Questions: 1. From the generative model point of view, could the authors elaborate on the comparison against cGAN (aside from the descriptions in Appendix 2)? It is quoted that “cGAN…often lack satisfactory diversity in practice”. Also, can cGAN be used to estimate the density of X (posterior or not)? 2. For the bidirectional training, did the ratios of the losses (L_z, L_y, L_x) have to be changed, or did the iterations of forward/backward training have to be changed (e.g., 1 forward, 1 backward vs. 2 forward, 1 backward)? This question comes from my observation that the nature of the losses, especially L_y vs. L_z, L_x (i.e., supervised vs. unsupervised), seems to be different. 3. “we find it advantageous to pad both the in- and output of the network with equal number of zeros”: Is this to effectively increase the intermediate network dimensions? Also, does this imply that for both the forward and inverse process those zero-padded entries always come out to be zero? It seems that there needs to be some way to enforce them to be zero to ensure that the propagation happens only among the entries belonging to the variables of interest (x, y and z). 4. It seems that most of the experiments are done on relatively low-dimensional data. This is not necessarily a drawback; I am curious whether this model could succeed on higher-dimensional data (e.g., images), especially with the observation y.
ICLR
Title Solving stochastic weak Minty variational inequalities without increasing batch size Abstract This paper introduces a family of stochastic extragradient-type algorithms for a class of nonconvex-nonconcave problems characterized by the weak Minty variational inequality (MVI). Unlike existing results on extragradient methods in the monotone setting, employing diminishing stepsizes is no longer possible in the weak MVI setting. This has led to approaches such as increasing batch sizes per iteration which can however be prohibitively expensive. In contrast, our proposed methods involves two stepsizes and only requires one additional oracle evaluation per iteration. We show that it is possible to keep one fixed stepsize while it is only the second stepsize that is taken to be diminishing, making it interesting even in the monotone setting. Almost sure convergence is established and we provide a unified analysis for this family of schemes which contains a nonlinear generalization of the celebrated primal dual hybrid gradient algorithm. N/A 1 Introduction Stochastic first-order methods have been at the core of the current success in deep learning applications. These methods are mostly well-understood for minimization problems at this point. This is even the case in the nonconvex setting where there exists matching upper and lower bounds on the complexity for finding an approximately stable point (Arjevani et al., 2019). The picture becomes less clear when moving beyond minimization into nonconvex-nonconcave minimax problems—or more generally nonmonotone variational inequalities. Even in the deterministic case, finding a stationary point is in general intractable (Daskalakis et al., 2021; Hirsch & Vavasis, 1987). This is in stark contrast with minimization where only global optimality is NP-hard. An interesting nonmonotone class for which we do have efficient algorithms is characterized by the so called weak Minty variational inequality (MVI) (Diakonikolas et al., 2021). This problem class captures nontrivial structures such as attracting limit cycles and is governed by a parameter ρ whose negativity increases the degree of nonmonotonicity. It turns out that the stepsize γ for the exploration step in extragradient-type schemes lower bounds the problem class through ρ > −γ/2 (Pethick et al., 2022). In other words, it seems that we need to take γ large to guarantee convergence for a large class. This reliance on a large stepsize is at the core of why the community has struggled to provide a stochastic variants for weak MVIs. The only known results effectively increase the batch size at every iteration (Diakonikolas et al., 2021, Thm. 4.5)—a strategy that would be prohibitively expensive in most machine learning applications. Pethick et al. (2022) proposed (SEG+) which attempts to tackle the noise by only diminishing the second stepsize. This suffices in the special case of unconstrained quadratic games but can fail even in the monotone case as illustrated in Figure 1. This naturally raises the following research question: Can stochastic weak Minty variational inequalities be solved without increasing the batch size? We resolve this open problem in the affirmative when the stochastic oracles are Lipschitz in mean, with a modification of stochastic extragradient called bias-corrected stochastic extragradient (BCSEG+). The scheme only requires one additional first order oracle call, while crucially maintaining the fixed stepsize. 
Specifically, we make the following contributions: (i) We show that it is possible to converge for weak MVI without increasing the batch size, by introducing a bias-correction term. The scheme introduces no additional hyperparameters and recovers the maximal range ρ ∈ (−γ/2,∞) of explicit deterministic schemes. The rate we establish is interesting already in the star-monotone case, where only asymptotic convergence of the norm of the operator was known when refraining from increasing the batch size (Hsieh et al., 2020, Thm. 1). Our result additionally carries over to another class of problems treated in Appendix G, which we call negative weak MVIs. (ii) We generalize the result to a whole family of schemes that can treat constrained and regularized settings. First and foremost, the class includes a generalization of the forward-backward-forward (FBF) algorithm of Tseng (2000) to stochastic weak MVIs. The class also contains a stochastic nonlinear extension of the celebrated primal dual hybrid gradient (PDHG) algorithm (Chambolle & Pock, 2011). Both methods are obtained as instantiations of the same template scheme, thus providing a unified analysis and revealing an interesting requirement on the update under weak MVI when only stochastic feedback is available. (iii) We prove almost sure convergence under the classical Robbins-Monro stepsize schedule of the second stepsize. This provides a guarantee on the last iterate, which is especially important in the nonmonotone case, where average guarantees cannot be converted into a single candidate solution. Almost sure convergence is challenging already in the monotone case, where even stochastic extragradient may not converge (Hsieh et al., 2020, Fig. 1). 2 Related work Weak MVI Diakonikolas et al. (2021) was the first to observe that an extragradient-like scheme called extragradient+ (EG+) converges globally for weak MVIs with ρ ∈ (−1/8LF ,∞). This result was later tightened to ρ ∈ (−1/2LF ,∞) and extended to constrained and regularized settings in (Pethick et al., 2022). A single-call variant has been analysed in Böhm (2022). Weak MVI is a star variant of cohypomonotonicity, for which an inexact proximal point method was originally studied in Combettes & Pennanen (2004). Later, a tight characterization was carried out by Bauschke et al. (2021) for the exact case. It was shown that acceleration is achievable for an extragradient-type scheme even for cohypomonotone problems (Lee & Kim, 2021). Despite this array of positive results, the stochastic case is largely untreated for weak MVIs. The only known result (Diakonikolas et al., 2021, Theorem 4.5) requires the batch size to be increasing. Similarly, the accelerated method in Lee & Kim (2021, Thm. 6.1) requires the variance of the stochastic oracle to decrease as O(1/k). Stochastic & monotone When more structure is present the story is different, since diminishing stepsizes become permissible. In the monotone case, rates for the gap function were obtained for stochastic Mirror-Prox in Juditsky et al. (2011) under a bounded domain assumption, which was later relaxed for the extragradient method under additional assumptions (Mishchenko et al., 2020). The norm of the operator was shown to asymptotically converge for unconstrained MVIs in Hsieh et al.
(2020) with a double stepsize policy. There exists a multitude of extensions for monotone problems: Single-call stochastic methods are covered in detail by Hsieh et al. (2019), variance reduction was applied to Halpern-type iterations (Cai et al., 2022), cocoercivity was used in Beznosikov et al. (2022), and bilinear games studied in Li et al. (2022). Beyond monotonicity, a range of structures have been explored such as MVIs (Song et al., 2020), pseudomonotonicity (Kannan & Shanbhag, 2019; Boţ et al., 2021), two-sided Polyak-Łojasiewicz condition (Yang et al., 2020), expected cocoercivity (Loizou et al., 2021), sufficiently bilinear (Loizou et al., 2020), and strongly star-monotone (Gorbunov et al., 2022). Variance reduction The assumptions we make about the stochastic oracle in Section 3 are similar to what is found in the variance reduction literature (see for instance Alacaoglu & Malitsky (2021, Assumption 1) or Arjevani et al. (2019)). However, our use of the assumption are different in a crucial way. Whereas the variance reduction literature uses the stepsize γ ∝ 1/LF̂ (see e.g. Alacaoglu & Malitsky (2021, Theorem 2.5)), we aim at using the much larger γ ∝ 1/LF . For instance, in the special case of a finite sum problem of size N, the mean square smoothness constant LF̂ from Assumption III can be √ N times larger than LF (see Appendix I for details). This would lead to a prohibitively strict requirement on the degree of allowed nonmonotonicity through the relationship ρ > −γ/2. Bias-correction The idea of adding a correction term has also been exploited in minimization, specifically in the context of compositional optimization Chen et al. (2021). Due to their distinct problem setting it suffices to simply extend stochastic gradient descent (SGD), albeit under additional assumptions such as (Chen et al., 2021, Assumption 3). In our setting, however, SGD is not possible even when restricting ourselves to monotone problems. 3 Problem formulation and preliminaries We are interested in finding z ∈ n such that the following inclusion holds, 0 ∈ Tz := Az + Fz. (3.1) A wide range of machine learning applications can be cast as an inclusion. Most noticeable, a structured minimax problem can be reduced to (3.1) as shown in Section 8.1. We will rely on common notation and concepts from monotone operators (see Appendix B for precise definitions). Assumption I. In problem (3.1), (i) The operator F : n → n is LF-Lipschitz with LF ∈ [0,∞), i.e., ∥Fz − Fz′∥ ≤ LF∥z − z′∥ ∀z, z′ ∈ n. (3.2) (ii) The operator A : n ⇒ n is a maximally monotone operator. (iii) Weak Minty variational inequality (MVI) holds, i.e., there exists a nonempty set S⋆ ⊆ zer T such that for all z⋆ ∈ S⋆ and some ρ ∈ (− 12LF ,∞) ⟨v, z − z⋆⟩ ≥ ρ∥v∥2, for all (z, v) ∈ gph T. (3.3) Remark 1. In the unconstrained and smooth case (A ≡ 0), Assumption I(iii) reduces to ⟨Fz, z−z⋆⟩ ≥ ρ∥Fz∥2 for all z ∈ n. When ρ = 0 this condition reduces to the MVI (i.e. star-monotonicity), while negative ρ makes the problem increasingly nonmonotone. Interestingly, the inequality is not symmetric and one may instead consider that the assumption holds for −F. Through this observation, Appendix G extends the reach of the extragradient-type algorithms developed for weak MVIs. Stochastic oracle We assume that we cannot compute Fz easily, but instead we have access to the stochastic oracle F̂(z, ξ), which we assume is unbiased with bounded variance. 
We additionally assume that z 7→ F̂(z, ξ) is LF̂ Lipschitz continuous in mean and that it can be simultaneously queried under the same randomness. Assumption II. For the operator F̂(·, ξ) : n → n the following holds. (i) Two-point oracle: The stochastic oracle can be queried for any two points z, z′ ∈ n, F̂(z, ξ), F̂(z′, ξ) where ξ ∼ P. (3.4) (ii) Unbiased: Eξ [ F̂(z, ξ) ] = Fz ∀z ∈ n. (iii) Bounded variance: Eξ [ ∥F̂(z, ξ) − F̂(z)∥2 ] ≤ σ2F ∀z ∈ n. Assumption III. The operator F̂(·, ξ) : n → n is Lipschitz continuous in mean with LF̂ ∈ [0,∞): Eξ [ ∥F̂(z, ξ) − F̂(z′, ξ)∥2 ] ⩽ L2 F̂ ∥z − z′∥2 for all z, z′ ∈ n. (3.5) Remark 2. Assumptions II(i) and III are also common in the variance reduction literature (Fang et al., 2018; Nguyen et al., 2019; Alacaoglu & Malitsky, 2021), but in contrast with variance reduction we will not necessarily need knowledge of LF̂ to specify the algorithm, in which case the problem constant will only affect the complexity. Crucially, this decoupling of the stepsize from LF̂ will allow the proposed scheme to converge for a larger range of ρ in Assumption I(iii). Finally, note that Assumption II(i) commonly holds in machine learning applications, where usually the stochasticity is induced by the sampled mini-batch. 4 Method To arrive at a stochastic scheme for weak MVI we first need to understand the crucial ingredients in the deterministic setting. For simplicity we will initially consider the unconstrained and smooth Algorithm 1 (BC-SEG+) Stochastic algorithm for problem (3.1) when A ≡ 0 Require z−1 = z̄−1 = z0 ∈ n αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) Repeat for k = 0, 1, . . . until convergence 1.1: Sample ξk ∼ P 1.2: z̄k = zk − γF̂(zk, ξk) + (1 − αk) ( z̄k−1 − zk−1 + γF̂(zk−1, ξk) ) 1.3: Sample ξ̄k ∼ P 1.4: zk+1 = zk − αkγF̂(z̄k, ξ̄k) Return zk+1 setting, i.e. A ≡ 0 in (3.1). The first component is taking the second stepsize α smaller as done in extragradient+ (EG+), z̄k = zk − γFzk zk+1 = zk − αγFz̄k (EG+) where α ∈ (0, 1). Convergence in weak MVI was first shown in Diakonikolas et al. (2021) and later tightened by Pethick et al. (2022), who characterized that smaller α allows for a larger range of the problem constant ρ. Taking α small is unproblematic for a stochastic scheme where usually the stepsize is taken diminishing regardless. However, Pethick et al. (2022) also showed that the extrapolation stepsize γ plays a critical role for convergence under weak MVI. Specifically, they proved that a larger stepsize γ leads to a looser bound on the problem class through ρ > −γ/2. While a lower bound has not been established we provide an example in Figure 3 of Appendix H where small stepsize prevents convergence. Unfortunately, picking γ large (e.g. as γ = 1/LF) causes significant complications in the stochastic case where both stepsizes are usually taken to be diminishing as in the following scheme, z̄k = zk − βkγF̂(zk, ξk) with ξk ∼ P zk+1 = zk − αkγF̂(z̄k, ξ̄k) with ξ̄k ∼ P (SEG) where αk = βk ∝ 1/k. Even with a two-timescale variant (when βk > αk) it has only been possible to show convergence for MVI (i.e. when ρ = 0) (Hsieh et al., 2020). Instead of decreasing both stepsizes, Pethick et al. (2022) proposes a scheme that keeps the first stepsize constant, z̄k = zk − γF̂(zk, ξk) with ξk ∼ P zk+1 = zk − αkγF̂(z̄k, ξ̄k) with ξ̄k ∼ P (SEG+) However, (SEG+) does not necessarily converge even in the monotone case as we illustrate in Figure 1. The non-convergence stems from the bias term introduced by the randomness of z̄k in F̂(z̄k, ξ̄k). 
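As a concrete reading of the Algorithm 1 (BC-SEG+) box above, the following sketch runs the iteration on a toy bilinear problem; the operator, noise model, and stepsize schedule are illustrative assumptions, not the paper's experiments. Note how Step 1.2 evaluates the oracle at both zk and zk−1 with the same sample ξk, realizing the two-point oracle of Assumption II(i).

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])         # toy bilinear saddle operator F(z) = A z, zer F = {0}
L_F = 1.0

def F_hat(z, xi):
    # Illustrative stochastic oracle: true operator plus noise determined by the sample xi.
    return A @ z + 0.1 * np.random.default_rng(xi).normal(size=2)

gamma = 0.9 / L_F                                # fixed exploration stepsize, gamma < 1/L_F
z = z_prev = z_bar_prev = np.array([2.0, 1.0])   # Require: z^{-1} = zbar^{-1} = z^0
for k in range(2000):
    alpha = 1.0 / np.sqrt(k + 10)                # diminishing second stepsize alpha_k in (0, 1)
    xi = int(rng.integers(1 << 31))              # Step 1.1
    z_bar = (z - gamma * F_hat(z, xi)
             + (1 - alpha) * (z_bar_prev - z_prev + gamma * F_hat(z_prev, xi)))   # Step 1.2
    xi_bar = int(rng.integers(1 << 31))          # Step 1.3
    z_next = z - alpha * gamma * F_hat(z_bar, xi_bar)                             # Step 1.4
    z_prev, z_bar_prev, z = z, z_bar, z_next

print(np.linalg.norm(A @ z))                     # ||F z^k|| shrinks towards the noise floor
```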
Intuitively, the role of z̄k is to approximate the deterministic exploration step ˜̄zk := zk − γFzk. While z̄k is an unbiased estimate of ˜̄zk this does not imply that F̂(z̄k, ξ̄k) is an unbiased estimate of F(˜̄zk). Unbiasedness only holds in special cases, such as when F is linear and A ≡ 0 for which we show convergence of (SEG+) in Section 5 under weak MVI. In the monotone case it suffice to take the exploration stepsize γ diminishing (Hsieh et al., 2020, Thm. 1), but this runs counter to the fixed stepsize requirement of weak MVI. Instead we propose bias-corrected stochastic extragradient+ (BC-SEG+) in Algorithm 1. BC-SEG+ adds a bias correction term of the previous operator evaluation using the current randomness ξk. This crucially allows us to keep the first stepsize fixed. We further generalize this scheme to constrained and regularized setting with Algorithm 2 by introducing the use of the resolvent, (id + γA)−1. 5 Analysis of SEG+ In the special case where F is affine and A ≡ 0 we can show convergence of (SEG+) under weak MVI up to arbitrarily precision even with a large stepsize γ. Theorem 5.1. Suppose that Assumptions I and II hold. Assume Fz := Bz + v and choose αk ∈ (0, 1) and γ ∈ (0, 1/LF) such that ρ ≥ γ(αk − 1)/2. Consider the sequence (zk)k∈ generated by (SEG+). Then for all z⋆ ∈ S⋆, K∑ k=0 αk∑K j=0 α j E∥Fzk∥2 ≤ ∥z 0−z⋆∥2+γ2(γ2L2F+1)σ2F ∑K j=0 α 2 j γ2(1−γ2L2F ) ∑K j=0 α j . (5.1) The underlying reason for this positive results is that F̂(z̄k, ξ̄k) is unbiased when F is linear. This no longer holds when either linearity of F is dropped or when the resolvent is introduced for A . 0, in which case the scheme only converges to a γ-dependent neighborhood as illustrated in Figure 1. This is problematic in weak MVI where γ cannot be taken arbitrarily small (see Figure 3 of Appendix H). 6 Analysis for unconstrained and smooth case For simplicity we first consider the case where A ≡ 0. To mitigate the bias introduced in F(z̄k, ξ̄k) for (SEG+), we propose Algorithm 1 which modifies the exploration step. The algorithm can be seen as a particular instance of the more general scheme treated in Section 7. Theorem 6.1. Suppose that Assumptions I to III hold. Suppose in addition that γ ∈ (⌊−2ρ⌋+, 1/LF) and (αk)k∈ ⊂ (0, 1) is a diminishing sequence such that 2γLF̂ √ α0 + ( 1 + ( 1+γ2L2F 1−γ2L2F γ2L2F ) γ2L2 F̂ ) α0 ≤ 1 + 2ργ . (6.1) Then, the following estimate holds for all z⋆ ∈ S⋆ E[∥F(zk⋆ )∥2] ≤ (1 + ηγ2L2F)∥z0 − z⋆∥2 +Cσ2Fγ2 ∑K j=0 α 2 j µ ∑K j=0 α j (6.2) where C = 1 + 2η ( (γ2L2 F̂ + 1) + 2α0 ) , η = 12 1+γ2L2F 1−γ2L2F γ2L2F + 1 γLF̂ √ α0 , µ = γ2(1 − γ2L2F)/2 and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 6.2. As α0 → 0, the requirement (6.1) reduces to ρ > −γ/2 as in the deterministic setting of Pethick et al. (2022). Letting αk = α0/ √ k+r the rate becomes O(1/√k), thus matching the rate for the gap function of stochastic extragradient in the monotone case (see e.g. Juditsky et al. (2011)). The above result provides a rate for a random iterate as pioneered by Ghadimi & Lan (2013). Showing last iterate results even asymptotically is more challenging. Already in the monotone case, vanilla (SEG) (where βk = αk) only has convergence guarantees for the average iterate (Juditsky et al., 2011). In fact, the scheme can cycle even in simple examples (Hsieh et al., 2020, Fig. 1). 
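For concreteness, here is a minimal sketch of Algorithm 1 (BC-SEG+) in the setting of Theorem 6.1. The oracle interface, the array types, and the stepsize schedule αk = α0/√(k+r) from Remark 6.2 are choices made for the illustration rather than requirements of the analysis.

```python
import numpy as np

def bc_seg_plus(F_hat, sample_xi, z0, gamma, alpha0=0.5, r=1, num_iter=10_000):
    """Sketch of BC-SEG+ (Algorithm 1) for the unconstrained case A = 0.

    F_hat(z, xi) : stochastic oracle; must support two-point queries under the same xi.
    sample_xi()  : draws a fresh sample xi ~ P.
    gamma        : fixed exploration stepsize in (max(0, -2*rho), 1/L_F).
    alpha_k      : diminishing second stepsize, here alpha0 / sqrt(k + r).
    """
    z_prev = z0.copy()
    zbar_prev = z0.copy()
    z = z0.copy()
    for k in range(num_iter):
        alpha_k = alpha0 / np.sqrt(k + r)
        xi = sample_xi()
        # Step 1.2: bias-corrected exploration step; note that the same xi is used
        # to evaluate the oracle at both z^k and z^{k-1}.
        zbar = z - gamma * F_hat(z, xi) \
               + (1 - alpha_k) * (zbar_prev - z_prev + gamma * F_hat(z_prev, xi))
        xi_bar = sample_xi()
        # Step 1.4: update with the small stepsize alpha_k * gamma.
        z_next = z - alpha_k * gamma * F_hat(zbar, xi_bar)
        z_prev, zbar_prev, z = z, zbar, z_next
    return z
```

In the experiments of Section 9 the oracle is simply F̂(z, ξ) = Fz + ξ with Gaussian ξ, so sample_xi can return the noise vector itself and F_hat adds it to Fz.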
Under the classical (but more restrictive) Robbins-Monro stepsize policy, it is possible to show almost sure convergence for the iterates generates by Algorithm 1. The following theorem demonstrates the result in the particular case of αk = 1/k+r. The more general statement is deferred to Appendix D. Theorem 6.3 (almost sure convergence). Suppose that Assumptions I to III hold. Suppose γ ∈ (⌊−2ρ⌋+, 1/LF), αk = 1k+r for any positive natural number r and (γLF̂ + 1)αk + 2 ( 1+γ2L2F 1−γ2L2F γ4L2F L 2 F̂ αk+1 + γLF̂ ) (αk+1 + 1)αk+1 ≤ 1 + 2ργ . (6.3) Algorithm 2 (BC-PSEG+) Stochastic algorithm for problem (3.1) Require z−1 = z0 ∈ n, h−1 ∈ n, αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) Repeat for k = 0, 1, . . . until convergence 2.1: Sample ξk ∼ P 2.2: hk = ( zk − γF̂(zk, ξk) ) + (1 − αk) ( hk−1 − (zk−1 − γF̂(zk−1, ξk))) 2.3: z̄k = (id + γA)−1hk 2.4: Sample ξ̄k ∼ P 2.5: zk+1 = zk − αk ( hk − z̄k + γF̂(z̄k, ξ̄k) ) Return zk+1 Then, the sequence (zk)k∈ generated by Algorithm 1 converges almost surely to some z ⋆ ∈ zer T. Remark 6.4. As αk → 0 the condition on ρ reduces to ρ > −γ/2 like in the deterministic case. To make the results more accessible, both theorems have made particular choices of the free parameters from the proof, that ensures convergence for a given ρ and γ. However, since the parameters capture inherent tradeoffs, the choice above might not always provide the tightest rate. Thus, the more general statements of the theorems have been preserved in the appendix. 7 Analysis for constrained case The result for the unconstrained smooth case can be extended when the resolvent is available. Algorithm 2 provides a direct generalization of the unconstrained Algorithm 1. The construction relies on approximating the deterministic algorithm proposed in Pethick et al. (2022), which iteratively projects onto a half-space which is guaranteed to contain the solutions. By defining Hz = z − γFz, the scheme can concisely be written as, z̄k = (I + γA)−1(Hzk) zk+1 = zk − αk(Hzk − Hz̄k), (CEG+) for a particular adaptive choice of αk ∈ (0, 1). With a fair amount of hindsight we choose to replace Hzk with the bias-corrected estimate hk (as defined in Step 2.2 in Algorithm 2), such that the estimate is also reused in the second update. Theorem 7.1. Suppose that Assumptions I to III hold. Moreover, suppose that αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) and the following holds, µ B 1−√α0 1+ √ α0 − α0(1 + 2γ2L2F̂η) + 2ρ γ > 0 (7.1) where η ≥ 1√ α0(1−γ2L2F ) + 1−√α0√ α0 . Consider the sequence (zk)k∈ generated by Algorithm 2. Then, the following estimate holds for all z⋆ ∈ S⋆ E[dist(0,T z̄k⋆ )2] ≤ E[∥z0 − z⋆∥2] + ηE[∥h−1 − Hz−1∥2] +Cγ2σ2F ∑K j=0 α 2 j γ2µ ∑K j=0 α j where C = 1 + 2η(1 + γ2L2 F̂ ) + 2α0η and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 3. The condition on ρ in (7.1) reduces to ρ > −γ/2 when α0 → 0 as in the deterministic case. As oppose to Theorem 6.3 which tracks ∥Fzk∥2, the convergence measure of Theorem 7.1 reduces to dist(0,T z̄k)2 = ∥Fz̄k∥2 when A ≡ 0. Since Algorithm 1 and Algorithm 2 coincide when A ≡ 0, Theorem 7.1 also applies to Algorithm 1 in the unconstrained case. Consequently, we obtain rates for both ∥Fz̄k∥2 and ∥Fzk∥2 in the unconstrained smooth case. 8 Asymmetric & nonlinear preconditioning In this section we show that the family of stochastic algorithms which converges under weak MVI can be expanded beyond Algorithm 2. 
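Before presenting this extension, the following sketch illustrates Algorithm 2 (BC-PSEG+) itself, with the resolvent supplied by the user, for instance a Euclidean projection when A is the normal cone of a constraint set, or a proximal operator when A = ∂g. The initialization of h⁻¹ and all helper names are illustrative choices, not prescriptions from the analysis.

```python
import numpy as np

def bc_pseg_plus(F_hat, sample_xi, resolvent, z0, gamma, alpha0=0.5, r=1, num_iter=10_000):
    """Sketch of BC-PSEG+ (Algorithm 2).

    resolvent(h) : evaluates (id + gamma * A)^{-1}(h), e.g. a Euclidean projection
                   onto a convex set C when A is the normal cone of C.
    """
    z_prev = z0.copy()
    z = z0.copy()
    h_prev = z0 - gamma * F_hat(z0, sample_xi())   # one possible choice of h^{-1}
    for k in range(num_iter):
        alpha_k = alpha0 / np.sqrt(k + r)
        xi = sample_xi()
        # Step 2.2: bias-corrected estimate of H z^k = z^k - gamma * F z^k.
        h = (z - gamma * F_hat(z, xi)) \
            + (1 - alpha_k) * (h_prev - (z_prev - gamma * F_hat(z_prev, xi)))
        # Step 2.3: resolvent step.
        zbar = resolvent(h)
        xi_bar = sample_xi()
        # Step 2.5: update reusing the same estimate h.
        z_next = z - alpha_k * (h - zbar + gamma * F_hat(zbar, xi_bar))
        z_prev, h_prev, z = z, h, z_next
    return z

# Example resolvent: projection onto the Euclidean ball of radius R.
def project_ball(h, R=1.0):
    norm = np.linalg.norm(h)
    return h if norm <= R else (R / norm) * h
```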
This is achieved by extending (CEG+) through introducing Algorithm 3 Nonlinearly preconditioned primal dual extragradient (NP-PDEG) for solving (8.5) Require z−1 = z0 = (x0, y0) with x0, x−1, x̂−1, x̄−1 ∈ n, y0, y−1 ∈ r, θ ∈ [0,∞), Γ1 ≻ 0, Γ2 ≻ 0 Repeat for k = 0, 1, . . . until convergence 3.1: ξk ∼ P 3.2: x̂k = xk − Γ1∇xφ̂(zk, ξk) + (1 − αk) ( x̂k−1 − xk−1 + Γ1∇xφ̂(xk−1, yk−1, ξk) ) 3.3: x̄k = proxΓ −1 1 f ( x̂k ) 3.4: ξ′k ∼ P 3.5: ŷk = yk + Γ2 ( θ∇yφ̂(x̄k, yk, ξ′k) + (1 − θ)∇yφ̂(zk, ξk) ) 3.6: +(1 − αk) ( ŷk−1 − yk−1 − Γ2 ( θ∇yφ̂(x̄k−1, yk−1, ξ′k) + (1 − θ)∇yφ̂(zk−1, ξk) )) 3.7: ȳk = proxΓ −1 2 g ( ŷk ) 3.8: ξ̄k ∼ P 3.9: xk+1 = xk + αk ( x̄k − x̂k − Γ1∇xφ̂(z̄k, ξ̄k) ) 3.10: yk+1 = yk + αk ( ȳk − ŷk + Γ2∇yφ̂(z̄k, ξ̄k) ) Return zk+1 = (xk+1, yk+1) a nonlinear and asymmetrical preconditioning. Asymmetrical preconditioning has been used in the literature to unify a large range of algorithm in the monotone setting Latafat & Patrinos (2017). A subtle but crucial difference, however, is that the preconditioning considered here depends nonlinearly on the current iterate. As it will be shown in Section 8.1 this nontrivial feature is the key for showing convergence for primal-dual algorithms in the nonmonotone setting. Consider the following generalization of (CEG+) by introducing a potentially asymmetric nonlinear preconditioning Pzk that depends on the current iterate zk. find z̄k such that Hzk (zk) ∈ Pzk (z̄k) + A(z̄k), (8.1a) update zk+1 = zk + αΓ ( Hzk (z̄k) − Hzk (zk) ) . (8.1b) where Hu(v) B Pu(v) − F(v) and Γ is some positive definite matrix. The iteration independent and diagonal choice Pzk = γ−1I and Γ = γI correspond to the basic (CEG+). More generally we consider Pu(z) B Γ−1z + Qu(z) (8.2) where Qu(z) captures the nonlinear and asymmetric part, which ultimately enables alternating updates and relaxing the Lipschitz conditions (see Remark 8.1(ii)). Notice that the iterates above does not always yield well-defined updates and one must inevitably impose additional structures on the preconditioner (we provide sufficient condition in Appendix F.1). Consistently with (8.2), in the stochastic case we define P̂u(z, ξ) B Γ−1z + Q̂u(z, ξ). (8.3) The proposed stochastic scheme, which introduces a carefully chosen bias-correction term, is summarized as compute hk = P̂zk (zk, ξk) − F̂(zk, ξk) + (1 − αk) ( hk−1 − P̂zk−1 (zk−1, ξk) + F̂(zk−1, ξk) (8.4a) − Q̂zk−1 (z̄k−1, ξ′k−1) + Q̂zk−1 (z̄k−1, ξ′k) ) with ξk, ξ′k ∼ P find z̄k such that hk ∈ P̂zk (z̄k, ξ′k) + Az̄k (8.4b) update zk+1 = zk + αkΓ ( P̂zk (z̄k, ξ̄k) − F̂(z̄k, ξ̄k) − hk ) with ξ̄k ∼ P (8.4c) Remark 4. The two additional terms in (8.4a) are due to the interesting interplay between weak MVI and stochastic feedback, which forces a change of variables (see Appendix F.4). To make a concrete choice of Q̂u(z, ξ) we will consider a minimax problem as a motivating example (see Appendix F.1 for a more general setup). 8.1 Nonlinearly preconditioned primal dual hybrid gradient We consider the problem of minimize x∈ n maximize y∈ r f (x) + φ(x, y) − g(y). (8.5) where φ(x, y) := Eξ[φ̂(x, y, ξ)]. The first order optimality conditions may be written as the inclusion 0 ∈ Tz B Az + Fz, where A = (∂ f , ∂g), F(z) = (∇xφ(z),−∇yφ(z)), (8.6) while the algorithm only has access to the stochastic estimates F̂(z, ξ) B (∇xφ̂(z, ξ),−∇yφ̂(z, ξ)). Assumption IV. 
For problem (8.5), let the following hold with a stepsize matrix Γ = blkdiag(Γ1,Γ2) where Γ1 ∈ n and Γ2 ∈ r are symmetric positive definite matrices: (i) f , g are proper lsc convex (ii) φ : n+r → is continuously differentiable and for some symmetric positive definite matrices Dxx,Dxy,Dyx,Dyy, the following holds for all z = (x, y), z′ = (x′, y′) ∈ n+r ∥∇xφ(z′) − ∇xφ(z)∥2Γ1 ≤ L 2 xx∥x′ − x∥2Dxx + L 2 xy∥y′ − y∥2Dxy , ∥∇yφ(z′) − θ∇yφ(x′, y) − (1 − θ)∇yφ(z)∥2Γ2 ≤ L 2 yx∥x′ − x∥2Dyx + L 2 yy∥y′ − y∥2Dyy . (iii) Stepsize condition: L2xxDxx + L 2 yxDyx ≺ Γ−11 and L2xyDxy + L2yyDyy ≺ Γ−12 . (iv) Bounded variance: Eξ [ ∥F̂(z, ξ) − F̂(z′, ξ)∥2 Γ ] ≤ σ2F ∀z, z′ ∈ n. (v) φ̂(·, ξ) : n+r → is continuously differentiable and for some symmetric positive definite matrices Dx̂z,Dŷz,Dŷx,Dŷy, the following holds for all z = (x, y), z′ = (x′, y′) ∈ n+r and v, v′ ∈ n for θ ∈ [0,∞): Eξ [ ∥∇xφ̂(z′, ξ) − ∇xφ̂(z, ξ)∥2Γ1 ] ≤ L2x̂z∥z ′ − z∥2Dx̂z if θ , 1: Eξ [ ∥∇yφ̂(z, ξ) − ∇yφ̂(z′, ξ)∥2Γ2 ] ≤ L2ŷz∥z ′ − z∥2Dŷz if θ , 0: Eξ [ ∥∇yφ̂(v′, y′, ξ) − ∇yφ̂(v, y, ξ)∥2Γ2 ] ≤ L2ŷx∥v ′ − v∥2Dŷx + L 2 ŷy∥y ′ − y∥2Dŷy . Remark 8.1. In Algorithm 3 the choice of θ ∈ [0,∞) leads to different algorithmic oracles and underlying assumptions in terms of Lipschitz continuity in Assumptions IV(ii) and IV(v). (i) If θ = 0 then the first two steps may be computed in parallel and we recover Algorithm 2. Moreover, to ensure Assumption IV(ii) in this case it suffices to assume for Lx, Ly ∈ [0,∞), ∥∇xφ(z′) − ∇xφ(z)∥ ≤ Lx∥z′ − z∥, ∥∇yφ(z′) − ∇yφ(z)∥ ≤ Ly∥z′ − z∥. (ii) Taking θ = 1 leads to Gauss-Seidel updates and a nonlinear primal dual extragradient algorithm with sufficient Lipschitz continuity assumptions for some Lx, Ly ∈ [0,∞), ∥∇xφ(z′) − ∇xφ(z)∥ ≤ Lx∥z′ − z∥, ∥∇yφ(z′) − ∇yφ(x′, y)∥ ≤ Ly∥y′ − y∥. Algorithm 3 is an application of (8.4) applied for solving (8.6). In order to cast the algorithm as an instance of the template algorithm (8.4), we choose the positive definite stepsize matrix as Γ = blkdiag(Γ1,Γ2) with Γ1 ≻ 0, Γ2 ≻ 0, and the nonlinear part of the preconditioner as Q̂u(z̄, ξ) B ( 0,−θ∇yφ̂(x̄, y, ξ) ) , and Qu(z̄) B ( 0,−θ∇yφ(x̄, y) ) (8.7) where u = (x, y) and z̄ = (x̄, ȳ). Recall Hu(z) B Pu(z) − F(z) and define S u(z; z̄) B Hu(z) − Qu(z̄). The convergence in Theorem 8.2 depends on the distance between the initial estimate Γ−1ẑ−1 with ẑ−1 = (x̂−1, ŷ−1) and the deterministic S z−1 (z−1; z̄−1). See Appendix B for additional notation. Theorem 8.2. Suppose that Assumption I(iii) to II(ii) and IV hold. Moreover, suppose that αk ∈ (0, 1), θ ∈ [0,∞) and the following holds, µ B 1−√α0 1+ √ α0 + 2ρ γ̄ − α0 − 2α0(ĉ1 + 2ĉ2(1 + ĉ3))η > 0 and 1 − 4ĉ2α0 > 0 (8.8) where γ̄ denotes the smallest eigenvalue of Γ, η ≥ (1 + 4ĉ2α20)( 1√ α0(1−LM )2 + 1−√α0√ α0 )/(1 − 4ĉ2α0) and ĉ1 B L2x̂z∥ΓDx̂z∥ + 2(1 − θ) 2L2ŷz∥ΓDŷz∥ + 2θ 2L2ŷy∥Γ2Dŷy∥, ĉ2 B 2θ 2L2ŷx∥Γ1Dŷx∥, ĉ3 B L 2 x̂z∥ΓDx̂z∥, L2M B max { L2xx∥DxxΓ1∥ + L2yx∥DyxΓ1∥, ∥L2xy∥DxyΓ2∥ + L2yy∥DyyΓ2∥ } . Consider the sequence (zk)k∈ generated by Algorithm 3. Then, the following holds for all z ⋆ ∈ S⋆ E[distΓ(0,T z̄k⋆ )2] ≤ E[∥z0 − z⋆∥2 Γ−1 ] + ηE[∥Γ−1ẑ−1 − S z−1 (z−1; z̄−1)∥2Γ] +Cσ2F ∑K j=0 α 2 j µ ∑K j=0 α j where C B 2(η+α0( 1√α0(1−LM )2 + 1−√α0√ α0 ))(1+ 2ĉ2)+ 1+ 2(ĉ1 + 2ĉ2(Θ+ ĉ3))η with Θ = (1− θ)2 + 2θ2 and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 5. When α0 → 0 the conditions in (8.2) reduces to 1 + 2ργ̄ > 0 as in the deterministic case. For θ = 0 Algorithm 3 reduces to Algorithm 2. 
With this choice Theorem 8.2 simplifies, since the constant ĉ2 = 0, and we recover the convergence result of Theorem 7.1.

9 Experiments

We compare BC-SEG+ and BC-PSEG+ against (EG+) using stochastic feedback (which we refer to as (SF-EG+)) and (SEG) in both an unconstrained setting and a constrained setting introduced in Pethick et al. (2022). See Appendix H.2 for the precise formulation of the projected variants, which we denote (SF-PEG+) and (PSEG) respectively. In the unconstrained example we control all problem constants and set ρ = −1/(10LF), while the constrained example is a specific minimax problem where ρ > −1/(2LF) holds within the constrained set for a Lipschitz constant LF restricted to the same constrained set. To simulate a stochastic setting in both examples, we consider additive Gaussian noise, i.e. F̂(z, ξ) = Fz + ξ where ξ ∼ N(0, σ²I). In the experiments we choose σ = 0.1 and αk ∝ 1/k, which ensures almost sure convergence of BC-(P)SEG+. For a more aggressive stepsize choice αk ∝ 1/√k see Figure 4. Further details can be found in Appendix H. The results are shown in Figure 2. The sequences generated by (SEG) and (PSEG) diverge for the unconstrained problem and cycle in the constrained problem, respectively. In comparison, (SF-EG+) and (SF-PEG+) get within a neighborhood of the solutions but fail to converge due to the nondiminishing stepsize, while BC-SEG+ and BC-PSEG+ converge in both examples.

10 Conclusion

This paper shows that nonconvex-nonconcave problems characterized by the weak Minty variational inequality can be solved efficiently even when only stochastic gradients are available. The approach crucially avoids increasing batch sizes by instead introducing a bias-correction term. We show that convergence is possible for the same range of the problem constant ρ ∈ (−γ/2, ∞) as in the deterministic case. Rates are established for a random iterate, which match those of stochastic extragradient in the monotone case, and the result is complemented with almost sure convergence, thus providing asymptotic convergence for the last iterate. We show that the idea extends to a family of extragradient-type methods which includes a nonlinear extension of the celebrated primal dual hybrid gradient (PDHG) algorithm. For future work it is interesting to see whether the rate can be improved by considering accelerated methods such as Halpern iterations.

11 Acknowledgments and disclosure of funding

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n° 725594 - time-data). This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. The work of the third and fourth author was supported by the Research Foundation Flanders (FWO) postdoctoral grant 12Y7622N and research projects G081222N, G033822N, G0A0920N; Research Council KU Leuven C1 project No. C14/18/068; European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 953348. The work of Olivier Fercoq was supported by the Agence National de la Recherche grant ANR-20-CE40-0027, Optimal Primal-Dual Algorithms (APDO).

References

Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods. arXiv preprint arXiv:2102.08352, 2021.

Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization.
arXiv preprint arXiv:1912.02365, 2019. Heinz H. Bauschke and Patrick L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics. Springer, 2017. ISBN 978-3-319-48310-8. Heinz H Bauschke, Walaa M Moursi, and Xianfu Wang. Generalized monotone operators and their averaged resolvents. Mathematical Programming, 189(1):55–74, 2021. Dimitri P. Bertsekas. Incremental proximal methods for large scale convex optimization. Mathematical programming, 129(2):163–195, 2011. Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, and Nicolas Loizou. Stochastic gradient descent-ascent: Unified theory and new efficient methods. arXiv preprint arXiv:2202.07262, 2022. Axel Böhm. Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions. arXiv preprint arXiv:2201.12247, 2022. Radu Ioan Boţ, Panayotis Mertikopoulos, Mathias Staudigl, and Phan Tu Vuong. Minibatch forwardbackward-forward methods for solving stochastic variational inequalities. Stochastic Systems, 11 (2):112–139, 2021. Xufeng Cai, Chaobing Song, Cristóbal Guzmán, and Jelena Diakonikolas. A stochastic Halpern iteration with variance reduction for stochastic monotone inclusion problems. arXiv preprint arXiv:2203.09436, 2022. A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011. Tianyi Chen, Yuejiao Sun, and Wotao Yin. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. IEEE Transactions on Signal Processing, 69:4937– 4948, 2021. Patrick L Combettes and Teemu Pennanen. Proximal methods for cohypomonotone operators. SIAM journal on control and optimization, 43(2):731–742, 2004. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained min-max optimization. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 1466–1478, 2021. Jelena Diakonikolas, Constantinos Daskalakis, and Michael Jordan. Efficient methods for structured nonconvex-nonconcave min-max optimization. In International Conference on Artificial Intelligence and Statistics, pp. 2746–2754. PMLR, 2021. Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in Neural Information Processing Systems, 31, 2018. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013. Eduard Gorbunov, Hugo Berard, Gauthier Gidel, and Nicolas Loizou. Stochastic extragradient: General analysis and improved rates. In International Conference on Artificial Intelligence and Statistics, pp. 7865–7901. PMLR, 2022. M Hirsch and S Vavasis. Exponential lower bounds for finding Brouwer fixed points. In Proceedings of the 28th Symposium on Foundations of Computer Science, pp. 401–410, 1987. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. On the convergence of single-call stochastic extra-gradient methods. Advances in Neural Information Processing Systems, 32, 2019. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. arXiv preprint arXiv:2003.10162, 2020. Anatoli Juditsky, Arkadi Nemirovski, and Claire Tauvel. 
Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems, 1(1):17–58, 2011. Aswin Kannan and Uday V Shanbhag. Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants. Computational Optimization and Applications, 74(3):779–820, 2019. Puya Latafat and Panagiotis Patrinos. Asymmetric forward–backward–adjoint splitting for solving monotone inclusions involving three operators. Computational Optimization and Applications, 68(1):57–93, Sep 2017. Sucheol Lee and Donghwan Kim. Fast extra gradient methods for smooth structured nonconvexnonconcave minimax problems. arXiv preprint arXiv:2106.02326, 2021. Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, and Michael Jordan. On the convergence of stochastic extragradient for bilinear games using restarted iteration averaging. In International Conference on Artificial Intelligence and Statistics, pp. 9793–9826. PMLR, 2022. Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, and Ioannis Mitliagkas. Stochastic hamiltonian gradient methods for smooth games. In International Conference on Machine Learning, pp. 6370–6381. PMLR, 2020. Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, and Simon Lacoste-Julien. Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity. Advances in Neural Information Processing Systems, 34: 19095–19108, 2021. Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, and Yura Malitsky. Revisiting stochastic extragradient. In International Conference on Artificial Intelligence and Statistics, pp. 4573–4582. PMLR, 2020. Lam M Nguyen, Marten van Dijk, Dzung T Phan, Phuong Ha Nguyen, Tsui-Wei Weng, and Jayant R Kalagnanam. Finite-sum smooth optimization with SARAH. arXiv preprint arXiv:1901.07648, 2019. Thomas Pethick, Puya Latafat, Panagiotis Patrinos, Olivier Fercoq, and Volkan Cevher. Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems. In International Conference on Learning Representations, 2022. Ralph Tyrell Rockafellar. Convex analysis. Princeton University Press, 1970. Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, and Yi Ma. Optimistic dual extrapolation for coherent non-monotone variational inequalities. Advances in Neural Information Processing Systems, 33:14303–14314, 2020. P. Tseng. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization, 38(2):431–446, 2000. Junchi Yang, Negar Kiyavash, and Niao He. Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems. arXiv preprint arXiv:2002.09621, 2020. Appendix Table of Contents A Prelude 14 B Preliminaries 14 C Proof for SEG+ 15 Proof of Theorem 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 D Proof for smooth unconstrained case 16 Proof of Theorem D.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Proof of Theorem 6.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Proof of Theorem D.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Proof of Theorem 6.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 E Proof for constrained case 21 Proof of Theorem E.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 21 Proof of Theorem 7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 F Proof for NP-PDEG through a nonlinear asymmetric preconditioner 23 F.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 F.2 Deterministic lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 F.3 Stochastic results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Proof of Theorem F.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Proof of Theorem 8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 F.4 Explanation of bias-correction term . . . . . . . . . . . . . . . . . . . . . . . . . 30 G Negative weak Minty variational inequality 31 H Experiments 32 H.1 Synthetic example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 H.2 Additional algorithmic details . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 I Comparison with variance reduction 34 A Prelude For the unconstrained and smooth setting Appendix C treats convergences of (SEG+) for the restricted case where F is linear. Appendix D shows both random iterate results and almost sure convergence of Algorithm 1. Theorems 6.1 and 6.3 in the main body are implied by the more general results in this section, which preserves certain free parameters and more general stepsize requirements. Appendices E and F moves beyond the unconstrained and smooth case by showing convergence for instances of the template scheme (8.1). The analysis of Algorithm 3 in Appendix F applies to Algorithm 2, but for completeness we establish convergence for general F separately in Appendix E. The relationship between the theorems are presented in Table 1. B Preliminaries Given a psd matrix V we define the inner product as ⟨·, ·⟩V B ⟨·,V ·⟩ and the corresponding norm ∥ · ∥ B √ ⟨·, ·⟩V . The distance from u ∈ n to a setU ⊆ n with respect to a positive definite matrix V is defined as distV (u,U) B minu′∈U ∥u − u′∥V , which we simply denote dist(u,U) when V = I. The norm ∥X∥ refers to spectral norm when X is a matrix. We summarize essential definitions from operator theory, but otherwise refer to Bauschke & Combettes (2017); Rockafellar (1970) for further details. An operator A : n ⇒ d maps each point x ∈ n to a subset Ax ⊆ d, where the notation A(x) and Ax will be used interchangably. We denote the domain of A by dom A B {x ∈ n | Ax , ∅}, its graph by gph A B {(x, y) ∈ n × d | y ∈ Ax}. The inverse of A is defined through its graph, gph A−1 B {(y, x) | (x, y) ∈ gph A} and the set of its zeros by zer A B {x ∈ n | 0 ∈ Ax}. Definition B.1 ((co)monotonicity Bauschke et al. (2021)). An operator A : n ⇒ n is said to be ρ-monotone for some ρ ∈ , if for all (x, y), (x′, y′) ∈ gph A ⟨y − y′, x − x′⟩ ≥ ρ∥x − x′∥2, and it is said to be ρ-comonotone if for all (x, y), (x′, y′) ∈ gph A ⟨y − y′, x − x′⟩ ≥ ρ∥y − y′∥2. The operator A is said to be maximally (co)monotone if there exists no other (co)monotone operator B for which gph A ⊂ gph B properly. If A is 0-monotone we simply say it is monotone. When ρ < 0, ρ-comonotonicity is also referred to as |ρ|-cohypomonotonicity. Definition B.2 (Lipschitz continuity and cocoercivity). Let D ⊆ n be a nonempty subset of n. A single-valued operator A : D → n is said to be L-Lipschitz continuous if for any x, x′ ∈ D ∥Ax − Ax′∥ ≤ L∥x − x′∥, and β-cocoercive if ⟨x − x′, Ax − Ax′⟩ ≥ β∥Ax − Ax′∥2. 
Moreover, A is said to be nonexpansive if it is 1-Lipschitz continuous, and firmly nonexpansive if it is 1-cocoercive. A β-cocoercive operator is also β−1-Lipschitz continuity by direct implication of Cauchy-Schwarz. The resolvent operator JA = (id + A)−1 is firmly nonexpansive (with dom JA = n) if and only if A is (maximally) monotone. We will make heavy use of the Fenchel-Young inequality. For all a, b ∈ n and e > 0 we have, 2⟨a, b⟩ ≤ e∥a∥2 + 1e ∥b∥ 2 (B.1) ∥a + b∥2 ≤ (1 + e)∥a∥2 + (1 + 1e )∥b∥ 2 (B.2) −∥a − b∥2 ≤ − 11+e ∥a∥ 2 + 1e ∥b∥ 2 (B.3) C Proof for SEG+ Proof of Theorem 5.1. Following (Hsieh et al., 2020) closely, define the reference state ˜̄zk := zk − γFzk to be the exploration step using the deterministic operator and denote the second stepsize as ηk := αkγ. We will let ζ denote the additive noise term, i.e. F̂(z, ξ) := F(z) + ζ. Expanding the distance to solution, ∥zk+1 − z⋆∥2 = ∥zk − ηkF̂(z̄k, ξ̄k) − z⋆∥2 = ∥zk − z⋆∥2 − 2ηk⟨F̂(z̄k, ξ̄k), zk − z⋆⟩ + η2k∥F̂(z̄k, ξ̄k)∥2 = ∥zk − z⋆∥2 − 2ηk⟨F̂(z̄k, ξ̄k), ˜̄zk − z⋆⟩ − 2γηk⟨F̂(z̄k, ξ̄k), F(zk)⟩ + η2k∥F̂(z̄k, ξ̄k)∥2. (C.1) Recall that the operator is assumed to be linear Fz = Bz + v in which case we have, F̂(z̄k, ξ̄k) = Bz̄k + v + ζ̄k =B(zk − γF̂(zk, ξk)) + v + ζ̄k =B(zk − γBzk − γv − γζk) + v + ζ̄k =B(zk − γ(Bzk + v)) + v − γBζk + ζ̄k =F(˜̄zk) − γBζk + ζ̄k. (C.2) The two latter terms are zero in expectation due to the unbiasedness from Assumption II(ii), which lets us write the terms on the RHS of (C.1) as, −Ek⟨F̂(z̄k, ξ̄k), ˜̄zk − z⋆⟩ = −⟨F(˜̄zk), ˜̄zk − z⋆⟩ (C.3) −Ek⟨F̂(z̄k, ξ̄k), F(zk)⟩ = −⟨F(˜̄zk), F(zk)⟩ (C.4) Ek∥F̂(z̄k, ξ̄k)∥2 = ∥F(˜̄zk)∥2 + Ek∥γBζk∥2 + Ek∥ζ̄k∥2. (C.5) We can bound (C.3) directly through the weak MVI in Assumption I(iii) which might still be positive, −⟨F(˜̄zk), ˜̄zk − z⋆⟩ ≤ −ρ∥F(˜̄zk)∥2. (C.6) For the latter two terms of (C.5) we have Ek∥γBζk∥2 + Ek∥ζ̄k∥2 = γ2Ek∥F(ζk) − F(0)∥2 + Ek∥ζ̄k∥2 ≤ (γ2L2F + 1)σ2F , (C.7) where the last inequality follows from Lipschitz in Assumption I(i) and bounded variance in Assumption II(iii). Combining everything into (C.1) we are left with Ek∥zk+1 − z⋆∥2 ≤ ∥zk − z⋆∥2 + η2k(γ2L2F + 1)σ2F − 2γηk⟨F(˜̄zk), F(zk)⟩ + (η2k − 2ηkρ)∥F(˜̄zk)∥2 (C.8) By assuming the stepsize condition, ρ ≥ (ηk − γ)/2, we have η2k − 2ηkρ ≤ γηk. This allows us to complete the square, −2γηk⟨F(˜̄zk), F(zk)⟩ + (η2k − 2ηkρ)∥F(˜̄zk)∥2 ≤ −2γηk⟨F(˜̄zk), F(zk)⟩ + γηk∥F(˜̄zk)∥2 = γηk(∥F(zk) − F(˜̄zk)∥2 − ∥F(zk)∥2) ≤ γηk(γ2L2F − 1)∥F(zk)∥2, (C.9) where the last inequality follows from Lipschitzness of F and the definition of the update rule. Plugging into (C.8) we are left with Ek∥zk+1 − z⋆∥2 ≤ ∥zk − z⋆∥2 + η2k(γ2L2F + 1)σ2F − γηk(1 − γ2L2F)∥F(zk)∥2. (C.10) The result is obtained by total expectation and summing. D Proof for smooth unconstrained case Lemma D.1. Consider the recurrent relation Bk+1 = ξkBk + dk such that ξk > 0 for all k ≥ 0. Then Bk+1 = ( Πkp=0ξp )B0 + k∑ ℓ=0 dℓ Πℓp=0ξp . Assumption V. γ ∈ (⌊−2ρ⌋+, 1/LF) and for positive real valued b, µ B γ2(1 − γ2L2F(1 + b−1)) > 0. (D.1) Theorem D.2. Suppose that Assumptions I to III hold. Suppose in addition that Assumption V holds and that (αk)k∈ ⊂ (0, 1) is a diminishing sequence such that 2γLF̂ √ α0 + ( 1 + ( (b + 1)γ2L2F ) γ2L2 F̂ ) α0 ≤ 1 + 2ργ . (D.2) Consider the sequence (zk)k∈ generated by Algorithm 1. Then, the following estimate holds K∑ k=0 αk∑K j=0 α j E[∥F(zk)∥2] ≤ ∥z0 − z⋆∥2 + ηγ2∥F(z0)∥2 +Cσ2Fγ2 ∑K j=0 α 2 j µ ∑K j=0 α j , (D.3) where C = 1 + 2η ( (γ2L2 F̂ + 1) + 2α0 ) and η = 12 (b + 1)γ 2L2F + 1 γLF̂ √ α0 . Proof of Theorem D.2. 
The proof relies on establishing a (stochastic) descent property on the following potential function Uk+1 B ∥zk+1 − z⋆∥2 + Ak+1∥uk∥2 + Bk+1∥zk+1 − zk∥2. where uk B z̄k− zk+γF(zk) measures the difference of the bias-corrected step from the deterministic exploration step, and (Ak)k∈ , (Bk)k∈ are positive scalar parameters to be identified. We proceed to consider each term individually. Let us begin by quantifying how well z̄k estimates zk − γF(zk). uk = z̄k − zk + γF(zk) = γF(zk) − γF̂(zk, ξk) + (1 − αk)(z̄k−1 − zk−1 + γF̂(zk−1, ξk)). Therefore, ∥uk∥2 = ∥γF(zk) − γF̂(zk, ξk) + (1 − αk)(γF̂(zk−1, ξk) − γF(zk−1))∥2 + (1 − αk)2∥uk−1∥2 + 2(1 − αk)⟨z̄k−1 − zk−1 + γF(zk−1), γF(zk) − γF̂(zk, ξk) + (1 − αk)(γF̂(zk−1, ξk) − γF(zk−1))⟩. Conditioned on Fk, in the inner product the left term is known and the right term has an expectation that equals zero. Therefore, we obtain E[∥uk∥2 |Fk]=E[∥(1−αk) ( γF(zk)−γF̂(zk,ξk)+γF̂(zk−1,ξk)−γF(zk−1) ) +αk ( γF(zk)−γF̂(zk,ξk) ) ∥2 |Fk] +(1−αk)2∥uk−1∥2 ≤(1−αk)2∥uk−1∥2+2(1−αk)2γ2E[∥F̂(zk,ξk)−F̂(zk−1,ξk)∥2 |Fk] +2α2kγ 2E[∥F(zk)−F̂(zk,ξk)∥2 |Fk] ≤(1−αk)2∥uk−1∥2+2(1−αk)2γ2L2F̂∥z k−zk−1∥2+2α2kγ2σ2F (D.4) where in the first inequality we used Young inequality and the fact that the second moment is larger than the variance, and Assumptions II(iii) and III were used in the second inequality. By step 1.4, the equality ∥zk+1 − z⋆∥2 = ∥zk − z⋆∥2 − 2αkγ⟨F̂(z̄k, ξ̄k), zk − z⋆⟩ + α2kγ2∥F̂(z̄k, ξ̄k)∥2, (D.5) holds. The inner product in (D.5) can be upper bounded using Young inequalities with positive parameters εk, k ≥ 0, and b as follows. E[⟨−γF̂(z̄k, ξ̄k), zk − z⋆⟩ | F̄k] = − γ⟨F(z̄k), zk − z̄k⟩ − γ⟨F(z̄k), z̄k − z⋆⟩ = − γ2⟨F(z̄k), F(zk)⟩ + γ⟨F(z̄k), z̄k − zk + γF(zk)⟩ − γ⟨F(z̄k), z̄k − z⋆⟩ ≤ γ2 (1 2 ∥F(z̄k) − F(zk)∥2 − 1 2 ∥F(z̄k)∥2 − 1 2 ∥F(zk)∥2 ) + γ2εk 2 ∥F(z̄k)∥2 + 1 2εk ∥z̄k − zk + γF(zk)∥2 − γρ∥F(z̄k)∥2 ≤ γ2L2F 1 + b 2 ∥uk∥2 + 1 + b −1 2 γ4L2F∥F(zk)∥2 − γ2 2 ∥F(z̄k)∥2 − γ 2 2 ∥F(zk)∥2 + γ 2εk 2 ∥F(z̄k)∥2 + 1 2εk ∥uk∥2 − γρ∥F(z̄k)∥2 = ( γ2L2F 1 + b 2 + 1 2εk )∥uk∥2 + γ2(γ2L2F(1 + b−1) − 1) 2 ∥F(zk)∥2 + (γ2(εk − 1) 2 − γρ)∥F(z̄k)∥2. (D.6) Conditioning (D.6) with E [· | Fk] = E[E[· | F̄k] | Fk], since Fk ⊂ F̄k, yields 2E[⟨−γF̂(z̄k, ξ̄k), zk − z⋆⟩ | Fk] ≤ ( γ2L2F(1 + b) + 1 εk ) E[∥uk∥2 | Fk] − µ∥F(zk)∥2 + ( γ2(εk − 1) − 2γρ ) E [ ∥F(z̄k)∥2 | Fk ] , (D.7) where µ was defined in (D.1). The condition expectation of the third term in (D.5) is bounded through Assumption II(iii) by E [ ∥F̂(z̄k, ξ̄k)∥2 | Fk ] = E [ E[∥F̂(z̄k, ξ̄k)∥2 | F̄k] | Fk ] ≤ ∥F(z̄k)∥2 + σ2F , which in turn implies E [ ∥zk+1 − zk∥2 | Fk ] = α2kγ 2E [ ∥F̂(z̄k, ξ̄k)∥2 | Fk ] ≤ α2kγ2E [ ∥Fz̄k∥2 | Fk ] + α2kγ 2σ2F (D.8) Combining (D.7), (D.8), and (D.5) yields E[∥zk+1 − z⋆∥2 + Ak+1∥uk∥2 + Bk+1∥zk+1 − zk∥2 | Fk] ≤ ∥zk − z⋆∥2 + ( Ak+1 + αk ( γ2L2F(1 + b) + 1 εk )) E[∥uk∥2 | Fk] − αkµ∥F(zk)∥2 + ( αk ( γ2(εk − 1) − 2γρ ) + α2kγ 2 ) E [ ∥F(z̄k)∥2 | Fk ] + α2kγ 2σ2F + Bk+1α2kγ 2E [ ∥Fz̄k∥2 | Fk ] + Bk+1α2kγ 2σ2F . (D.9) Further using (D.4) and denoting Xk1 B αk ( γ2L2F(1 + b) + 1 εk ) + Ak+1, Xk2 B αk ( γ2(εk − 1) − 2ργ + αk γ2 ) leads to E[Uk+1 | Fk] −Uk ≤ − αkµ∥F(zk)∥2 + ( Xk1(1 − αk)2 − Ak ) ∥uk−1∥2 + ( 2Xk1(1 − αk)2γ2L2F̂ − Bk ) ∥zk − zk−1∥2 + ( Xk2 + Bk+1α 2 kγ 2 ) E [ ∥F(z̄k)∥2 | Fk ] + ( Bk+1α2k + α 2 k + 2X k 1α 2 k ) γ2σ2F . 
(D.10) Having established (D.10), set Ak = A, Bk = 2Aγ2L2F̂ , and εk = ε to obtain by the law of total expectation that E[Uk+1] − E[Uk] ≤ − αkµE [ ∥F(zk)∥2 ] + ( Xk1(1 − αk)2 − A ) E [ ∥uk−1∥2 ] + 2γ2L2 F̂ ( Xk1(1 − αk)2 − A ) E [ ∥zk − zk−1∥2 ] + ( Xk2 + 2Aγ 4L2 F̂ α2k ) E [ ∥F(z̄k)∥2 ] + ( 2Aγ2L2 F̂ + 1 + 2Xk1 ) α2kγ 2σ2F . (D.11) To get a recursion we require Xk1(1 − αk)2 − A ≤ 0 and Xk2 + 2Aγ4L2F̂α 2 k ≤ 0. (D.12) By developing the first requirement of (D.12) we have, 0 ≥ Xk1(1 − αk)2 − A = αk(1 − αk)2 ( γ2L2F(1 + b) + 1 ε ) + αk(αk − 2)A. (D.13) Equivalently, A needs to satisfy A ≥ (1 − αk) 2 2 − αk ( γ2L2F(1 + b) + 1 ε ) . (D.14) for any αk ∈ (0, 1). Since (1−αk) 2 2−αk ≤ 1 2 given αk ∈ (0, 1) it suffice to pick A = 12 ( (b + 1)γ2L2F + 1 ε ) . (D.15) For the second requirement of (D.12) note that we can equivalently require that the following quantity is negative 1 αkγ2 ( Xk2 + 2Aγ 4L2 F̂ α2k ) = ε − 1 − 2ρ γ + αk + 2Aγ2L2F̂αk ≤ ε − 1 − 2ρ γ + ( 1 + ( (b + 1)γ2L2F + 1 ε ) γ2L2 F̂ ) α0 where we have used that αk ≤ α0 and the choice of A from (D.15). Setting the Young parameter ε = γLF̂ √ α0 we obtain that Xk2 + 2Aγ 4L2 F̂ α2k ≤ 0 owing to (D.2). On the other hand, the last term in (D.11) may be upper bounded by 2Aγ2L2 F̂ + 1 + 2Xk1 = 1 + ( (b + 1)γ2L2F + 1 γLF̂ √ α0 )( (γ2L2 F̂ + 1) + 2αk ) ≤ 1 + ( (b + 1)γ2L2F + 1 γLF̂ √ α0 )( (γ2L2 F̂ + 1) + 2α0 ) = C. Thus, it follows from (D.11) that E[Uk+1] − E[Uk] ≤ − αkµE [ ∥F(zk)∥2 ] +Cα2kγ 2σ2F . Telescoping the above inequality completes the proof. Proof of Theorem 6.1. The theorem is obtained as a particular instantiation of Theorem D.2. The condition in (D.1) can be rewritten as b > γ 2L2F 1−γ2L2F . A reasonable choice is b = 2γ 2L2F 1−γ2L2F . Substituting back into µ we obtain µ = γ2(1 − γ2L2F(1 + 1−γ2L2F 2γ2L2F )) = γ 2(1−γ2L2F ) 2 > 0. (D.16) Similarly, the choice of b is substituted into η and (D.2) of Theorem D.2. The rate in (D.2) is further simplified by applying Lipschitz continuity of F from Assumption I(i) to ∥Fz0∥2 = ∥Fz0 − Fz⋆∥2. The proof is complete by observing that the guarantee on the weighted sum can be converted into an expectation over a sampled iterate in the style of Ghadimi & Lan (2013). Assumption VI (almost sure convergence). Let d ∈ [0, 1], b > 0. Suppose that the following holds (i) the diminishing sequence (αk
1. What is the main contribution of the paper regarding nonconvex-nonconcave problems?
2. What are the strengths of the proposed BCSEG+ algorithm compared to prior works?
3. Do you have any concerns or suggestions regarding the literature review or terminology usage in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
For the first time, the authors introduce a family of stochastic extragradient-type algorithms that solve a class of nonconvex-nonconcave problems which can be cast as a stochastic weak Minty variational inequality (MVI). In the monotone setting, extragradient methods adopt constant stepsizes and bounded batch sizes (both of which are critical for practical performance), and when extending to the weak MVI setting, the only available analyses rely on expensive, per-iteration increasing batch sizes.

Strengths And Weaknesses
Strengths
This work answers affirmatively an open problem by proposing a bias-corrected stochastic extragradient (BCSEG+) algorithm that solves stochastic weak Minty variational inequalities without increasing the batch size. As the authors indicated, Pethick et al. (2022) "suffices in the special case of unconstrained quadratic games but can fail even in the monotone case ...". Also, earlier works such as Hsieh et al. (2020) adopt diminishing stepsizes, with a larger exploration stepsize and a smaller updating stepsize.

Weaknesses
There is not much from my perspective, as long as the proof is correct (which I took a high-level look at but did not go into all details). Two small comments:
- MVI can be short for "monotone" variational inequality instead of "Minty" variational inequality. Adopting this shorthand as in some earlier work might cause unnecessary confusion. Therefore, I would suggest the authors avoid this shorthand as much as possible.
- The authors should do a more thorough literature review. Missing references include, but are not limited to, "Bot et al., Minibatch Forward-Backward-Forward Methods for Solving Stochastic Variational Inequalities, 2021".

Clarity, Quality, Novelty And Reproducibility
The authors did a good job in all these given aspects.
(2020) with a double stepsize policy. There exists a multitude of extensions for monotone problems: Single-call stochastic methods are covered in detail by Hsieh et al. (2019), variance reduction was applied to Halpern-type iterations (Cai et al., 2022), cocoercivity was used in Beznosikov et al. (2022), and bilinear games studied in Li et al. (2022). Beyond monotonicity, a range of structures have been explored such as MVIs (Song et al., 2020), pseudomonotonicity (Kannan & Shanbhag, 2019; Boţ et al., 2021), two-sided Polyak-Łojasiewicz condition (Yang et al., 2020), expected cocoercivity (Loizou et al., 2021), sufficiently bilinear (Loizou et al., 2020), and strongly star-monotone (Gorbunov et al., 2022). Variance reduction The assumptions we make about the stochastic oracle in Section 3 are similar to what is found in the variance reduction literature (see for instance Alacaoglu & Malitsky (2021, Assumption 1) or Arjevani et al. (2019)). However, our use of the assumption are different in a crucial way. Whereas the variance reduction literature uses the stepsize γ ∝ 1/LF̂ (see e.g. Alacaoglu & Malitsky (2021, Theorem 2.5)), we aim at using the much larger γ ∝ 1/LF . For instance, in the special case of a finite sum problem of size N, the mean square smoothness constant LF̂ from Assumption III can be √ N times larger than LF (see Appendix I for details). This would lead to a prohibitively strict requirement on the degree of allowed nonmonotonicity through the relationship ρ > −γ/2. Bias-correction The idea of adding a correction term has also been exploited in minimization, specifically in the context of compositional optimization Chen et al. (2021). Due to their distinct problem setting it suffices to simply extend stochastic gradient descent (SGD), albeit under additional assumptions such as (Chen et al., 2021, Assumption 3). In our setting, however, SGD is not possible even when restricting ourselves to monotone problems. 3 Problem formulation and preliminaries We are interested in finding z ∈ n such that the following inclusion holds, 0 ∈ Tz := Az + Fz. (3.1) A wide range of machine learning applications can be cast as an inclusion. Most noticeable, a structured minimax problem can be reduced to (3.1) as shown in Section 8.1. We will rely on common notation and concepts from monotone operators (see Appendix B for precise definitions). Assumption I. In problem (3.1), (i) The operator F : n → n is LF-Lipschitz with LF ∈ [0,∞), i.e., ∥Fz − Fz′∥ ≤ LF∥z − z′∥ ∀z, z′ ∈ n. (3.2) (ii) The operator A : n ⇒ n is a maximally monotone operator. (iii) Weak Minty variational inequality (MVI) holds, i.e., there exists a nonempty set S⋆ ⊆ zer T such that for all z⋆ ∈ S⋆ and some ρ ∈ (− 12LF ,∞) ⟨v, z − z⋆⟩ ≥ ρ∥v∥2, for all (z, v) ∈ gph T. (3.3) Remark 1. In the unconstrained and smooth case (A ≡ 0), Assumption I(iii) reduces to ⟨Fz, z−z⋆⟩ ≥ ρ∥Fz∥2 for all z ∈ n. When ρ = 0 this condition reduces to the MVI (i.e. star-monotonicity), while negative ρ makes the problem increasingly nonmonotone. Interestingly, the inequality is not symmetric and one may instead consider that the assumption holds for −F. Through this observation, Appendix G extends the reach of the extragradient-type algorithms developed for weak MVIs. Stochastic oracle We assume that we cannot compute Fz easily, but instead we have access to the stochastic oracle F̂(z, ξ), which we assume is unbiased with bounded variance. 
We additionally assume that z 7→ F̂(z, ξ) is LF̂ Lipschitz continuous in mean and that it can be simultaneously queried under the same randomness. Assumption II. For the operator F̂(·, ξ) : n → n the following holds. (i) Two-point oracle: The stochastic oracle can be queried for any two points z, z′ ∈ n, F̂(z, ξ), F̂(z′, ξ) where ξ ∼ P. (3.4) (ii) Unbiased: Eξ [ F̂(z, ξ) ] = Fz ∀z ∈ n. (iii) Bounded variance: Eξ [ ∥F̂(z, ξ) − F̂(z)∥2 ] ≤ σ2F ∀z ∈ n. Assumption III. The operator F̂(·, ξ) : n → n is Lipschitz continuous in mean with LF̂ ∈ [0,∞): Eξ [ ∥F̂(z, ξ) − F̂(z′, ξ)∥2 ] ⩽ L2 F̂ ∥z − z′∥2 for all z, z′ ∈ n. (3.5) Remark 2. Assumptions II(i) and III are also common in the variance reduction literature (Fang et al., 2018; Nguyen et al., 2019; Alacaoglu & Malitsky, 2021), but in contrast with variance reduction we will not necessarily need knowledge of LF̂ to specify the algorithm, in which case the problem constant will only affect the complexity. Crucially, this decoupling of the stepsize from LF̂ will allow the proposed scheme to converge for a larger range of ρ in Assumption I(iii). Finally, note that Assumption II(i) commonly holds in machine learning applications, where usually the stochasticity is induced by the sampled mini-batch. 4 Method To arrive at a stochastic scheme for weak MVI we first need to understand the crucial ingredients in the deterministic setting. For simplicity we will initially consider the unconstrained and smooth Algorithm 1 (BC-SEG+) Stochastic algorithm for problem (3.1) when A ≡ 0 Require z−1 = z̄−1 = z0 ∈ n αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) Repeat for k = 0, 1, . . . until convergence 1.1: Sample ξk ∼ P 1.2: z̄k = zk − γF̂(zk, ξk) + (1 − αk) ( z̄k−1 − zk−1 + γF̂(zk−1, ξk) ) 1.3: Sample ξ̄k ∼ P 1.4: zk+1 = zk − αkγF̂(z̄k, ξ̄k) Return zk+1 setting, i.e. A ≡ 0 in (3.1). The first component is taking the second stepsize α smaller as done in extragradient+ (EG+), z̄k = zk − γFzk zk+1 = zk − αγFz̄k (EG+) where α ∈ (0, 1). Convergence in weak MVI was first shown in Diakonikolas et al. (2021) and later tightened by Pethick et al. (2022), who characterized that smaller α allows for a larger range of the problem constant ρ. Taking α small is unproblematic for a stochastic scheme where usually the stepsize is taken diminishing regardless. However, Pethick et al. (2022) also showed that the extrapolation stepsize γ plays a critical role for convergence under weak MVI. Specifically, they proved that a larger stepsize γ leads to a looser bound on the problem class through ρ > −γ/2. While a lower bound has not been established we provide an example in Figure 3 of Appendix H where small stepsize prevents convergence. Unfortunately, picking γ large (e.g. as γ = 1/LF) causes significant complications in the stochastic case where both stepsizes are usually taken to be diminishing as in the following scheme, z̄k = zk − βkγF̂(zk, ξk) with ξk ∼ P zk+1 = zk − αkγF̂(z̄k, ξ̄k) with ξ̄k ∼ P (SEG) where αk = βk ∝ 1/k. Even with a two-timescale variant (when βk > αk) it has only been possible to show convergence for MVI (i.e. when ρ = 0) (Hsieh et al., 2020). Instead of decreasing both stepsizes, Pethick et al. (2022) proposes a scheme that keeps the first stepsize constant, z̄k = zk − γF̂(zk, ξk) with ξk ∼ P zk+1 = zk − αkγF̂(z̄k, ξ̄k) with ξ̄k ∼ P (SEG+) However, (SEG+) does not necessarily converge even in the monotone case as we illustrate in Figure 1. The non-convergence stems from the bias term introduced by the randomness of z̄k in F̂(z̄k, ξ̄k). 
Intuitively, the role of z̄k is to approximate the deterministic exploration step ˜̄zk := zk − γFzk. While z̄k is an unbiased estimate of ˜̄zk this does not imply that F̂(z̄k, ξ̄k) is an unbiased estimate of F(˜̄zk). Unbiasedness only holds in special cases, such as when F is linear and A ≡ 0 for which we show convergence of (SEG+) in Section 5 under weak MVI. In the monotone case it suffice to take the exploration stepsize γ diminishing (Hsieh et al., 2020, Thm. 1), but this runs counter to the fixed stepsize requirement of weak MVI. Instead we propose bias-corrected stochastic extragradient+ (BC-SEG+) in Algorithm 1. BC-SEG+ adds a bias correction term of the previous operator evaluation using the current randomness ξk. This crucially allows us to keep the first stepsize fixed. We further generalize this scheme to constrained and regularized setting with Algorithm 2 by introducing the use of the resolvent, (id + γA)−1. 5 Analysis of SEG+ In the special case where F is affine and A ≡ 0 we can show convergence of (SEG+) under weak MVI up to arbitrarily precision even with a large stepsize γ. Theorem 5.1. Suppose that Assumptions I and II hold. Assume Fz := Bz + v and choose αk ∈ (0, 1) and γ ∈ (0, 1/LF) such that ρ ≥ γ(αk − 1)/2. Consider the sequence (zk)k∈ generated by (SEG+). Then for all z⋆ ∈ S⋆, K∑ k=0 αk∑K j=0 α j E∥Fzk∥2 ≤ ∥z 0−z⋆∥2+γ2(γ2L2F+1)σ2F ∑K j=0 α 2 j γ2(1−γ2L2F ) ∑K j=0 α j . (5.1) The underlying reason for this positive results is that F̂(z̄k, ξ̄k) is unbiased when F is linear. This no longer holds when either linearity of F is dropped or when the resolvent is introduced for A . 0, in which case the scheme only converges to a γ-dependent neighborhood as illustrated in Figure 1. This is problematic in weak MVI where γ cannot be taken arbitrarily small (see Figure 3 of Appendix H). 6 Analysis for unconstrained and smooth case For simplicity we first consider the case where A ≡ 0. To mitigate the bias introduced in F(z̄k, ξ̄k) for (SEG+), we propose Algorithm 1 which modifies the exploration step. The algorithm can be seen as a particular instance of the more general scheme treated in Section 7. Theorem 6.1. Suppose that Assumptions I to III hold. Suppose in addition that γ ∈ (⌊−2ρ⌋+, 1/LF) and (αk)k∈ ⊂ (0, 1) is a diminishing sequence such that 2γLF̂ √ α0 + ( 1 + ( 1+γ2L2F 1−γ2L2F γ2L2F ) γ2L2 F̂ ) α0 ≤ 1 + 2ργ . (6.1) Then, the following estimate holds for all z⋆ ∈ S⋆ E[∥F(zk⋆ )∥2] ≤ (1 + ηγ2L2F)∥z0 − z⋆∥2 +Cσ2Fγ2 ∑K j=0 α 2 j µ ∑K j=0 α j (6.2) where C = 1 + 2η ( (γ2L2 F̂ + 1) + 2α0 ) , η = 12 1+γ2L2F 1−γ2L2F γ2L2F + 1 γLF̂ √ α0 , µ = γ2(1 − γ2L2F)/2 and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 6.2. As α0 → 0, the requirement (6.1) reduces to ρ > −γ/2 as in the deterministic setting of Pethick et al. (2022). Letting αk = α0/ √ k+r the rate becomes O(1/√k), thus matching the rate for the gap function of stochastic extragradient in the monotone case (see e.g. Juditsky et al. (2011)). The above result provides a rate for a random iterate as pioneered by Ghadimi & Lan (2013). Showing last iterate results even asymptotically is more challenging. Already in the monotone case, vanilla (SEG) (where βk = αk) only has convergence guarantees for the average iterate (Juditsky et al., 2011). In fact, the scheme can cycle even in simple examples (Hsieh et al., 2020, Fig. 1). 
Under the classical (but more restrictive) Robbins-Monro stepsize policy, it is possible to show almost sure convergence for the iterates generates by Algorithm 1. The following theorem demonstrates the result in the particular case of αk = 1/k+r. The more general statement is deferred to Appendix D. Theorem 6.3 (almost sure convergence). Suppose that Assumptions I to III hold. Suppose γ ∈ (⌊−2ρ⌋+, 1/LF), αk = 1k+r for any positive natural number r and (γLF̂ + 1)αk + 2 ( 1+γ2L2F 1−γ2L2F γ4L2F L 2 F̂ αk+1 + γLF̂ ) (αk+1 + 1)αk+1 ≤ 1 + 2ργ . (6.3) Algorithm 2 (BC-PSEG+) Stochastic algorithm for problem (3.1) Require z−1 = z0 ∈ n, h−1 ∈ n, αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) Repeat for k = 0, 1, . . . until convergence 2.1: Sample ξk ∼ P 2.2: hk = ( zk − γF̂(zk, ξk) ) + (1 − αk) ( hk−1 − (zk−1 − γF̂(zk−1, ξk))) 2.3: z̄k = (id + γA)−1hk 2.4: Sample ξ̄k ∼ P 2.5: zk+1 = zk − αk ( hk − z̄k + γF̂(z̄k, ξ̄k) ) Return zk+1 Then, the sequence (zk)k∈ generated by Algorithm 1 converges almost surely to some z ⋆ ∈ zer T. Remark 6.4. As αk → 0 the condition on ρ reduces to ρ > −γ/2 like in the deterministic case. To make the results more accessible, both theorems have made particular choices of the free parameters from the proof, that ensures convergence for a given ρ and γ. However, since the parameters capture inherent tradeoffs, the choice above might not always provide the tightest rate. Thus, the more general statements of the theorems have been preserved in the appendix. 7 Analysis for constrained case The result for the unconstrained smooth case can be extended when the resolvent is available. Algorithm 2 provides a direct generalization of the unconstrained Algorithm 1. The construction relies on approximating the deterministic algorithm proposed in Pethick et al. (2022), which iteratively projects onto a half-space which is guaranteed to contain the solutions. By defining Hz = z − γFz, the scheme can concisely be written as, z̄k = (I + γA)−1(Hzk) zk+1 = zk − αk(Hzk − Hz̄k), (CEG+) for a particular adaptive choice of αk ∈ (0, 1). With a fair amount of hindsight we choose to replace Hzk with the bias-corrected estimate hk (as defined in Step 2.2 in Algorithm 2), such that the estimate is also reused in the second update. Theorem 7.1. Suppose that Assumptions I to III hold. Moreover, suppose that αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) and the following holds, µ B 1−√α0 1+ √ α0 − α0(1 + 2γ2L2F̂η) + 2ρ γ > 0 (7.1) where η ≥ 1√ α0(1−γ2L2F ) + 1−√α0√ α0 . Consider the sequence (zk)k∈ generated by Algorithm 2. Then, the following estimate holds for all z⋆ ∈ S⋆ E[dist(0,T z̄k⋆ )2] ≤ E[∥z0 − z⋆∥2] + ηE[∥h−1 − Hz−1∥2] +Cγ2σ2F ∑K j=0 α 2 j γ2µ ∑K j=0 α j where C = 1 + 2η(1 + γ2L2 F̂ ) + 2α0η and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 3. The condition on ρ in (7.1) reduces to ρ > −γ/2 when α0 → 0 as in the deterministic case. As oppose to Theorem 6.3 which tracks ∥Fzk∥2, the convergence measure of Theorem 7.1 reduces to dist(0,T z̄k)2 = ∥Fz̄k∥2 when A ≡ 0. Since Algorithm 1 and Algorithm 2 coincide when A ≡ 0, Theorem 7.1 also applies to Algorithm 1 in the unconstrained case. Consequently, we obtain rates for both ∥Fz̄k∥2 and ∥Fzk∥2 in the unconstrained smooth case. 8 Asymmetric & nonlinear preconditioning In this section we show that the family of stochastic algorithms which converges under weak MVI can be expanded beyond Algorithm 2. 
This is achieved by extending (CEG+) through introducing Algorithm 3 Nonlinearly preconditioned primal dual extragradient (NP-PDEG) for solving (8.5) Require z−1 = z0 = (x0, y0) with x0, x−1, x̂−1, x̄−1 ∈ n, y0, y−1 ∈ r, θ ∈ [0,∞), Γ1 ≻ 0, Γ2 ≻ 0 Repeat for k = 0, 1, . . . until convergence 3.1: ξk ∼ P 3.2: x̂k = xk − Γ1∇xφ̂(zk, ξk) + (1 − αk) ( x̂k−1 − xk−1 + Γ1∇xφ̂(xk−1, yk−1, ξk) ) 3.3: x̄k = proxΓ −1 1 f ( x̂k ) 3.4: ξ′k ∼ P 3.5: ŷk = yk + Γ2 ( θ∇yφ̂(x̄k, yk, ξ′k) + (1 − θ)∇yφ̂(zk, ξk) ) 3.6: +(1 − αk) ( ŷk−1 − yk−1 − Γ2 ( θ∇yφ̂(x̄k−1, yk−1, ξ′k) + (1 − θ)∇yφ̂(zk−1, ξk) )) 3.7: ȳk = proxΓ −1 2 g ( ŷk ) 3.8: ξ̄k ∼ P 3.9: xk+1 = xk + αk ( x̄k − x̂k − Γ1∇xφ̂(z̄k, ξ̄k) ) 3.10: yk+1 = yk + αk ( ȳk − ŷk + Γ2∇yφ̂(z̄k, ξ̄k) ) Return zk+1 = (xk+1, yk+1) a nonlinear and asymmetrical preconditioning. Asymmetrical preconditioning has been used in the literature to unify a large range of algorithm in the monotone setting Latafat & Patrinos (2017). A subtle but crucial difference, however, is that the preconditioning considered here depends nonlinearly on the current iterate. As it will be shown in Section 8.1 this nontrivial feature is the key for showing convergence for primal-dual algorithms in the nonmonotone setting. Consider the following generalization of (CEG+) by introducing a potentially asymmetric nonlinear preconditioning Pzk that depends on the current iterate zk. find z̄k such that Hzk (zk) ∈ Pzk (z̄k) + A(z̄k), (8.1a) update zk+1 = zk + αΓ ( Hzk (z̄k) − Hzk (zk) ) . (8.1b) where Hu(v) B Pu(v) − F(v) and Γ is some positive definite matrix. The iteration independent and diagonal choice Pzk = γ−1I and Γ = γI correspond to the basic (CEG+). More generally we consider Pu(z) B Γ−1z + Qu(z) (8.2) where Qu(z) captures the nonlinear and asymmetric part, which ultimately enables alternating updates and relaxing the Lipschitz conditions (see Remark 8.1(ii)). Notice that the iterates above does not always yield well-defined updates and one must inevitably impose additional structures on the preconditioner (we provide sufficient condition in Appendix F.1). Consistently with (8.2), in the stochastic case we define P̂u(z, ξ) B Γ−1z + Q̂u(z, ξ). (8.3) The proposed stochastic scheme, which introduces a carefully chosen bias-correction term, is summarized as compute hk = P̂zk (zk, ξk) − F̂(zk, ξk) + (1 − αk) ( hk−1 − P̂zk−1 (zk−1, ξk) + F̂(zk−1, ξk) (8.4a) − Q̂zk−1 (z̄k−1, ξ′k−1) + Q̂zk−1 (z̄k−1, ξ′k) ) with ξk, ξ′k ∼ P find z̄k such that hk ∈ P̂zk (z̄k, ξ′k) + Az̄k (8.4b) update zk+1 = zk + αkΓ ( P̂zk (z̄k, ξ̄k) − F̂(z̄k, ξ̄k) − hk ) with ξ̄k ∼ P (8.4c) Remark 4. The two additional terms in (8.4a) are due to the interesting interplay between weak MVI and stochastic feedback, which forces a change of variables (see Appendix F.4). To make a concrete choice of Q̂u(z, ξ) we will consider a minimax problem as a motivating example (see Appendix F.1 for a more general setup). 8.1 Nonlinearly preconditioned primal dual hybrid gradient We consider the problem of minimize x∈ n maximize y∈ r f (x) + φ(x, y) − g(y). (8.5) where φ(x, y) := Eξ[φ̂(x, y, ξ)]. The first order optimality conditions may be written as the inclusion 0 ∈ Tz B Az + Fz, where A = (∂ f , ∂g), F(z) = (∇xφ(z),−∇yφ(z)), (8.6) while the algorithm only has access to the stochastic estimates F̂(z, ξ) B (∇xφ̂(z, ξ),−∇yφ̂(z, ξ)). Assumption IV. 
For problem (8.5), let the following hold with a stepsize matrix Γ = blkdiag(Γ1,Γ2) where Γ1 ∈ n and Γ2 ∈ r are symmetric positive definite matrices: (i) f , g are proper lsc convex (ii) φ : n+r → is continuously differentiable and for some symmetric positive definite matrices Dxx,Dxy,Dyx,Dyy, the following holds for all z = (x, y), z′ = (x′, y′) ∈ n+r ∥∇xφ(z′) − ∇xφ(z)∥2Γ1 ≤ L 2 xx∥x′ − x∥2Dxx + L 2 xy∥y′ − y∥2Dxy , ∥∇yφ(z′) − θ∇yφ(x′, y) − (1 − θ)∇yφ(z)∥2Γ2 ≤ L 2 yx∥x′ − x∥2Dyx + L 2 yy∥y′ − y∥2Dyy . (iii) Stepsize condition: L2xxDxx + L 2 yxDyx ≺ Γ−11 and L2xyDxy + L2yyDyy ≺ Γ−12 . (iv) Bounded variance: Eξ [ ∥F̂(z, ξ) − F̂(z′, ξ)∥2 Γ ] ≤ σ2F ∀z, z′ ∈ n. (v) φ̂(·, ξ) : n+r → is continuously differentiable and for some symmetric positive definite matrices Dx̂z,Dŷz,Dŷx,Dŷy, the following holds for all z = (x, y), z′ = (x′, y′) ∈ n+r and v, v′ ∈ n for θ ∈ [0,∞): Eξ [ ∥∇xφ̂(z′, ξ) − ∇xφ̂(z, ξ)∥2Γ1 ] ≤ L2x̂z∥z ′ − z∥2Dx̂z if θ , 1: Eξ [ ∥∇yφ̂(z, ξ) − ∇yφ̂(z′, ξ)∥2Γ2 ] ≤ L2ŷz∥z ′ − z∥2Dŷz if θ , 0: Eξ [ ∥∇yφ̂(v′, y′, ξ) − ∇yφ̂(v, y, ξ)∥2Γ2 ] ≤ L2ŷx∥v ′ − v∥2Dŷx + L 2 ŷy∥y ′ − y∥2Dŷy . Remark 8.1. In Algorithm 3 the choice of θ ∈ [0,∞) leads to different algorithmic oracles and underlying assumptions in terms of Lipschitz continuity in Assumptions IV(ii) and IV(v). (i) If θ = 0 then the first two steps may be computed in parallel and we recover Algorithm 2. Moreover, to ensure Assumption IV(ii) in this case it suffices to assume for Lx, Ly ∈ [0,∞), ∥∇xφ(z′) − ∇xφ(z)∥ ≤ Lx∥z′ − z∥, ∥∇yφ(z′) − ∇yφ(z)∥ ≤ Ly∥z′ − z∥. (ii) Taking θ = 1 leads to Gauss-Seidel updates and a nonlinear primal dual extragradient algorithm with sufficient Lipschitz continuity assumptions for some Lx, Ly ∈ [0,∞), ∥∇xφ(z′) − ∇xφ(z)∥ ≤ Lx∥z′ − z∥, ∥∇yφ(z′) − ∇yφ(x′, y)∥ ≤ Ly∥y′ − y∥. Algorithm 3 is an application of (8.4) applied for solving (8.6). In order to cast the algorithm as an instance of the template algorithm (8.4), we choose the positive definite stepsize matrix as Γ = blkdiag(Γ1,Γ2) with Γ1 ≻ 0, Γ2 ≻ 0, and the nonlinear part of the preconditioner as Q̂u(z̄, ξ) B ( 0,−θ∇yφ̂(x̄, y, ξ) ) , and Qu(z̄) B ( 0,−θ∇yφ(x̄, y) ) (8.7) where u = (x, y) and z̄ = (x̄, ȳ). Recall Hu(z) B Pu(z) − F(z) and define S u(z; z̄) B Hu(z) − Qu(z̄). The convergence in Theorem 8.2 depends on the distance between the initial estimate Γ−1ẑ−1 with ẑ−1 = (x̂−1, ŷ−1) and the deterministic S z−1 (z−1; z̄−1). See Appendix B for additional notation. Theorem 8.2. Suppose that Assumption I(iii) to II(ii) and IV hold. Moreover, suppose that αk ∈ (0, 1), θ ∈ [0,∞) and the following holds, µ B 1−√α0 1+ √ α0 + 2ρ γ̄ − α0 − 2α0(ĉ1 + 2ĉ2(1 + ĉ3))η > 0 and 1 − 4ĉ2α0 > 0 (8.8) where γ̄ denotes the smallest eigenvalue of Γ, η ≥ (1 + 4ĉ2α20)( 1√ α0(1−LM )2 + 1−√α0√ α0 )/(1 − 4ĉ2α0) and ĉ1 B L2x̂z∥ΓDx̂z∥ + 2(1 − θ) 2L2ŷz∥ΓDŷz∥ + 2θ 2L2ŷy∥Γ2Dŷy∥, ĉ2 B 2θ 2L2ŷx∥Γ1Dŷx∥, ĉ3 B L 2 x̂z∥ΓDx̂z∥, L2M B max { L2xx∥DxxΓ1∥ + L2yx∥DyxΓ1∥, ∥L2xy∥DxyΓ2∥ + L2yy∥DyyΓ2∥ } . Consider the sequence (zk)k∈ generated by Algorithm 3. Then, the following holds for all z ⋆ ∈ S⋆ E[distΓ(0,T z̄k⋆ )2] ≤ E[∥z0 − z⋆∥2 Γ−1 ] + ηE[∥Γ−1ẑ−1 − S z−1 (z−1; z̄−1)∥2Γ] +Cσ2F ∑K j=0 α 2 j µ ∑K j=0 α j where C B 2(η+α0( 1√α0(1−LM )2 + 1−√α0√ α0 ))(1+ 2ĉ2)+ 1+ 2(ĉ1 + 2ĉ2(Θ+ ĉ3))η with Θ = (1− θ)2 + 2θ2 and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 5. When α0 → 0 the conditions in (8.2) reduces to 1 + 2ργ̄ > 0 as in the deterministic case. For θ = 0 Algorithm 3 reduces to Algorithm 2. 
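To make the role of θ and of the preconditioning concrete, below is a minimal NumPy sketch of Algorithm 3 (NP-PDEG) on a toy instance of (8.5). The quadratic regularizers f and g, the bilinear coupling φ(x, y) = xᵀCy, the additive Gaussian noise, the scalar stepsizes Γ1 = g1·I and Γ2 = g2·I, and the choice θ = 1 are all assumptions made for this illustration only, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy instance of (8.5), assumed for illustration: f(x) = (lam/2)||x||^2,
# g(y) = (lam/2)||y||^2 and bilinear coupling phi(x, y) = x^T C y, so both
# prox maps are simple shrinkages and the saddle point is (0, 0).
n, r, lam, sigma, theta = 3, 3, 0.5, 0.1, 1.0
C = rng.normal(size=(n, r))

def grad_x(x, y, noise):                    # ∇_x φ̂(z, ξ) = C y + ξ_x
    return C @ y + noise

def grad_y(x, y, noise):                    # ∇_y φ̂(z, ξ) = C^T x + ξ_y
    return C.T @ x + noise

def prox(v, step):                          # prox of (lam/2)||.||^2 with stepsize `step`
    return v / (1.0 + step * lam)

g1 = g2 = 0.5 / np.linalg.norm(C, 2)        # scalar stepsizes Γ1 = g1 I, Γ2 = g2 I
x, y = np.ones(n), np.ones(r)
x_prev, y_prev = x.copy(), y.copy()
x_hat_prev = x - g1 * grad_x(x, y, 0.0)     # initial x̂^{-1}, x̄^{-1}, ŷ^{-1}
x_bar_prev = prox(x_hat_prev, g1)
y_hat_prev = y + g2 * grad_y(x, y, 0.0)

for k in range(3000):
    a = 1.0 / (k + 2)
    xi_x, xi_y = rng.normal(0, sigma, n), rng.normal(0, sigma, r)
    # Steps 3.2-3.3: bias-corrected primal exploration, then prox of f
    x_hat = (x - g1 * grad_x(x, y, xi_x)
             + (1 - a) * (x_hat_prev - x_prev + g1 * grad_x(x_prev, y_prev, xi_x)))
    x_bar = prox(x_hat, g1)
    xi_y2 = rng.normal(0, sigma, r)
    # Steps 3.5-3.7: dual exploration; for θ > 0 it already uses x̄^k (Gauss-Seidel flavour)
    y_hat = (y + g2 * (theta * grad_y(x_bar, y, xi_y2) + (1 - theta) * grad_y(x, y, xi_y))
             + (1 - a) * (y_hat_prev - y_prev
                          - g2 * (theta * grad_y(x_bar_prev, y_prev, xi_y2)
                                  + (1 - theta) * grad_y(x_prev, y_prev, xi_y))))
    y_bar = prox(y_hat, g2)
    xb_x, xb_y = rng.normal(0, sigma, n), rng.normal(0, sigma, r)
    # Steps 3.9-3.10: update both blocks with the small stepsize α_k
    x_new = x + a * (x_bar - x_hat - g1 * grad_x(x_bar, y_bar, xb_x))
    y_new = y + a * (y_bar - y_hat + g2 * grad_y(x_bar, y_bar, xb_y))
    x_prev, y_prev = x, y
    x_hat_prev, x_bar_prev, y_hat_prev = x_hat, x_bar, y_hat
    x, y = x_new, y_new

print("x:", x, "y:", y)
```

Setting θ = 0 in the sketch decouples the primal and dual exploration steps so that they can be computed in parallel, recovering Algorithm 2.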
With this choice Theorem 8.2 simplifies, since the constant ĉ2 = 0, and we recover the convergence result of Theorem 7.1. 9 Experiments We compare BC-SEG+ and BC-PSEG+ against (EG+) using stochastic feedback (which we refer to as (SF-EG+)) and (SEG) in both an unconstrained setting and a constrained setting introduced in Pethick et al. (2022). See Appendix H.2 for the precise formulation of the projected variants, which we denote (SF-PEG+) and (PSEG) respectively. In the unconstrained example we control all problem constants and set ρ = −1/(10LF), while the constrained example is a specific minimax problem where ρ > −1/(2LF) holds within the constrained set for a Lipschitz constant LF restricted to the same constrained set. To simulate a stochastic setting in both examples, we consider additive Gaussian noise, i.e. F̂(z, ξ) = Fz + ξ where ξ ∼ N(0, σ²I). In the experiments we choose σ = 0.1 and αk ∝ 1/k, which ensures almost sure convergence of BC-(P)SEG+. For a more aggressive stepsize choice αk ∝ 1/√k see Figure 4. Further details can be found in Appendix H. The results are shown in Figure 2. The sequences generated by (SEG) and (PSEG) diverge for the unconstrained problem and cycle in the constrained problem, respectively. In comparison, (SF-EG+) and (SF-PEG+) get within a neighborhood of the solutions but fail to converge due to the non-diminishing stepsize, while BC-SEG+ and BC-PSEG+ converge in both examples. 10 Conclusion This paper shows that nonconvex-nonconcave problems characterized by the weak Minty variational inequality can be solved efficiently even when only stochastic gradients are available. The approach crucially avoids increasing batch sizes by instead introducing a bias-correction term. We show that convergence is possible for the same range of the problem constant ρ ∈ (−γ/2,∞) as in the deterministic case. Rates are established for a random iterate, matching those of stochastic extragradient in the monotone case, and the result is complemented with almost sure convergence, thus providing asymptotic convergence for the last iterate. We show that the idea extends to a family of extragradient-type methods which includes a nonlinear extension of the celebrated primal dual hybrid gradient (PDHG) algorithm. For future work it is interesting to see if the rate can be improved by considering accelerated methods such as Halpern iterations. 11 Acknowledgments and disclosure of funding This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n° 725594 - time-data). This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. The work of the third and fourth author was supported by the Research Foundation Flanders (FWO) postdoctoral grant 12Y7622N and research projects G081222N, G033822N, G0A0920N; Research Council KU Leuven C1 project No. C14/18/068; European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 953348. The work of Olivier Fercoq was supported by the Agence Nationale de la Recherche grant ANR-20-CE40-0027, Optimal Primal-Dual Algorithms (APDO). References Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods. arXiv preprint arXiv:2102.08352, 2021. Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization.
arXiv preprint arXiv:1912.02365, 2019. Heinz H. Bauschke and Patrick L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics. Springer, 2017. ISBN 978-3-319-48310-8. Heinz H Bauschke, Walaa M Moursi, and Xianfu Wang. Generalized monotone operators and their averaged resolvents. Mathematical Programming, 189(1):55–74, 2021. Dimitri P. Bertsekas. Incremental proximal methods for large scale convex optimization. Mathematical programming, 129(2):163–195, 2011. Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, and Nicolas Loizou. Stochastic gradient descent-ascent: Unified theory and new efficient methods. arXiv preprint arXiv:2202.07262, 2022. Axel Böhm. Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions. arXiv preprint arXiv:2201.12247, 2022. Radu Ioan Boţ, Panayotis Mertikopoulos, Mathias Staudigl, and Phan Tu Vuong. Minibatch forwardbackward-forward methods for solving stochastic variational inequalities. Stochastic Systems, 11 (2):112–139, 2021. Xufeng Cai, Chaobing Song, Cristóbal Guzmán, and Jelena Diakonikolas. A stochastic Halpern iteration with variance reduction for stochastic monotone inclusion problems. arXiv preprint arXiv:2203.09436, 2022. A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011. Tianyi Chen, Yuejiao Sun, and Wotao Yin. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. IEEE Transactions on Signal Processing, 69:4937– 4948, 2021. Patrick L Combettes and Teemu Pennanen. Proximal methods for cohypomonotone operators. SIAM journal on control and optimization, 43(2):731–742, 2004. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained min-max optimization. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 1466–1478, 2021. Jelena Diakonikolas, Constantinos Daskalakis, and Michael Jordan. Efficient methods for structured nonconvex-nonconcave min-max optimization. In International Conference on Artificial Intelligence and Statistics, pp. 2746–2754. PMLR, 2021. Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in Neural Information Processing Systems, 31, 2018. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013. Eduard Gorbunov, Hugo Berard, Gauthier Gidel, and Nicolas Loizou. Stochastic extragradient: General analysis and improved rates. In International Conference on Artificial Intelligence and Statistics, pp. 7865–7901. PMLR, 2022. M Hirsch and S Vavasis. Exponential lower bounds for finding Brouwer fixed points. In Proceedings of the 28th Symposium on Foundations of Computer Science, pp. 401–410, 1987. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. On the convergence of single-call stochastic extra-gradient methods. Advances in Neural Information Processing Systems, 32, 2019. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. arXiv preprint arXiv:2003.10162, 2020. Anatoli Juditsky, Arkadi Nemirovski, and Claire Tauvel. 
Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems, 1(1):17–58, 2011. Aswin Kannan and Uday V Shanbhag. Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants. Computational Optimization and Applications, 74(3):779–820, 2019. Puya Latafat and Panagiotis Patrinos. Asymmetric forward–backward–adjoint splitting for solving monotone inclusions involving three operators. Computational Optimization and Applications, 68(1):57–93, Sep 2017. Sucheol Lee and Donghwan Kim. Fast extra gradient methods for smooth structured nonconvexnonconcave minimax problems. arXiv preprint arXiv:2106.02326, 2021. Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, and Michael Jordan. On the convergence of stochastic extragradient for bilinear games using restarted iteration averaging. In International Conference on Artificial Intelligence and Statistics, pp. 9793–9826. PMLR, 2022. Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, and Ioannis Mitliagkas. Stochastic hamiltonian gradient methods for smooth games. In International Conference on Machine Learning, pp. 6370–6381. PMLR, 2020. Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, and Simon Lacoste-Julien. Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity. Advances in Neural Information Processing Systems, 34: 19095–19108, 2021. Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, and Yura Malitsky. Revisiting stochastic extragradient. In International Conference on Artificial Intelligence and Statistics, pp. 4573–4582. PMLR, 2020. Lam M Nguyen, Marten van Dijk, Dzung T Phan, Phuong Ha Nguyen, Tsui-Wei Weng, and Jayant R Kalagnanam. Finite-sum smooth optimization with SARAH. arXiv preprint arXiv:1901.07648, 2019. Thomas Pethick, Puya Latafat, Panagiotis Patrinos, Olivier Fercoq, and Volkan Cevher. Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems. In International Conference on Learning Representations, 2022. Ralph Tyrell Rockafellar. Convex analysis. Princeton University Press, 1970. Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, and Yi Ma. Optimistic dual extrapolation for coherent non-monotone variational inequalities. Advances in Neural Information Processing Systems, 33:14303–14314, 2020. P. Tseng. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization, 38(2):431–446, 2000. Junchi Yang, Negar Kiyavash, and Niao He. Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems. arXiv preprint arXiv:2002.09621, 2020. Appendix Table of Contents A Prelude 14 B Preliminaries 14 C Proof for SEG+ 15 Proof of Theorem 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 D Proof for smooth unconstrained case 16 Proof of Theorem D.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Proof of Theorem 6.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Proof of Theorem D.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Proof of Theorem 6.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 E Proof for constrained case 21 Proof of Theorem E.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 21 Proof of Theorem 7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 F Proof for NP-PDEG through a nonlinear asymmetric preconditioner 23 F.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 F.2 Deterministic lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 F.3 Stochastic results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Proof of Theorem F.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Proof of Theorem 8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 F.4 Explanation of bias-correction term . . . . . . . . . . . . . . . . . . . . . . . . . 30 G Negative weak Minty variational inequality 31 H Experiments 32 H.1 Synthetic example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 H.2 Additional algorithmic details . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 I Comparison with variance reduction 34 A Prelude For the unconstrained and smooth setting Appendix C treats convergences of (SEG+) for the restricted case where F is linear. Appendix D shows both random iterate results and almost sure convergence of Algorithm 1. Theorems 6.1 and 6.3 in the main body are implied by the more general results in this section, which preserves certain free parameters and more general stepsize requirements. Appendices E and F moves beyond the unconstrained and smooth case by showing convergence for instances of the template scheme (8.1). The analysis of Algorithm 3 in Appendix F applies to Algorithm 2, but for completeness we establish convergence for general F separately in Appendix E. The relationship between the theorems are presented in Table 1. B Preliminaries Given a psd matrix V we define the inner product as ⟨·, ·⟩V B ⟨·,V ·⟩ and the corresponding norm ∥ · ∥ B √ ⟨·, ·⟩V . The distance from u ∈ n to a setU ⊆ n with respect to a positive definite matrix V is defined as distV (u,U) B minu′∈U ∥u − u′∥V , which we simply denote dist(u,U) when V = I. The norm ∥X∥ refers to spectral norm when X is a matrix. We summarize essential definitions from operator theory, but otherwise refer to Bauschke & Combettes (2017); Rockafellar (1970) for further details. An operator A : n ⇒ d maps each point x ∈ n to a subset Ax ⊆ d, where the notation A(x) and Ax will be used interchangably. We denote the domain of A by dom A B {x ∈ n | Ax , ∅}, its graph by gph A B {(x, y) ∈ n × d | y ∈ Ax}. The inverse of A is defined through its graph, gph A−1 B {(y, x) | (x, y) ∈ gph A} and the set of its zeros by zer A B {x ∈ n | 0 ∈ Ax}. Definition B.1 ((co)monotonicity Bauschke et al. (2021)). An operator A : n ⇒ n is said to be ρ-monotone for some ρ ∈ , if for all (x, y), (x′, y′) ∈ gph A ⟨y − y′, x − x′⟩ ≥ ρ∥x − x′∥2, and it is said to be ρ-comonotone if for all (x, y), (x′, y′) ∈ gph A ⟨y − y′, x − x′⟩ ≥ ρ∥y − y′∥2. The operator A is said to be maximally (co)monotone if there exists no other (co)monotone operator B for which gph A ⊂ gph B properly. If A is 0-monotone we simply say it is monotone. When ρ < 0, ρ-comonotonicity is also referred to as |ρ|-cohypomonotonicity. Definition B.2 (Lipschitz continuity and cocoercivity). Let D ⊆ n be a nonempty subset of n. A single-valued operator A : D → n is said to be L-Lipschitz continuous if for any x, x′ ∈ D ∥Ax − Ax′∥ ≤ L∥x − x′∥, and β-cocoercive if ⟨x − x′, Ax − Ax′⟩ ≥ β∥Ax − Ax′∥2. 
Moreover, A is said to be nonexpansive if it is 1-Lipschitz continuous, and firmly nonexpansive if it is 1-cocoercive. A β-cocoercive operator is also β−1-Lipschitz continuity by direct implication of Cauchy-Schwarz. The resolvent operator JA = (id + A)−1 is firmly nonexpansive (with dom JA = n) if and only if A is (maximally) monotone. We will make heavy use of the Fenchel-Young inequality. For all a, b ∈ n and e > 0 we have, 2⟨a, b⟩ ≤ e∥a∥2 + 1e ∥b∥ 2 (B.1) ∥a + b∥2 ≤ (1 + e)∥a∥2 + (1 + 1e )∥b∥ 2 (B.2) −∥a − b∥2 ≤ − 11+e ∥a∥ 2 + 1e ∥b∥ 2 (B.3) C Proof for SEG+ Proof of Theorem 5.1. Following (Hsieh et al., 2020) closely, define the reference state ˜̄zk := zk − γFzk to be the exploration step using the deterministic operator and denote the second stepsize as ηk := αkγ. We will let ζ denote the additive noise term, i.e. F̂(z, ξ) := F(z) + ζ. Expanding the distance to solution, ∥zk+1 − z⋆∥2 = ∥zk − ηkF̂(z̄k, ξ̄k) − z⋆∥2 = ∥zk − z⋆∥2 − 2ηk⟨F̂(z̄k, ξ̄k), zk − z⋆⟩ + η2k∥F̂(z̄k, ξ̄k)∥2 = ∥zk − z⋆∥2 − 2ηk⟨F̂(z̄k, ξ̄k), ˜̄zk − z⋆⟩ − 2γηk⟨F̂(z̄k, ξ̄k), F(zk)⟩ + η2k∥F̂(z̄k, ξ̄k)∥2. (C.1) Recall that the operator is assumed to be linear Fz = Bz + v in which case we have, F̂(z̄k, ξ̄k) = Bz̄k + v + ζ̄k =B(zk − γF̂(zk, ξk)) + v + ζ̄k =B(zk − γBzk − γv − γζk) + v + ζ̄k =B(zk − γ(Bzk + v)) + v − γBζk + ζ̄k =F(˜̄zk) − γBζk + ζ̄k. (C.2) The two latter terms are zero in expectation due to the unbiasedness from Assumption II(ii), which lets us write the terms on the RHS of (C.1) as, −Ek⟨F̂(z̄k, ξ̄k), ˜̄zk − z⋆⟩ = −⟨F(˜̄zk), ˜̄zk − z⋆⟩ (C.3) −Ek⟨F̂(z̄k, ξ̄k), F(zk)⟩ = −⟨F(˜̄zk), F(zk)⟩ (C.4) Ek∥F̂(z̄k, ξ̄k)∥2 = ∥F(˜̄zk)∥2 + Ek∥γBζk∥2 + Ek∥ζ̄k∥2. (C.5) We can bound (C.3) directly through the weak MVI in Assumption I(iii) which might still be positive, −⟨F(˜̄zk), ˜̄zk − z⋆⟩ ≤ −ρ∥F(˜̄zk)∥2. (C.6) For the latter two terms of (C.5) we have Ek∥γBζk∥2 + Ek∥ζ̄k∥2 = γ2Ek∥F(ζk) − F(0)∥2 + Ek∥ζ̄k∥2 ≤ (γ2L2F + 1)σ2F , (C.7) where the last inequality follows from Lipschitz in Assumption I(i) and bounded variance in Assumption II(iii). Combining everything into (C.1) we are left with Ek∥zk+1 − z⋆∥2 ≤ ∥zk − z⋆∥2 + η2k(γ2L2F + 1)σ2F − 2γηk⟨F(˜̄zk), F(zk)⟩ + (η2k − 2ηkρ)∥F(˜̄zk)∥2 (C.8) By assuming the stepsize condition, ρ ≥ (ηk − γ)/2, we have η2k − 2ηkρ ≤ γηk. This allows us to complete the square, −2γηk⟨F(˜̄zk), F(zk)⟩ + (η2k − 2ηkρ)∥F(˜̄zk)∥2 ≤ −2γηk⟨F(˜̄zk), F(zk)⟩ + γηk∥F(˜̄zk)∥2 = γηk(∥F(zk) − F(˜̄zk)∥2 − ∥F(zk)∥2) ≤ γηk(γ2L2F − 1)∥F(zk)∥2, (C.9) where the last inequality follows from Lipschitzness of F and the definition of the update rule. Plugging into (C.8) we are left with Ek∥zk+1 − z⋆∥2 ≤ ∥zk − z⋆∥2 + η2k(γ2L2F + 1)σ2F − γηk(1 − γ2L2F)∥F(zk)∥2. (C.10) The result is obtained by total expectation and summing. D Proof for smooth unconstrained case Lemma D.1. Consider the recurrent relation Bk+1 = ξkBk + dk such that ξk > 0 for all k ≥ 0. Then Bk+1 = ( Πkp=0ξp )B0 + k∑ ℓ=0 dℓ Πℓp=0ξp . Assumption V. γ ∈ (⌊−2ρ⌋+, 1/LF) and for positive real valued b, µ B γ2(1 − γ2L2F(1 + b−1)) > 0. (D.1) Theorem D.2. Suppose that Assumptions I to III hold. Suppose in addition that Assumption V holds and that (αk)k∈ ⊂ (0, 1) is a diminishing sequence such that 2γLF̂ √ α0 + ( 1 + ( (b + 1)γ2L2F ) γ2L2 F̂ ) α0 ≤ 1 + 2ργ . (D.2) Consider the sequence (zk)k∈ generated by Algorithm 1. Then, the following estimate holds K∑ k=0 αk∑K j=0 α j E[∥F(zk)∥2] ≤ ∥z0 − z⋆∥2 + ηγ2∥F(z0)∥2 +Cσ2Fγ2 ∑K j=0 α 2 j µ ∑K j=0 α j , (D.3) where C = 1 + 2η ( (γ2L2 F̂ + 1) + 2α0 ) and η = 12 (b + 1)γ 2L2F + 1 γLF̂ √ α0 . Proof of Theorem D.2. 
The proof relies on establishing a (stochastic) descent property on the following potential function Uk+1 B ∥zk+1 − z⋆∥2 + Ak+1∥uk∥2 + Bk+1∥zk+1 − zk∥2. where uk B z̄k− zk+γF(zk) measures the difference of the bias-corrected step from the deterministic exploration step, and (Ak)k∈ , (Bk)k∈ are positive scalar parameters to be identified. We proceed to consider each term individually. Let us begin by quantifying how well z̄k estimates zk − γF(zk). uk = z̄k − zk + γF(zk) = γF(zk) − γF̂(zk, ξk) + (1 − αk)(z̄k−1 − zk−1 + γF̂(zk−1, ξk)). Therefore, ∥uk∥2 = ∥γF(zk) − γF̂(zk, ξk) + (1 − αk)(γF̂(zk−1, ξk) − γF(zk−1))∥2 + (1 − αk)2∥uk−1∥2 + 2(1 − αk)⟨z̄k−1 − zk−1 + γF(zk−1), γF(zk) − γF̂(zk, ξk) + (1 − αk)(γF̂(zk−1, ξk) − γF(zk−1))⟩. Conditioned on Fk, in the inner product the left term is known and the right term has an expectation that equals zero. Therefore, we obtain E[∥uk∥2 |Fk]=E[∥(1−αk) ( γF(zk)−γF̂(zk,ξk)+γF̂(zk−1,ξk)−γF(zk−1) ) +αk ( γF(zk)−γF̂(zk,ξk) ) ∥2 |Fk] +(1−αk)2∥uk−1∥2 ≤(1−αk)2∥uk−1∥2+2(1−αk)2γ2E[∥F̂(zk,ξk)−F̂(zk−1,ξk)∥2 |Fk] +2α2kγ 2E[∥F(zk)−F̂(zk,ξk)∥2 |Fk] ≤(1−αk)2∥uk−1∥2+2(1−αk)2γ2L2F̂∥z k−zk−1∥2+2α2kγ2σ2F (D.4) where in the first inequality we used Young inequality and the fact that the second moment is larger than the variance, and Assumptions II(iii) and III were used in the second inequality. By step 1.4, the equality ∥zk+1 − z⋆∥2 = ∥zk − z⋆∥2 − 2αkγ⟨F̂(z̄k, ξ̄k), zk − z⋆⟩ + α2kγ2∥F̂(z̄k, ξ̄k)∥2, (D.5) holds. The inner product in (D.5) can be upper bounded using Young inequalities with positive parameters εk, k ≥ 0, and b as follows. E[⟨−γF̂(z̄k, ξ̄k), zk − z⋆⟩ | F̄k] = − γ⟨F(z̄k), zk − z̄k⟩ − γ⟨F(z̄k), z̄k − z⋆⟩ = − γ2⟨F(z̄k), F(zk)⟩ + γ⟨F(z̄k), z̄k − zk + γF(zk)⟩ − γ⟨F(z̄k), z̄k − z⋆⟩ ≤ γ2 (1 2 ∥F(z̄k) − F(zk)∥2 − 1 2 ∥F(z̄k)∥2 − 1 2 ∥F(zk)∥2 ) + γ2εk 2 ∥F(z̄k)∥2 + 1 2εk ∥z̄k − zk + γF(zk)∥2 − γρ∥F(z̄k)∥2 ≤ γ2L2F 1 + b 2 ∥uk∥2 + 1 + b −1 2 γ4L2F∥F(zk)∥2 − γ2 2 ∥F(z̄k)∥2 − γ 2 2 ∥F(zk)∥2 + γ 2εk 2 ∥F(z̄k)∥2 + 1 2εk ∥uk∥2 − γρ∥F(z̄k)∥2 = ( γ2L2F 1 + b 2 + 1 2εk )∥uk∥2 + γ2(γ2L2F(1 + b−1) − 1) 2 ∥F(zk)∥2 + (γ2(εk − 1) 2 − γρ)∥F(z̄k)∥2. (D.6) Conditioning (D.6) with E [· | Fk] = E[E[· | F̄k] | Fk], since Fk ⊂ F̄k, yields 2E[⟨−γF̂(z̄k, ξ̄k), zk − z⋆⟩ | Fk] ≤ ( γ2L2F(1 + b) + 1 εk ) E[∥uk∥2 | Fk] − µ∥F(zk)∥2 + ( γ2(εk − 1) − 2γρ ) E [ ∥F(z̄k)∥2 | Fk ] , (D.7) where µ was defined in (D.1). The condition expectation of the third term in (D.5) is bounded through Assumption II(iii) by E [ ∥F̂(z̄k, ξ̄k)∥2 | Fk ] = E [ E[∥F̂(z̄k, ξ̄k)∥2 | F̄k] | Fk ] ≤ ∥F(z̄k)∥2 + σ2F , which in turn implies E [ ∥zk+1 − zk∥2 | Fk ] = α2kγ 2E [ ∥F̂(z̄k, ξ̄k)∥2 | Fk ] ≤ α2kγ2E [ ∥Fz̄k∥2 | Fk ] + α2kγ 2σ2F (D.8) Combining (D.7), (D.8), and (D.5) yields E[∥zk+1 − z⋆∥2 + Ak+1∥uk∥2 + Bk+1∥zk+1 − zk∥2 | Fk] ≤ ∥zk − z⋆∥2 + ( Ak+1 + αk ( γ2L2F(1 + b) + 1 εk )) E[∥uk∥2 | Fk] − αkµ∥F(zk)∥2 + ( αk ( γ2(εk − 1) − 2γρ ) + α2kγ 2 ) E [ ∥F(z̄k)∥2 | Fk ] + α2kγ 2σ2F + Bk+1α2kγ 2E [ ∥Fz̄k∥2 | Fk ] + Bk+1α2kγ 2σ2F . (D.9) Further using (D.4) and denoting Xk1 B αk ( γ2L2F(1 + b) + 1 εk ) + Ak+1, Xk2 B αk ( γ2(εk − 1) − 2ργ + αk γ2 ) leads to E[Uk+1 | Fk] −Uk ≤ − αkµ∥F(zk)∥2 + ( Xk1(1 − αk)2 − Ak ) ∥uk−1∥2 + ( 2Xk1(1 − αk)2γ2L2F̂ − Bk ) ∥zk − zk−1∥2 + ( Xk2 + Bk+1α 2 kγ 2 ) E [ ∥F(z̄k)∥2 | Fk ] + ( Bk+1α2k + α 2 k + 2X k 1α 2 k ) γ2σ2F . 
(D.10) Having established (D.10), set Ak = A, Bk = 2Aγ2L2F̂ , and εk = ε to obtain by the law of total expectation that E[Uk+1] − E[Uk] ≤ − αkµE [ ∥F(zk)∥2 ] + ( Xk1(1 − αk)2 − A ) E [ ∥uk−1∥2 ] + 2γ2L2 F̂ ( Xk1(1 − αk)2 − A ) E [ ∥zk − zk−1∥2 ] + ( Xk2 + 2Aγ 4L2 F̂ α2k ) E [ ∥F(z̄k)∥2 ] + ( 2Aγ2L2 F̂ + 1 + 2Xk1 ) α2kγ 2σ2F . (D.11) To get a recursion we require Xk1(1 − αk)2 − A ≤ 0 and Xk2 + 2Aγ4L2F̂α 2 k ≤ 0. (D.12) By developing the first requirement of (D.12) we have, 0 ≥ Xk1(1 − αk)2 − A = αk(1 − αk)2 ( γ2L2F(1 + b) + 1 ε ) + αk(αk − 2)A. (D.13) Equivalently, A needs to satisfy A ≥ (1 − αk) 2 2 − αk ( γ2L2F(1 + b) + 1 ε ) . (D.14) for any αk ∈ (0, 1). Since (1−αk) 2 2−αk ≤ 1 2 given αk ∈ (0, 1) it suffice to pick A = 12 ( (b + 1)γ2L2F + 1 ε ) . (D.15) For the second requirement of (D.12) note that we can equivalently require that the following quantity is negative 1 αkγ2 ( Xk2 + 2Aγ 4L2 F̂ α2k ) = ε − 1 − 2ρ γ + αk + 2Aγ2L2F̂αk ≤ ε − 1 − 2ρ γ + ( 1 + ( (b + 1)γ2L2F + 1 ε ) γ2L2 F̂ ) α0 where we have used that αk ≤ α0 and the choice of A from (D.15). Setting the Young parameter ε = γLF̂ √ α0 we obtain that Xk2 + 2Aγ 4L2 F̂ α2k ≤ 0 owing to (D.2). On the other hand, the last term in (D.11) may be upper bounded by 2Aγ2L2 F̂ + 1 + 2Xk1 = 1 + ( (b + 1)γ2L2F + 1 γLF̂ √ α0 )( (γ2L2 F̂ + 1) + 2αk ) ≤ 1 + ( (b + 1)γ2L2F + 1 γLF̂ √ α0 )( (γ2L2 F̂ + 1) + 2α0 ) = C. Thus, it follows from (D.11) that E[Uk+1] − E[Uk] ≤ − αkµE [ ∥F(zk)∥2 ] +Cα2kγ 2σ2F . Telescoping the above inequality completes the proof. Proof of Theorem 6.1. The theorem is obtained as a particular instantiation of Theorem D.2. The condition in (D.1) can be rewritten as b > γ 2L2F 1−γ2L2F . A reasonable choice is b = 2γ 2L2F 1−γ2L2F . Substituting back into µ we obtain µ = γ2(1 − γ2L2F(1 + 1−γ2L2F 2γ2L2F )) = γ 2(1−γ2L2F ) 2 > 0. (D.16) Similarly, the choice of b is substituted into η and (D.2) of Theorem D.2. The rate in (D.2) is further simplified by applying Lipschitz continuity of F from Assumption I(i) to ∥Fz0∥2 = ∥Fz0 − Fz⋆∥2. The proof is complete by observing that the guarantee on the weighted sum can be converted into an expectation over a sampled iterate in the style of Ghadimi & Lan (2013). Assumption VI (almost sure convergence). Let d ∈ [0, 1], b > 0. Suppose that the following holds (i) the diminishing sequence (αk
1. What is the focus of the paper regarding solving inclusion problems? 2. What are the strengths and weaknesses of the proposed algorithms BCSEG+ and NP-PDEG? 3. Are there any concerns regarding the presentation and notation used in the paper? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. What are some missing references on other structured non-monotone problems that should be included in the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper proposes new variants of stochastic extra-gradient methods for solving inclusion problems that satisfy the Minty variational inequality (MVI). The proposed algorithms BCSEG+ (Alg. 1 and Alg. 2) and NP-PDEG (Alg. 3) have been designed and analyzed for solving the inclusion problem in the unconstrained smooth case, the constrained case, and the min-max problem (8.5), respectively. The most important contribution of this work is that, by using the new algorithms, it is possible to provide an analysis without requiring increasing batch sizes as the algorithm progresses. Strengths And Weaknesses The paper is well-written and the idea is easy to follow. The authors did a great job on the separation of sections and on the presentation of the results. In particular, I find it very helpful for the reader that the authors included separate sections for the unconstrained and constrained settings. However, I believe the paper has some issues in terms of notation and numerical evaluation. In addition, the paper is missing some relevant recent works on other classes of structured non-monotone problems. Let me provide some details below: On presentation: There is no part in the paper where the sets zer T and gph T are defined. Even if the definition is trivial, it is currently missing. In addition, what do we call a maximally monotone operator (assumption on operator A in the main problem)? This detail is required for a self-contained paper. Inequality (3.2) uses Γ and Γ⁻¹ without a proper explanation of why the Γ⁻¹ is needed. The standard L_F-Lipschitz definition uses identity matrices, so further details are needed. The same holds for Assumption III. The paper mentions in several parts that ``employing diminishing stepsizes is no longer possible in the weak MVI setting,'' but the authors do not properly explain why. Why is this true? Is there an existing paper that proves it, or is it a speculation of the authors? More details are needed. Minor: The deterministic operator is denoted Fz while the stochastic estimator is denoted F̂(z, ξ). It might have been better to use F(z) for the deterministic variant as well. After the definition of (SEG), the paper mentions: ``Even with a two-timescale variant (when βk > αk) it has only been possible to show convergence for MVI (Hsieh et al., 2020).'' What does this mean exactly? Note that (Hsieh et al., 2020) has nothing to do with MVI. On proofs: What is b in equation (C.1) in the appendix? I find that the steps in the proofs require further explanation for the reader to be able to follow them easily (probably by splitting the steps into several parts). The parts where Young's inequality is used are not always very clear. In (C.8) the previous bound is used, but now it carries an expectation conditional on Fk even though the quantity is deterministic. This is not wrong, but it is not standard. On experiments: (This is probably an important issue of the current version of the paper.) The authors mention the following: "Except for BCSEG+, all methods fail to converge in these examples." and "In comparison (EG+) gets closer to a solution in both problems but fails to converge due to the non-diminishing stepsize, while BCSEG+ converges for both example." In my view, Figure 2 does not show any benefit of BCSEG+ compared to EG+: both methods converge to a very similar neighborhood (10⁻²) of the solution. Probably the methods should be run for more iterations to obtain something useful. Also, I suspect that the method here is SEG+ and not EG+, right?
Missing references on other structured non-monotone problems: [1] Yang, J., Kiyavash, N., and He, N. (2020). Global convergence and variance reduction for a class of nonconvex-nonconcave minimax problems. NeurIPS [2] Song, C., Zhou, Z., Zhou, Y., Jiang, Y., and Ma, Y. (2020). Optimistic dual extrapolation for coherent non-monotone variational inequalities. NeurIPS [3] Loizou, N., Berard, H., Gidel, G., Mitliagkas, I., and Lacoste-Julien, S. (2021). Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity. NeurIPS [4] Loizou, N., Berard, H., Jolicoeur-Martineau, A., Vincent, P., Lacoste-Julien, S., and Mitliagkas, I. (2020). Stochastic hamiltonian gradient methods for smooth games. ICML [5] Kannan, A. and Shanbhag, U. V. (2019). Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants. Computational Optimization and Applications, 74(3):779–820. [6] Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, and Nicolas Loizou. Stochastic gradient descent-ascent: Unified theory and new efficient methods. arXiv preprint arXiv:2202.07262, 2022. Clarity, Quality, Novelty And Reproducibility Please see the above review for further details.
ICLR
Title Solving stochastic weak Minty variational inequalities without increasing batch size Abstract This paper introduces a family of stochastic extragradient-type algorithms for a class of nonconvex-nonconcave problems characterized by the weak Minty variational inequality (MVI). Unlike existing results on extragradient methods in the monotone setting, employing diminishing stepsizes is no longer possible in the weak MVI setting. This has led to approaches such as increasing batch sizes per iteration which can however be prohibitively expensive. In contrast, our proposed methods involves two stepsizes and only requires one additional oracle evaluation per iteration. We show that it is possible to keep one fixed stepsize while it is only the second stepsize that is taken to be diminishing, making it interesting even in the monotone setting. Almost sure convergence is established and we provide a unified analysis for this family of schemes which contains a nonlinear generalization of the celebrated primal dual hybrid gradient algorithm. N/A 1 Introduction Stochastic first-order methods have been at the core of the current success in deep learning applications. These methods are mostly well-understood for minimization problems at this point. This is even the case in the nonconvex setting where there exists matching upper and lower bounds on the complexity for finding an approximately stable point (Arjevani et al., 2019). The picture becomes less clear when moving beyond minimization into nonconvex-nonconcave minimax problems—or more generally nonmonotone variational inequalities. Even in the deterministic case, finding a stationary point is in general intractable (Daskalakis et al., 2021; Hirsch & Vavasis, 1987). This is in stark contrast with minimization where only global optimality is NP-hard. An interesting nonmonotone class for which we do have efficient algorithms is characterized by the so called weak Minty variational inequality (MVI) (Diakonikolas et al., 2021). This problem class captures nontrivial structures such as attracting limit cycles and is governed by a parameter ρ whose negativity increases the degree of nonmonotonicity. It turns out that the stepsize γ for the exploration step in extragradient-type schemes lower bounds the problem class through ρ > −γ/2 (Pethick et al., 2022). In other words, it seems that we need to take γ large to guarantee convergence for a large class. This reliance on a large stepsize is at the core of why the community has struggled to provide a stochastic variants for weak MVIs. The only known results effectively increase the batch size at every iteration (Diakonikolas et al., 2021, Thm. 4.5)—a strategy that would be prohibitively expensive in most machine learning applications. Pethick et al. (2022) proposed (SEG+) which attempts to tackle the noise by only diminishing the second stepsize. This suffices in the special case of unconstrained quadratic games but can fail even in the monotone case as illustrated in Figure 1. This naturally raises the following research question: Can stochastic weak Minty variational inequalities be solved without increasing the batch size? We resolve this open problem in the affirmative when the stochastic oracles are Lipschitz in mean, with a modification of stochastic extragradient called bias-corrected stochastic extragradient (BCSEG+). The scheme only requires one additional first order oracle call, while crucially maintaining the fixed stepsize. 
Specifically, we make the following contributions: ∗Laboratory for Information and Inference Systems (LIONS), EPFL ([email protected]) †Laboratoire Traitement et Communication d’Information, Télécom Paris, Institut Polytechnique de Paris ‡Department of Electrical Engineering (ESAT-STADIUS), KU Leuven (i) We show that it is possible to converge for weak MVI without increasing the batch size, by introducing a bias-correction term. The scheme introduces no additional hyperparameters and recovers the maximal range ρ ∈ (−γ/2,∞) of explicit deterministic schemes. The rate we establish is interesting already in the star-monotone case where only asymptotic convergence of the norm of the operator was known when refraining from increasing the batch size (Hsieh et al., 2020, Thm. 1). Our result additionally carries over to another class of problem treated in Appendix G, which we call negative weak MVIs. (ii) We generalize the result to a whole family of schemes that can treat constrained and regularized settings. First and foremost the class includes a generalization of the forward-backwardforward (FBF) algorithm of Tseng (2000) to stochastic weak MVIs. The class also contains a stochastic nonlinear extension of the celebrated primal dual hybrid gradient (PDHG) algorithm (Chambolle & Pock, 2011). Both methods are obtained as instantiations of the same template scheme, thus providing a unified analysis and revealing an interesting requirement on the update under weak MVI when only stochastic feedback is available. (iii) We prove almost sure convergence under the classical Robbins-Monro stepsize schedule of the second stepsize. This provides a guarantee on the last iterate, which is especially important in the nonmonotone case, where average guarantees cannot be converted into a single candidate solution. Almost sure convergence is challenging already in the monotone case where even stochastic extragradient may not converge (Hsieh et al., 2020, Fig. 1). 2 Related work Weak MVI Diakonikolas et al. (2021) was the first to observe that an extragradient-like scheme called extragradient+ (EG+) converges globally for weak MVIs with ρ ∈ (−1/8LF ,∞). This results was later tightened to ρ ∈ (−1/2LF ,∞) and extended to constrained and regularized settings in (Pethick et al., 2022). A single-call variant has been analysed in Böhm (2022). Weak MVI is a star variant of cohypomonotonicity, for which an inexact proximal point method was originally studied in Combettes & Pennanen (2004). Later, a tight characterization was carried out by Bauschke et al. (2021) for the exact case. It was shown that acceleration is achievable for an extragradient-type scheme even for cohypomonotone problems (Lee & Kim, 2021). Despite this array of positive results the stochastic case is largely untreated for weak MVIs. The only known result (Diakonikolas et al., 2021, Theorem 4.5) requires the batch size to be increasing. Similarly, the accelerated method in Lee & Kim (2021, Thm. 6.1) requires the variance of the stochastic oracle to decrease as O(1/k). Stochastic & monotone When more structure is present the story is different since diminishing stepsizes becomes permissible. In the monotone case rates for the gap function was obtained for stochastic Mirror-Prox in Juditsky et al. (2011) under bounded domain assumption, which was later relaxed for the extragradient method under additional assumptions (Mishchenko et al., 2020). The norm of the operator was shown to asymptotically converge for unconstrained MVIs in Hsieh et al. 
(2020) with a double stepsize policy. There exists a multitude of extensions for monotone problems: Single-call stochastic methods are covered in detail by Hsieh et al. (2019), variance reduction was applied to Halpern-type iterations (Cai et al., 2022), cocoercivity was used in Beznosikov et al. (2022), and bilinear games studied in Li et al. (2022). Beyond monotonicity, a range of structures have been explored such as MVIs (Song et al., 2020), pseudomonotonicity (Kannan & Shanbhag, 2019; Boţ et al., 2021), two-sided Polyak-Łojasiewicz condition (Yang et al., 2020), expected cocoercivity (Loizou et al., 2021), sufficiently bilinear (Loizou et al., 2020), and strongly star-monotone (Gorbunov et al., 2022). Variance reduction The assumptions we make about the stochastic oracle in Section 3 are similar to what is found in the variance reduction literature (see for instance Alacaoglu & Malitsky (2021, Assumption 1) or Arjevani et al. (2019)). However, our use of the assumption are different in a crucial way. Whereas the variance reduction literature uses the stepsize γ ∝ 1/LF̂ (see e.g. Alacaoglu & Malitsky (2021, Theorem 2.5)), we aim at using the much larger γ ∝ 1/LF . For instance, in the special case of a finite sum problem of size N, the mean square smoothness constant LF̂ from Assumption III can be √ N times larger than LF (see Appendix I for details). This would lead to a prohibitively strict requirement on the degree of allowed nonmonotonicity through the relationship ρ > −γ/2. Bias-correction The idea of adding a correction term has also been exploited in minimization, specifically in the context of compositional optimization Chen et al. (2021). Due to their distinct problem setting it suffices to simply extend stochastic gradient descent (SGD), albeit under additional assumptions such as (Chen et al., 2021, Assumption 3). In our setting, however, SGD is not possible even when restricting ourselves to monotone problems. 3 Problem formulation and preliminaries We are interested in finding z ∈ n such that the following inclusion holds, 0 ∈ Tz := Az + Fz. (3.1) A wide range of machine learning applications can be cast as an inclusion. Most noticeable, a structured minimax problem can be reduced to (3.1) as shown in Section 8.1. We will rely on common notation and concepts from monotone operators (see Appendix B for precise definitions). Assumption I. In problem (3.1), (i) The operator F : n → n is LF-Lipschitz with LF ∈ [0,∞), i.e., ∥Fz − Fz′∥ ≤ LF∥z − z′∥ ∀z, z′ ∈ n. (3.2) (ii) The operator A : n ⇒ n is a maximally monotone operator. (iii) Weak Minty variational inequality (MVI) holds, i.e., there exists a nonempty set S⋆ ⊆ zer T such that for all z⋆ ∈ S⋆ and some ρ ∈ (− 12LF ,∞) ⟨v, z − z⋆⟩ ≥ ρ∥v∥2, for all (z, v) ∈ gph T. (3.3) Remark 1. In the unconstrained and smooth case (A ≡ 0), Assumption I(iii) reduces to ⟨Fz, z−z⋆⟩ ≥ ρ∥Fz∥2 for all z ∈ n. When ρ = 0 this condition reduces to the MVI (i.e. star-monotonicity), while negative ρ makes the problem increasingly nonmonotone. Interestingly, the inequality is not symmetric and one may instead consider that the assumption holds for −F. Through this observation, Appendix G extends the reach of the extragradient-type algorithms developed for weak MVIs. Stochastic oracle We assume that we cannot compute Fz easily, but instead we have access to the stochastic oracle F̂(z, ξ), which we assume is unbiased with bounded variance. 
We additionally assume that z 7→ F̂(z, ξ) is LF̂ Lipschitz continuous in mean and that it can be simultaneously queried under the same randomness. Assumption II. For the operator F̂(·, ξ) : n → n the following holds. (i) Two-point oracle: The stochastic oracle can be queried for any two points z, z′ ∈ n, F̂(z, ξ), F̂(z′, ξ) where ξ ∼ P. (3.4) (ii) Unbiased: Eξ [ F̂(z, ξ) ] = Fz ∀z ∈ n. (iii) Bounded variance: Eξ [ ∥F̂(z, ξ) − F̂(z)∥2 ] ≤ σ2F ∀z ∈ n. Assumption III. The operator F̂(·, ξ) : n → n is Lipschitz continuous in mean with LF̂ ∈ [0,∞): Eξ [ ∥F̂(z, ξ) − F̂(z′, ξ)∥2 ] ⩽ L2 F̂ ∥z − z′∥2 for all z, z′ ∈ n. (3.5) Remark 2. Assumptions II(i) and III are also common in the variance reduction literature (Fang et al., 2018; Nguyen et al., 2019; Alacaoglu & Malitsky, 2021), but in contrast with variance reduction we will not necessarily need knowledge of LF̂ to specify the algorithm, in which case the problem constant will only affect the complexity. Crucially, this decoupling of the stepsize from LF̂ will allow the proposed scheme to converge for a larger range of ρ in Assumption I(iii). Finally, note that Assumption II(i) commonly holds in machine learning applications, where usually the stochasticity is induced by the sampled mini-batch. 4 Method To arrive at a stochastic scheme for weak MVI we first need to understand the crucial ingredients in the deterministic setting. For simplicity we will initially consider the unconstrained and smooth Algorithm 1 (BC-SEG+) Stochastic algorithm for problem (3.1) when A ≡ 0 Require z−1 = z̄−1 = z0 ∈ n αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) Repeat for k = 0, 1, . . . until convergence 1.1: Sample ξk ∼ P 1.2: z̄k = zk − γF̂(zk, ξk) + (1 − αk) ( z̄k−1 − zk−1 + γF̂(zk−1, ξk) ) 1.3: Sample ξ̄k ∼ P 1.4: zk+1 = zk − αkγF̂(z̄k, ξ̄k) Return zk+1 setting, i.e. A ≡ 0 in (3.1). The first component is taking the second stepsize α smaller as done in extragradient+ (EG+), z̄k = zk − γFzk zk+1 = zk − αγFz̄k (EG+) where α ∈ (0, 1). Convergence in weak MVI was first shown in Diakonikolas et al. (2021) and later tightened by Pethick et al. (2022), who characterized that smaller α allows for a larger range of the problem constant ρ. Taking α small is unproblematic for a stochastic scheme where usually the stepsize is taken diminishing regardless. However, Pethick et al. (2022) also showed that the extrapolation stepsize γ plays a critical role for convergence under weak MVI. Specifically, they proved that a larger stepsize γ leads to a looser bound on the problem class through ρ > −γ/2. While a lower bound has not been established we provide an example in Figure 3 of Appendix H where small stepsize prevents convergence. Unfortunately, picking γ large (e.g. as γ = 1/LF) causes significant complications in the stochastic case where both stepsizes are usually taken to be diminishing as in the following scheme, z̄k = zk − βkγF̂(zk, ξk) with ξk ∼ P zk+1 = zk − αkγF̂(z̄k, ξ̄k) with ξ̄k ∼ P (SEG) where αk = βk ∝ 1/k. Even with a two-timescale variant (when βk > αk) it has only been possible to show convergence for MVI (i.e. when ρ = 0) (Hsieh et al., 2020). Instead of decreasing both stepsizes, Pethick et al. (2022) proposes a scheme that keeps the first stepsize constant, z̄k = zk − γF̂(zk, ξk) with ξk ∼ P zk+1 = zk − αkγF̂(z̄k, ξ̄k) with ξ̄k ∼ P (SEG+) However, (SEG+) does not necessarily converge even in the monotone case as we illustrate in Figure 1. The non-convergence stems from the bias term introduced by the randomness of z̄k in F̂(z̄k, ξ̄k). 
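The bias can be made tangible with a small Monte-Carlo check. The one-dimensional cubic operator, the noise level, and the stepsize below are assumptions chosen purely for illustration: z̄ = z − γF̂(z, ξ) is an unbiased estimate of ˜z̄ = z − γFz, yet F̂(z̄, ξ̄) is visibly biased as an estimate of F(˜z̄) once F is nonlinear.

```python
import numpy as np

rng = np.random.default_rng(3)

def F(z):
    return z ** 3                       # nonlinear toy operator (illustration only)

gamma, sigma, z = 0.5, 0.5, 1.0
z_bar_det = z - gamma * F(z)            # deterministic exploration point ˜z̄

samples = 10 ** 6
xi = rng.normal(0.0, sigma, samples)    # F̂(z, ξ) = F(z) + ξ, additive Gaussian noise
xi_bar = rng.normal(0.0, sigma, samples)
z_bar = z - gamma * (F(z) + xi)         # stochastic exploration points z̄
second_eval = F(z_bar) + xi_bar         # F̂(z̄, ξ̄)

print("E[z̄]        ≈", z_bar.mean(), "   (matches ˜z̄ =", z_bar_det, ")")
print("F(˜z̄)        =", F(z_bar_det))
print("E[F̂(z̄, ξ̄)] ≈", second_eval.mean(), "  <- does not match F(˜z̄): the bias")
```

For a linear F the third quantity would match the second, which is exactly the special case in which (SEG+) itself can be analyzed (Section 5); the bias-correction term of BC-SEG+ is what removes this gap in general.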
Intuitively, the role of z̄k is to approximate the deterministic exploration step ˜̄zk := zk − γFzk. While z̄k is an unbiased estimate of ˜̄zk this does not imply that F̂(z̄k, ξ̄k) is an unbiased estimate of F(˜̄zk). Unbiasedness only holds in special cases, such as when F is linear and A ≡ 0 for which we show convergence of (SEG+) in Section 5 under weak MVI. In the monotone case it suffice to take the exploration stepsize γ diminishing (Hsieh et al., 2020, Thm. 1), but this runs counter to the fixed stepsize requirement of weak MVI. Instead we propose bias-corrected stochastic extragradient+ (BC-SEG+) in Algorithm 1. BC-SEG+ adds a bias correction term of the previous operator evaluation using the current randomness ξk. This crucially allows us to keep the first stepsize fixed. We further generalize this scheme to constrained and regularized setting with Algorithm 2 by introducing the use of the resolvent, (id + γA)−1. 5 Analysis of SEG+ In the special case where F is affine and A ≡ 0 we can show convergence of (SEG+) under weak MVI up to arbitrarily precision even with a large stepsize γ. Theorem 5.1. Suppose that Assumptions I and II hold. Assume Fz := Bz + v and choose αk ∈ (0, 1) and γ ∈ (0, 1/LF) such that ρ ≥ γ(αk − 1)/2. Consider the sequence (zk)k∈ generated by (SEG+). Then for all z⋆ ∈ S⋆, K∑ k=0 αk∑K j=0 α j E∥Fzk∥2 ≤ ∥z 0−z⋆∥2+γ2(γ2L2F+1)σ2F ∑K j=0 α 2 j γ2(1−γ2L2F ) ∑K j=0 α j . (5.1) The underlying reason for this positive results is that F̂(z̄k, ξ̄k) is unbiased when F is linear. This no longer holds when either linearity of F is dropped or when the resolvent is introduced for A . 0, in which case the scheme only converges to a γ-dependent neighborhood as illustrated in Figure 1. This is problematic in weak MVI where γ cannot be taken arbitrarily small (see Figure 3 of Appendix H). 6 Analysis for unconstrained and smooth case For simplicity we first consider the case where A ≡ 0. To mitigate the bias introduced in F(z̄k, ξ̄k) for (SEG+), we propose Algorithm 1 which modifies the exploration step. The algorithm can be seen as a particular instance of the more general scheme treated in Section 7. Theorem 6.1. Suppose that Assumptions I to III hold. Suppose in addition that γ ∈ (⌊−2ρ⌋+, 1/LF) and (αk)k∈ ⊂ (0, 1) is a diminishing sequence such that 2γLF̂ √ α0 + ( 1 + ( 1+γ2L2F 1−γ2L2F γ2L2F ) γ2L2 F̂ ) α0 ≤ 1 + 2ργ . (6.1) Then, the following estimate holds for all z⋆ ∈ S⋆ E[∥F(zk⋆ )∥2] ≤ (1 + ηγ2L2F)∥z0 − z⋆∥2 +Cσ2Fγ2 ∑K j=0 α 2 j µ ∑K j=0 α j (6.2) where C = 1 + 2η ( (γ2L2 F̂ + 1) + 2α0 ) , η = 12 1+γ2L2F 1−γ2L2F γ2L2F + 1 γLF̂ √ α0 , µ = γ2(1 − γ2L2F)/2 and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 6.2. As α0 → 0, the requirement (6.1) reduces to ρ > −γ/2 as in the deterministic setting of Pethick et al. (2022). Letting αk = α0/ √ k+r the rate becomes O(1/√k), thus matching the rate for the gap function of stochastic extragradient in the monotone case (see e.g. Juditsky et al. (2011)). The above result provides a rate for a random iterate as pioneered by Ghadimi & Lan (2013). Showing last iterate results even asymptotically is more challenging. Already in the monotone case, vanilla (SEG) (where βk = αk) only has convergence guarantees for the average iterate (Juditsky et al., 2011). In fact, the scheme can cycle even in simple examples (Hsieh et al., 2020, Fig. 1). 
Under the classical (but more restrictive) Robbins-Monro stepsize policy, it is possible to show almost sure convergence for the iterates generates by Algorithm 1. The following theorem demonstrates the result in the particular case of αk = 1/k+r. The more general statement is deferred to Appendix D. Theorem 6.3 (almost sure convergence). Suppose that Assumptions I to III hold. Suppose γ ∈ (⌊−2ρ⌋+, 1/LF), αk = 1k+r for any positive natural number r and (γLF̂ + 1)αk + 2 ( 1+γ2L2F 1−γ2L2F γ4L2F L 2 F̂ αk+1 + γLF̂ ) (αk+1 + 1)αk+1 ≤ 1 + 2ργ . (6.3) Algorithm 2 (BC-PSEG+) Stochastic algorithm for problem (3.1) Require z−1 = z0 ∈ n, h−1 ∈ n, αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) Repeat for k = 0, 1, . . . until convergence 2.1: Sample ξk ∼ P 2.2: hk = ( zk − γF̂(zk, ξk) ) + (1 − αk) ( hk−1 − (zk−1 − γF̂(zk−1, ξk))) 2.3: z̄k = (id + γA)−1hk 2.4: Sample ξ̄k ∼ P 2.5: zk+1 = zk − αk ( hk − z̄k + γF̂(z̄k, ξ̄k) ) Return zk+1 Then, the sequence (zk)k∈ generated by Algorithm 1 converges almost surely to some z ⋆ ∈ zer T. Remark 6.4. As αk → 0 the condition on ρ reduces to ρ > −γ/2 like in the deterministic case. To make the results more accessible, both theorems have made particular choices of the free parameters from the proof, that ensures convergence for a given ρ and γ. However, since the parameters capture inherent tradeoffs, the choice above might not always provide the tightest rate. Thus, the more general statements of the theorems have been preserved in the appendix. 7 Analysis for constrained case The result for the unconstrained smooth case can be extended when the resolvent is available. Algorithm 2 provides a direct generalization of the unconstrained Algorithm 1. The construction relies on approximating the deterministic algorithm proposed in Pethick et al. (2022), which iteratively projects onto a half-space which is guaranteed to contain the solutions. By defining Hz = z − γFz, the scheme can concisely be written as, z̄k = (I + γA)−1(Hzk) zk+1 = zk − αk(Hzk − Hz̄k), (CEG+) for a particular adaptive choice of αk ∈ (0, 1). With a fair amount of hindsight we choose to replace Hzk with the bias-corrected estimate hk (as defined in Step 2.2 in Algorithm 2), such that the estimate is also reused in the second update. Theorem 7.1. Suppose that Assumptions I to III hold. Moreover, suppose that αk ∈ (0, 1), γ ∈ (⌊−2ρ⌋+, 1/LF) and the following holds, µ B 1−√α0 1+ √ α0 − α0(1 + 2γ2L2F̂η) + 2ρ γ > 0 (7.1) where η ≥ 1√ α0(1−γ2L2F ) + 1−√α0√ α0 . Consider the sequence (zk)k∈ generated by Algorithm 2. Then, the following estimate holds for all z⋆ ∈ S⋆ E[dist(0,T z̄k⋆ )2] ≤ E[∥z0 − z⋆∥2] + ηE[∥h−1 − Hz−1∥2] +Cγ2σ2F ∑K j=0 α 2 j γ2µ ∑K j=0 α j where C = 1 + 2η(1 + γ2L2 F̂ ) + 2α0η and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 3. The condition on ρ in (7.1) reduces to ρ > −γ/2 when α0 → 0 as in the deterministic case. As oppose to Theorem 6.3 which tracks ∥Fzk∥2, the convergence measure of Theorem 7.1 reduces to dist(0,T z̄k)2 = ∥Fz̄k∥2 when A ≡ 0. Since Algorithm 1 and Algorithm 2 coincide when A ≡ 0, Theorem 7.1 also applies to Algorithm 1 in the unconstrained case. Consequently, we obtain rates for both ∥Fz̄k∥2 and ∥Fzk∥2 in the unconstrained smooth case. 8 Asymmetric & nonlinear preconditioning In this section we show that the family of stochastic algorithms which converges under weak MVI can be expanded beyond Algorithm 2. 
This is achieved by extending (CEG+) through introducing Algorithm 3 Nonlinearly preconditioned primal dual extragradient (NP-PDEG) for solving (8.5) Require z−1 = z0 = (x0, y0) with x0, x−1, x̂−1, x̄−1 ∈ n, y0, y−1 ∈ r, θ ∈ [0,∞), Γ1 ≻ 0, Γ2 ≻ 0 Repeat for k = 0, 1, . . . until convergence 3.1: ξk ∼ P 3.2: x̂k = xk − Γ1∇xφ̂(zk, ξk) + (1 − αk) ( x̂k−1 − xk−1 + Γ1∇xφ̂(xk−1, yk−1, ξk) ) 3.3: x̄k = proxΓ −1 1 f ( x̂k ) 3.4: ξ′k ∼ P 3.5: ŷk = yk + Γ2 ( θ∇yφ̂(x̄k, yk, ξ′k) + (1 − θ)∇yφ̂(zk, ξk) ) 3.6: +(1 − αk) ( ŷk−1 − yk−1 − Γ2 ( θ∇yφ̂(x̄k−1, yk−1, ξ′k) + (1 − θ)∇yφ̂(zk−1, ξk) )) 3.7: ȳk = proxΓ −1 2 g ( ŷk ) 3.8: ξ̄k ∼ P 3.9: xk+1 = xk + αk ( x̄k − x̂k − Γ1∇xφ̂(z̄k, ξ̄k) ) 3.10: yk+1 = yk + αk ( ȳk − ŷk + Γ2∇yφ̂(z̄k, ξ̄k) ) Return zk+1 = (xk+1, yk+1) a nonlinear and asymmetrical preconditioning. Asymmetrical preconditioning has been used in the literature to unify a large range of algorithm in the monotone setting Latafat & Patrinos (2017). A subtle but crucial difference, however, is that the preconditioning considered here depends nonlinearly on the current iterate. As it will be shown in Section 8.1 this nontrivial feature is the key for showing convergence for primal-dual algorithms in the nonmonotone setting. Consider the following generalization of (CEG+) by introducing a potentially asymmetric nonlinear preconditioning Pzk that depends on the current iterate zk. find z̄k such that Hzk (zk) ∈ Pzk (z̄k) + A(z̄k), (8.1a) update zk+1 = zk + αΓ ( Hzk (z̄k) − Hzk (zk) ) . (8.1b) where Hu(v) B Pu(v) − F(v) and Γ is some positive definite matrix. The iteration independent and diagonal choice Pzk = γ−1I and Γ = γI correspond to the basic (CEG+). More generally we consider Pu(z) B Γ−1z + Qu(z) (8.2) where Qu(z) captures the nonlinear and asymmetric part, which ultimately enables alternating updates and relaxing the Lipschitz conditions (see Remark 8.1(ii)). Notice that the iterates above does not always yield well-defined updates and one must inevitably impose additional structures on the preconditioner (we provide sufficient condition in Appendix F.1). Consistently with (8.2), in the stochastic case we define P̂u(z, ξ) B Γ−1z + Q̂u(z, ξ). (8.3) The proposed stochastic scheme, which introduces a carefully chosen bias-correction term, is summarized as compute hk = P̂zk (zk, ξk) − F̂(zk, ξk) + (1 − αk) ( hk−1 − P̂zk−1 (zk−1, ξk) + F̂(zk−1, ξk) (8.4a) − Q̂zk−1 (z̄k−1, ξ′k−1) + Q̂zk−1 (z̄k−1, ξ′k) ) with ξk, ξ′k ∼ P find z̄k such that hk ∈ P̂zk (z̄k, ξ′k) + Az̄k (8.4b) update zk+1 = zk + αkΓ ( P̂zk (z̄k, ξ̄k) − F̂(z̄k, ξ̄k) − hk ) with ξ̄k ∼ P (8.4c) Remark 4. The two additional terms in (8.4a) are due to the interesting interplay between weak MVI and stochastic feedback, which forces a change of variables (see Appendix F.4). To make a concrete choice of Q̂u(z, ξ) we will consider a minimax problem as a motivating example (see Appendix F.1 for a more general setup). 8.1 Nonlinearly preconditioned primal dual hybrid gradient We consider the problem of minimize x∈ n maximize y∈ r f (x) + φ(x, y) − g(y). (8.5) where φ(x, y) := Eξ[φ̂(x, y, ξ)]. The first order optimality conditions may be written as the inclusion 0 ∈ Tz B Az + Fz, where A = (∂ f , ∂g), F(z) = (∇xφ(z),−∇yφ(z)), (8.6) while the algorithm only has access to the stochastic estimates F̂(z, ξ) B (∇xφ̂(z, ξ),−∇yφ̂(z, ξ)). Assumption IV. 
For problem (8.5), let the following hold with a stepsize matrix Γ = blkdiag(Γ1,Γ2) where Γ1 ∈ n and Γ2 ∈ r are symmetric positive definite matrices: (i) f , g are proper lsc convex (ii) φ : n+r → is continuously differentiable and for some symmetric positive definite matrices Dxx,Dxy,Dyx,Dyy, the following holds for all z = (x, y), z′ = (x′, y′) ∈ n+r ∥∇xφ(z′) − ∇xφ(z)∥2Γ1 ≤ L 2 xx∥x′ − x∥2Dxx + L 2 xy∥y′ − y∥2Dxy , ∥∇yφ(z′) − θ∇yφ(x′, y) − (1 − θ)∇yφ(z)∥2Γ2 ≤ L 2 yx∥x′ − x∥2Dyx + L 2 yy∥y′ − y∥2Dyy . (iii) Stepsize condition: L2xxDxx + L 2 yxDyx ≺ Γ−11 and L2xyDxy + L2yyDyy ≺ Γ−12 . (iv) Bounded variance: Eξ [ ∥F̂(z, ξ) − F̂(z′, ξ)∥2 Γ ] ≤ σ2F ∀z, z′ ∈ n. (v) φ̂(·, ξ) : n+r → is continuously differentiable and for some symmetric positive definite matrices Dx̂z,Dŷz,Dŷx,Dŷy, the following holds for all z = (x, y), z′ = (x′, y′) ∈ n+r and v, v′ ∈ n for θ ∈ [0,∞): Eξ [ ∥∇xφ̂(z′, ξ) − ∇xφ̂(z, ξ)∥2Γ1 ] ≤ L2x̂z∥z ′ − z∥2Dx̂z if θ , 1: Eξ [ ∥∇yφ̂(z, ξ) − ∇yφ̂(z′, ξ)∥2Γ2 ] ≤ L2ŷz∥z ′ − z∥2Dŷz if θ , 0: Eξ [ ∥∇yφ̂(v′, y′, ξ) − ∇yφ̂(v, y, ξ)∥2Γ2 ] ≤ L2ŷx∥v ′ − v∥2Dŷx + L 2 ŷy∥y ′ − y∥2Dŷy . Remark 8.1. In Algorithm 3 the choice of θ ∈ [0,∞) leads to different algorithmic oracles and underlying assumptions in terms of Lipschitz continuity in Assumptions IV(ii) and IV(v). (i) If θ = 0 then the first two steps may be computed in parallel and we recover Algorithm 2. Moreover, to ensure Assumption IV(ii) in this case it suffices to assume for Lx, Ly ∈ [0,∞), ∥∇xφ(z′) − ∇xφ(z)∥ ≤ Lx∥z′ − z∥, ∥∇yφ(z′) − ∇yφ(z)∥ ≤ Ly∥z′ − z∥. (ii) Taking θ = 1 leads to Gauss-Seidel updates and a nonlinear primal dual extragradient algorithm with sufficient Lipschitz continuity assumptions for some Lx, Ly ∈ [0,∞), ∥∇xφ(z′) − ∇xφ(z)∥ ≤ Lx∥z′ − z∥, ∥∇yφ(z′) − ∇yφ(x′, y)∥ ≤ Ly∥y′ − y∥. Algorithm 3 is an application of (8.4) applied for solving (8.6). In order to cast the algorithm as an instance of the template algorithm (8.4), we choose the positive definite stepsize matrix as Γ = blkdiag(Γ1,Γ2) with Γ1 ≻ 0, Γ2 ≻ 0, and the nonlinear part of the preconditioner as Q̂u(z̄, ξ) B ( 0,−θ∇yφ̂(x̄, y, ξ) ) , and Qu(z̄) B ( 0,−θ∇yφ(x̄, y) ) (8.7) where u = (x, y) and z̄ = (x̄, ȳ). Recall Hu(z) B Pu(z) − F(z) and define S u(z; z̄) B Hu(z) − Qu(z̄). The convergence in Theorem 8.2 depends on the distance between the initial estimate Γ−1ẑ−1 with ẑ−1 = (x̂−1, ŷ−1) and the deterministic S z−1 (z−1; z̄−1). See Appendix B for additional notation. Theorem 8.2. Suppose that Assumption I(iii) to II(ii) and IV hold. Moreover, suppose that αk ∈ (0, 1), θ ∈ [0,∞) and the following holds, µ B 1−√α0 1+ √ α0 + 2ρ γ̄ − α0 − 2α0(ĉ1 + 2ĉ2(1 + ĉ3))η > 0 and 1 − 4ĉ2α0 > 0 (8.8) where γ̄ denotes the smallest eigenvalue of Γ, η ≥ (1 + 4ĉ2α20)( 1√ α0(1−LM )2 + 1−√α0√ α0 )/(1 − 4ĉ2α0) and ĉ1 B L2x̂z∥ΓDx̂z∥ + 2(1 − θ) 2L2ŷz∥ΓDŷz∥ + 2θ 2L2ŷy∥Γ2Dŷy∥, ĉ2 B 2θ 2L2ŷx∥Γ1Dŷx∥, ĉ3 B L 2 x̂z∥ΓDx̂z∥, L2M B max { L2xx∥DxxΓ1∥ + L2yx∥DyxΓ1∥, ∥L2xy∥DxyΓ2∥ + L2yy∥DyyΓ2∥ } . Consider the sequence (zk)k∈ generated by Algorithm 3. Then, the following holds for all z ⋆ ∈ S⋆ E[distΓ(0,T z̄k⋆ )2] ≤ E[∥z0 − z⋆∥2 Γ−1 ] + ηE[∥Γ−1ẑ−1 − S z−1 (z−1; z̄−1)∥2Γ] +Cσ2F ∑K j=0 α 2 j µ ∑K j=0 α j where C B 2(η+α0( 1√α0(1−LM )2 + 1−√α0√ α0 ))(1+ 2ĉ2)+ 1+ 2(ĉ1 + 2ĉ2(Θ+ ĉ3))η with Θ = (1− θ)2 + 2θ2 and k⋆ is chosen from {0, 1, . . . ,K} according to probability P[k⋆ = k] = αk∑K j=0 α j . Remark 5. When α0 → 0 the conditions in (8.2) reduces to 1 + 2ργ̄ > 0 as in the deterministic case. For θ = 0 Algorithm 3 reduces to Algorithm 2. 
With this choice Theorem 8.2 simplifies, since the constant ĉ2 = 0, and we recover the convergence result of Theorem 7.1.

9 Experiments

We compare BC-SEG+ and BC-PSEG+ against (EG+) using stochastic feedback (which we refer to as (SF-EG+)) and (SEG) in both an unconstrained setting and a constrained setting introduced in Pethick et al. (2022). See Appendix H.2 for the precise formulation of the projected variants, which we denote (SF-PEG+) and (PSEG) respectively. In the unconstrained example we control all problem constants and set ρ = −1/(10LF), while the constrained example is a specific minimax problem where ρ > −1/(2LF) holds within the constrained set for a Lipschitz constant LF restricted to the same constrained set. To simulate a stochastic setting in both examples, we consider additive Gaussian noise, i.e. F̂(z, ξ) = Fz + ξ where ξ ∼ N(0, σ²I). In the experiments we choose σ = 0.1 and αk ∝ 1/k, which ensures almost sure convergence of BC-(P)SEG+. For a more aggressive stepsize choice αk ∝ 1/√k see Figure 4. Further details can be found in Appendix H. The results are shown in Figure 2. The sequences generated by (SEG) and (PSEG) diverge for the unconstrained problem and cycle in the constrained problem, respectively. In comparison, (SF-EG+) and (SF-PEG+) get within a neighborhood of the solutions but fail to converge due to the non-diminishing stepsize, while BC-SEG+ and BC-PSEG+ converge in both examples.

10 Conclusion

This paper shows that nonconvex-nonconcave problems characterized by the weak Minty variational inequality can be solved efficiently even when only stochastic gradients are available. The approach crucially avoids increasing batch sizes by instead introducing a bias-correction term. We show that convergence is possible for the same range of the problem constant ρ ∈ (−γ/2, ∞) as in the deterministic case. Rates are established for a random iterate, matching those of stochastic extragradient in the monotone case, and the result is complemented with almost sure convergence, thus providing asymptotic convergence for the last iterate. We show that the idea extends to a family of extragradient-type methods which includes a nonlinear extension of the celebrated primal-dual hybrid gradient (PDHG) algorithm. For future work it is interesting to see whether the rate can be improved by considering accelerated methods such as Halpern iterations.

11 Acknowledgments and disclosure of funding

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement n° 725594 - time-data). This work was supported by the Swiss National Science Foundation (SNSF) under grant number 200021_205011. The work of the third and fourth author was supported by the Research Foundation Flanders (FWO) postdoctoral grant 12Y7622N and research projects G081222N, G033822N, G0A0920N; Research Council KU Leuven C1 project No. C14/18/068; European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 953348. The work of Olivier Fercoq was supported by the Agence National de la Recherche grant ANR-20-CE40-0027, Optimal Primal-Dual Algorithms (APDO).

References
Ahmet Alacaoglu and Yura Malitsky. Stochastic variance reduction for variational inequality methods. arXiv preprint arXiv:2102.08352, 2021. Yossi Arjevani, Yair Carmon, John C Duchi, Dylan J Foster, Nathan Srebro, and Blake Woodworth. Lower bounds for non-convex stochastic optimization.
arXiv preprint arXiv:1912.02365, 2019. Heinz H. Bauschke and Patrick L. Combettes. Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics. Springer, 2017. ISBN 978-3-319-48310-8. Heinz H Bauschke, Walaa M Moursi, and Xianfu Wang. Generalized monotone operators and their averaged resolvents. Mathematical Programming, 189(1):55–74, 2021. Dimitri P. Bertsekas. Incremental proximal methods for large scale convex optimization. Mathematical programming, 129(2):163–195, 2011. Aleksandr Beznosikov, Eduard Gorbunov, Hugo Berard, and Nicolas Loizou. Stochastic gradient descent-ascent: Unified theory and new efficient methods. arXiv preprint arXiv:2202.07262, 2022. Axel Böhm. Solving nonconvex-nonconcave min-max problems exhibiting weak minty solutions. arXiv preprint arXiv:2201.12247, 2022. Radu Ioan Boţ, Panayotis Mertikopoulos, Mathias Staudigl, and Phan Tu Vuong. Minibatch forwardbackward-forward methods for solving stochastic variational inequalities. Stochastic Systems, 11 (2):112–139, 2021. Xufeng Cai, Chaobing Song, Cristóbal Guzmán, and Jelena Diakonikolas. A stochastic Halpern iteration with variance reduction for stochastic monotone inclusion problems. arXiv preprint arXiv:2203.09436, 2022. A. Chambolle and T. Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. Journal of Mathematical Imaging and Vision, 40(1):120–145, 2011. Tianyi Chen, Yuejiao Sun, and Wotao Yin. Solving stochastic compositional optimization is nearly as easy as solving stochastic optimization. IEEE Transactions on Signal Processing, 69:4937– 4948, 2021. Patrick L Combettes and Teemu Pennanen. Proximal methods for cohypomonotone operators. SIAM journal on control and optimization, 43(2):731–742, 2004. Constantinos Daskalakis, Stratis Skoulakis, and Manolis Zampetakis. The complexity of constrained min-max optimization. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pp. 1466–1478, 2021. Jelena Diakonikolas, Constantinos Daskalakis, and Michael Jordan. Efficient methods for structured nonconvex-nonconcave min-max optimization. In International Conference on Artificial Intelligence and Statistics, pp. 2746–2754. PMLR, 2021. Cong Fang, Chris Junchi Li, Zhouchen Lin, and Tong Zhang. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator. Advances in Neural Information Processing Systems, 31, 2018. Saeed Ghadimi and Guanghui Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming. SIAM Journal on Optimization, 23(4):2341–2368, 2013. Eduard Gorbunov, Hugo Berard, Gauthier Gidel, and Nicolas Loizou. Stochastic extragradient: General analysis and improved rates. In International Conference on Artificial Intelligence and Statistics, pp. 7865–7901. PMLR, 2022. M Hirsch and S Vavasis. Exponential lower bounds for finding Brouwer fixed points. In Proceedings of the 28th Symposium on Foundations of Computer Science, pp. 401–410, 1987. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. On the convergence of single-call stochastic extra-gradient methods. Advances in Neural Information Processing Systems, 32, 2019. Yu-Guan Hsieh, Franck Iutzeler, Jérôme Malick, and Panayotis Mertikopoulos. Explore aggressively, update conservatively: Stochastic extragradient methods with variable stepsize scaling. arXiv preprint arXiv:2003.10162, 2020. Anatoli Juditsky, Arkadi Nemirovski, and Claire Tauvel. 
Solving variational inequalities with stochastic mirror-prox algorithm. Stochastic Systems, 1(1):17–58, 2011. Aswin Kannan and Uday V Shanbhag. Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants. Computational Optimization and Applications, 74(3):779–820, 2019. Puya Latafat and Panagiotis Patrinos. Asymmetric forward–backward–adjoint splitting for solving monotone inclusions involving three operators. Computational Optimization and Applications, 68(1):57–93, Sep 2017. Sucheol Lee and Donghwan Kim. Fast extra gradient methods for smooth structured nonconvexnonconcave minimax problems. arXiv preprint arXiv:2106.02326, 2021. Chris Junchi Li, Yaodong Yu, Nicolas Loizou, Gauthier Gidel, Yi Ma, Nicolas Le Roux, and Michael Jordan. On the convergence of stochastic extragradient for bilinear games using restarted iteration averaging. In International Conference on Artificial Intelligence and Statistics, pp. 9793–9826. PMLR, 2022. Nicolas Loizou, Hugo Berard, Alexia Jolicoeur-Martineau, Pascal Vincent, Simon Lacoste-Julien, and Ioannis Mitliagkas. Stochastic hamiltonian gradient methods for smooth games. In International Conference on Machine Learning, pp. 6370–6381. PMLR, 2020. Nicolas Loizou, Hugo Berard, Gauthier Gidel, Ioannis Mitliagkas, and Simon Lacoste-Julien. Stochastic gradient descent-ascent and consensus optimization for smooth games: Convergence analysis under expected co-coercivity. Advances in Neural Information Processing Systems, 34: 19095–19108, 2021. Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, and Yura Malitsky. Revisiting stochastic extragradient. In International Conference on Artificial Intelligence and Statistics, pp. 4573–4582. PMLR, 2020. Lam M Nguyen, Marten van Dijk, Dzung T Phan, Phuong Ha Nguyen, Tsui-Wei Weng, and Jayant R Kalagnanam. Finite-sum smooth optimization with SARAH. arXiv preprint arXiv:1901.07648, 2019. Thomas Pethick, Puya Latafat, Panagiotis Patrinos, Olivier Fercoq, and Volkan Cevher. Escaping limit cycles: Global convergence for constrained nonconvex-nonconcave minimax problems. In International Conference on Learning Representations, 2022. Ralph Tyrell Rockafellar. Convex analysis. Princeton University Press, 1970. Chaobing Song, Zhengyuan Zhou, Yichao Zhou, Yong Jiang, and Yi Ma. Optimistic dual extrapolation for coherent non-monotone variational inequalities. Advances in Neural Information Processing Systems, 33:14303–14314, 2020. P. Tseng. A modified forward-backward splitting method for maximal monotone mappings. SIAM Journal on Control and Optimization, 38(2):431–446, 2000. Junchi Yang, Negar Kiyavash, and Niao He. Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems. arXiv preprint arXiv:2002.09621, 2020. Appendix Table of Contents A Prelude 14 B Preliminaries 14 C Proof for SEG+ 15 Proof of Theorem 5.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 D Proof for smooth unconstrained case 16 Proof of Theorem D.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 Proof of Theorem 6.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 Proof of Theorem D.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 Proof of Theorem 6.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 E Proof for constrained case 21 Proof of Theorem E.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 
. . . . . . . . . 21 Proof of Theorem 7.1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 F Proof for NP-PDEG through a nonlinear asymmetric preconditioner 23 F.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 F.2 Deterministic lemmas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 F.3 Stochastic results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25 Proof of Theorem F.5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26 Proof of Theorem 8.2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 F.4 Explanation of bias-correction term . . . . . . . . . . . . . . . . . . . . . . . . . 30 G Negative weak Minty variational inequality 31 H Experiments 32 H.1 Synthetic example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32 H.2 Additional algorithmic details . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 I Comparison with variance reduction 34 A Prelude For the unconstrained and smooth setting Appendix C treats convergences of (SEG+) for the restricted case where F is linear. Appendix D shows both random iterate results and almost sure convergence of Algorithm 1. Theorems 6.1 and 6.3 in the main body are implied by the more general results in this section, which preserves certain free parameters and more general stepsize requirements. Appendices E and F moves beyond the unconstrained and smooth case by showing convergence for instances of the template scheme (8.1). The analysis of Algorithm 3 in Appendix F applies to Algorithm 2, but for completeness we establish convergence for general F separately in Appendix E. The relationship between the theorems are presented in Table 1. B Preliminaries Given a psd matrix V we define the inner product as ⟨·, ·⟩V B ⟨·,V ·⟩ and the corresponding norm ∥ · ∥ B √ ⟨·, ·⟩V . The distance from u ∈ n to a setU ⊆ n with respect to a positive definite matrix V is defined as distV (u,U) B minu′∈U ∥u − u′∥V , which we simply denote dist(u,U) when V = I. The norm ∥X∥ refers to spectral norm when X is a matrix. We summarize essential definitions from operator theory, but otherwise refer to Bauschke & Combettes (2017); Rockafellar (1970) for further details. An operator A : n ⇒ d maps each point x ∈ n to a subset Ax ⊆ d, where the notation A(x) and Ax will be used interchangably. We denote the domain of A by dom A B {x ∈ n | Ax , ∅}, its graph by gph A B {(x, y) ∈ n × d | y ∈ Ax}. The inverse of A is defined through its graph, gph A−1 B {(y, x) | (x, y) ∈ gph A} and the set of its zeros by zer A B {x ∈ n | 0 ∈ Ax}. Definition B.1 ((co)monotonicity Bauschke et al. (2021)). An operator A : n ⇒ n is said to be ρ-monotone for some ρ ∈ , if for all (x, y), (x′, y′) ∈ gph A ⟨y − y′, x − x′⟩ ≥ ρ∥x − x′∥2, and it is said to be ρ-comonotone if for all (x, y), (x′, y′) ∈ gph A ⟨y − y′, x − x′⟩ ≥ ρ∥y − y′∥2. The operator A is said to be maximally (co)monotone if there exists no other (co)monotone operator B for which gph A ⊂ gph B properly. If A is 0-monotone we simply say it is monotone. When ρ < 0, ρ-comonotonicity is also referred to as |ρ|-cohypomonotonicity. Definition B.2 (Lipschitz continuity and cocoercivity). Let D ⊆ n be a nonempty subset of n. A single-valued operator A : D → n is said to be L-Lipschitz continuous if for any x, x′ ∈ D ∥Ax − Ax′∥ ≤ L∥x − x′∥, and β-cocoercive if ⟨x − x′, Ax − Ax′⟩ ≥ β∥Ax − Ax′∥2. 
Moreover, A is said to be nonexpansive if it is 1-Lipschitz continuous, and firmly nonexpansive if it is 1-cocoercive. A β-cocoercive operator is also β−1-Lipschitz continuity by direct implication of Cauchy-Schwarz. The resolvent operator JA = (id + A)−1 is firmly nonexpansive (with dom JA = n) if and only if A is (maximally) monotone. We will make heavy use of the Fenchel-Young inequality. For all a, b ∈ n and e > 0 we have, 2⟨a, b⟩ ≤ e∥a∥2 + 1e ∥b∥ 2 (B.1) ∥a + b∥2 ≤ (1 + e)∥a∥2 + (1 + 1e )∥b∥ 2 (B.2) −∥a − b∥2 ≤ − 11+e ∥a∥ 2 + 1e ∥b∥ 2 (B.3) C Proof for SEG+ Proof of Theorem 5.1. Following (Hsieh et al., 2020) closely, define the reference state ˜̄zk := zk − γFzk to be the exploration step using the deterministic operator and denote the second stepsize as ηk := αkγ. We will let ζ denote the additive noise term, i.e. F̂(z, ξ) := F(z) + ζ. Expanding the distance to solution, ∥zk+1 − z⋆∥2 = ∥zk − ηkF̂(z̄k, ξ̄k) − z⋆∥2 = ∥zk − z⋆∥2 − 2ηk⟨F̂(z̄k, ξ̄k), zk − z⋆⟩ + η2k∥F̂(z̄k, ξ̄k)∥2 = ∥zk − z⋆∥2 − 2ηk⟨F̂(z̄k, ξ̄k), ˜̄zk − z⋆⟩ − 2γηk⟨F̂(z̄k, ξ̄k), F(zk)⟩ + η2k∥F̂(z̄k, ξ̄k)∥2. (C.1) Recall that the operator is assumed to be linear Fz = Bz + v in which case we have, F̂(z̄k, ξ̄k) = Bz̄k + v + ζ̄k =B(zk − γF̂(zk, ξk)) + v + ζ̄k =B(zk − γBzk − γv − γζk) + v + ζ̄k =B(zk − γ(Bzk + v)) + v − γBζk + ζ̄k =F(˜̄zk) − γBζk + ζ̄k. (C.2) The two latter terms are zero in expectation due to the unbiasedness from Assumption II(ii), which lets us write the terms on the RHS of (C.1) as, −Ek⟨F̂(z̄k, ξ̄k), ˜̄zk − z⋆⟩ = −⟨F(˜̄zk), ˜̄zk − z⋆⟩ (C.3) −Ek⟨F̂(z̄k, ξ̄k), F(zk)⟩ = −⟨F(˜̄zk), F(zk)⟩ (C.4) Ek∥F̂(z̄k, ξ̄k)∥2 = ∥F(˜̄zk)∥2 + Ek∥γBζk∥2 + Ek∥ζ̄k∥2. (C.5) We can bound (C.3) directly through the weak MVI in Assumption I(iii) which might still be positive, −⟨F(˜̄zk), ˜̄zk − z⋆⟩ ≤ −ρ∥F(˜̄zk)∥2. (C.6) For the latter two terms of (C.5) we have Ek∥γBζk∥2 + Ek∥ζ̄k∥2 = γ2Ek∥F(ζk) − F(0)∥2 + Ek∥ζ̄k∥2 ≤ (γ2L2F + 1)σ2F , (C.7) where the last inequality follows from Lipschitz in Assumption I(i) and bounded variance in Assumption II(iii). Combining everything into (C.1) we are left with Ek∥zk+1 − z⋆∥2 ≤ ∥zk − z⋆∥2 + η2k(γ2L2F + 1)σ2F − 2γηk⟨F(˜̄zk), F(zk)⟩ + (η2k − 2ηkρ)∥F(˜̄zk)∥2 (C.8) By assuming the stepsize condition, ρ ≥ (ηk − γ)/2, we have η2k − 2ηkρ ≤ γηk. This allows us to complete the square, −2γηk⟨F(˜̄zk), F(zk)⟩ + (η2k − 2ηkρ)∥F(˜̄zk)∥2 ≤ −2γηk⟨F(˜̄zk), F(zk)⟩ + γηk∥F(˜̄zk)∥2 = γηk(∥F(zk) − F(˜̄zk)∥2 − ∥F(zk)∥2) ≤ γηk(γ2L2F − 1)∥F(zk)∥2, (C.9) where the last inequality follows from Lipschitzness of F and the definition of the update rule. Plugging into (C.8) we are left with Ek∥zk+1 − z⋆∥2 ≤ ∥zk − z⋆∥2 + η2k(γ2L2F + 1)σ2F − γηk(1 − γ2L2F)∥F(zk)∥2. (C.10) The result is obtained by total expectation and summing. D Proof for smooth unconstrained case Lemma D.1. Consider the recurrent relation Bk+1 = ξkBk + dk such that ξk > 0 for all k ≥ 0. Then Bk+1 = ( Πkp=0ξp )B0 + k∑ ℓ=0 dℓ Πℓp=0ξp . Assumption V. γ ∈ (⌊−2ρ⌋+, 1/LF) and for positive real valued b, µ B γ2(1 − γ2L2F(1 + b−1)) > 0. (D.1) Theorem D.2. Suppose that Assumptions I to III hold. Suppose in addition that Assumption V holds and that (αk)k∈ ⊂ (0, 1) is a diminishing sequence such that 2γLF̂ √ α0 + ( 1 + ( (b + 1)γ2L2F ) γ2L2 F̂ ) α0 ≤ 1 + 2ργ . (D.2) Consider the sequence (zk)k∈ generated by Algorithm 1. Then, the following estimate holds K∑ k=0 αk∑K j=0 α j E[∥F(zk)∥2] ≤ ∥z0 − z⋆∥2 + ηγ2∥F(z0)∥2 +Cσ2Fγ2 ∑K j=0 α 2 j µ ∑K j=0 α j , (D.3) where C = 1 + 2η ( (γ2L2 F̂ + 1) + 2α0 ) and η = 12 (b + 1)γ 2L2F + 1 γLF̂ √ α0 . Proof of Theorem D.2. 
The proof relies on establishing a (stochastic) descent property on the following potential function Uk+1 B ∥zk+1 − z⋆∥2 + Ak+1∥uk∥2 + Bk+1∥zk+1 − zk∥2. where uk B z̄k− zk+γF(zk) measures the difference of the bias-corrected step from the deterministic exploration step, and (Ak)k∈ , (Bk)k∈ are positive scalar parameters to be identified. We proceed to consider each term individually. Let us begin by quantifying how well z̄k estimates zk − γF(zk). uk = z̄k − zk + γF(zk) = γF(zk) − γF̂(zk, ξk) + (1 − αk)(z̄k−1 − zk−1 + γF̂(zk−1, ξk)). Therefore, ∥uk∥2 = ∥γF(zk) − γF̂(zk, ξk) + (1 − αk)(γF̂(zk−1, ξk) − γF(zk−1))∥2 + (1 − αk)2∥uk−1∥2 + 2(1 − αk)⟨z̄k−1 − zk−1 + γF(zk−1), γF(zk) − γF̂(zk, ξk) + (1 − αk)(γF̂(zk−1, ξk) − γF(zk−1))⟩. Conditioned on Fk, in the inner product the left term is known and the right term has an expectation that equals zero. Therefore, we obtain E[∥uk∥2 |Fk]=E[∥(1−αk) ( γF(zk)−γF̂(zk,ξk)+γF̂(zk−1,ξk)−γF(zk−1) ) +αk ( γF(zk)−γF̂(zk,ξk) ) ∥2 |Fk] +(1−αk)2∥uk−1∥2 ≤(1−αk)2∥uk−1∥2+2(1−αk)2γ2E[∥F̂(zk,ξk)−F̂(zk−1,ξk)∥2 |Fk] +2α2kγ 2E[∥F(zk)−F̂(zk,ξk)∥2 |Fk] ≤(1−αk)2∥uk−1∥2+2(1−αk)2γ2L2F̂∥z k−zk−1∥2+2α2kγ2σ2F (D.4) where in the first inequality we used Young inequality and the fact that the second moment is larger than the variance, and Assumptions II(iii) and III were used in the second inequality. By step 1.4, the equality ∥zk+1 − z⋆∥2 = ∥zk − z⋆∥2 − 2αkγ⟨F̂(z̄k, ξ̄k), zk − z⋆⟩ + α2kγ2∥F̂(z̄k, ξ̄k)∥2, (D.5) holds. The inner product in (D.5) can be upper bounded using Young inequalities with positive parameters εk, k ≥ 0, and b as follows. E[⟨−γF̂(z̄k, ξ̄k), zk − z⋆⟩ | F̄k] = − γ⟨F(z̄k), zk − z̄k⟩ − γ⟨F(z̄k), z̄k − z⋆⟩ = − γ2⟨F(z̄k), F(zk)⟩ + γ⟨F(z̄k), z̄k − zk + γF(zk)⟩ − γ⟨F(z̄k), z̄k − z⋆⟩ ≤ γ2 (1 2 ∥F(z̄k) − F(zk)∥2 − 1 2 ∥F(z̄k)∥2 − 1 2 ∥F(zk)∥2 ) + γ2εk 2 ∥F(z̄k)∥2 + 1 2εk ∥z̄k − zk + γF(zk)∥2 − γρ∥F(z̄k)∥2 ≤ γ2L2F 1 + b 2 ∥uk∥2 + 1 + b −1 2 γ4L2F∥F(zk)∥2 − γ2 2 ∥F(z̄k)∥2 − γ 2 2 ∥F(zk)∥2 + γ 2εk 2 ∥F(z̄k)∥2 + 1 2εk ∥uk∥2 − γρ∥F(z̄k)∥2 = ( γ2L2F 1 + b 2 + 1 2εk )∥uk∥2 + γ2(γ2L2F(1 + b−1) − 1) 2 ∥F(zk)∥2 + (γ2(εk − 1) 2 − γρ)∥F(z̄k)∥2. (D.6) Conditioning (D.6) with E [· | Fk] = E[E[· | F̄k] | Fk], since Fk ⊂ F̄k, yields 2E[⟨−γF̂(z̄k, ξ̄k), zk − z⋆⟩ | Fk] ≤ ( γ2L2F(1 + b) + 1 εk ) E[∥uk∥2 | Fk] − µ∥F(zk)∥2 + ( γ2(εk − 1) − 2γρ ) E [ ∥F(z̄k)∥2 | Fk ] , (D.7) where µ was defined in (D.1). The condition expectation of the third term in (D.5) is bounded through Assumption II(iii) by E [ ∥F̂(z̄k, ξ̄k)∥2 | Fk ] = E [ E[∥F̂(z̄k, ξ̄k)∥2 | F̄k] | Fk ] ≤ ∥F(z̄k)∥2 + σ2F , which in turn implies E [ ∥zk+1 − zk∥2 | Fk ] = α2kγ 2E [ ∥F̂(z̄k, ξ̄k)∥2 | Fk ] ≤ α2kγ2E [ ∥Fz̄k∥2 | Fk ] + α2kγ 2σ2F (D.8) Combining (D.7), (D.8), and (D.5) yields E[∥zk+1 − z⋆∥2 + Ak+1∥uk∥2 + Bk+1∥zk+1 − zk∥2 | Fk] ≤ ∥zk − z⋆∥2 + ( Ak+1 + αk ( γ2L2F(1 + b) + 1 εk )) E[∥uk∥2 | Fk] − αkµ∥F(zk)∥2 + ( αk ( γ2(εk − 1) − 2γρ ) + α2kγ 2 ) E [ ∥F(z̄k)∥2 | Fk ] + α2kγ 2σ2F + Bk+1α2kγ 2E [ ∥Fz̄k∥2 | Fk ] + Bk+1α2kγ 2σ2F . (D.9) Further using (D.4) and denoting Xk1 B αk ( γ2L2F(1 + b) + 1 εk ) + Ak+1, Xk2 B αk ( γ2(εk − 1) − 2ργ + αk γ2 ) leads to E[Uk+1 | Fk] −Uk ≤ − αkµ∥F(zk)∥2 + ( Xk1(1 − αk)2 − Ak ) ∥uk−1∥2 + ( 2Xk1(1 − αk)2γ2L2F̂ − Bk ) ∥zk − zk−1∥2 + ( Xk2 + Bk+1α 2 kγ 2 ) E [ ∥F(z̄k)∥2 | Fk ] + ( Bk+1α2k + α 2 k + 2X k 1α 2 k ) γ2σ2F . 
(D.10) Having established (D.10), set Ak = A, Bk = 2Aγ2L2F̂ , and εk = ε to obtain by the law of total expectation that E[Uk+1] − E[Uk] ≤ − αkµE [ ∥F(zk)∥2 ] + ( Xk1(1 − αk)2 − A ) E [ ∥uk−1∥2 ] + 2γ2L2 F̂ ( Xk1(1 − αk)2 − A ) E [ ∥zk − zk−1∥2 ] + ( Xk2 + 2Aγ 4L2 F̂ α2k ) E [ ∥F(z̄k)∥2 ] + ( 2Aγ2L2 F̂ + 1 + 2Xk1 ) α2kγ 2σ2F . (D.11) To get a recursion we require Xk1(1 − αk)2 − A ≤ 0 and Xk2 + 2Aγ4L2F̂α 2 k ≤ 0. (D.12) By developing the first requirement of (D.12) we have, 0 ≥ Xk1(1 − αk)2 − A = αk(1 − αk)2 ( γ2L2F(1 + b) + 1 ε ) + αk(αk − 2)A. (D.13) Equivalently, A needs to satisfy A ≥ (1 − αk) 2 2 − αk ( γ2L2F(1 + b) + 1 ε ) . (D.14) for any αk ∈ (0, 1). Since (1−αk) 2 2−αk ≤ 1 2 given αk ∈ (0, 1) it suffice to pick A = 12 ( (b + 1)γ2L2F + 1 ε ) . (D.15) For the second requirement of (D.12) note that we can equivalently require that the following quantity is negative 1 αkγ2 ( Xk2 + 2Aγ 4L2 F̂ α2k ) = ε − 1 − 2ρ γ + αk + 2Aγ2L2F̂αk ≤ ε − 1 − 2ρ γ + ( 1 + ( (b + 1)γ2L2F + 1 ε ) γ2L2 F̂ ) α0 where we have used that αk ≤ α0 and the choice of A from (D.15). Setting the Young parameter ε = γLF̂ √ α0 we obtain that Xk2 + 2Aγ 4L2 F̂ α2k ≤ 0 owing to (D.2). On the other hand, the last term in (D.11) may be upper bounded by 2Aγ2L2 F̂ + 1 + 2Xk1 = 1 + ( (b + 1)γ2L2F + 1 γLF̂ √ α0 )( (γ2L2 F̂ + 1) + 2αk ) ≤ 1 + ( (b + 1)γ2L2F + 1 γLF̂ √ α0 )( (γ2L2 F̂ + 1) + 2α0 ) = C. Thus, it follows from (D.11) that E[Uk+1] − E[Uk] ≤ − αkµE [ ∥F(zk)∥2 ] +Cα2kγ 2σ2F . Telescoping the above inequality completes the proof. Proof of Theorem 6.1. The theorem is obtained as a particular instantiation of Theorem D.2. The condition in (D.1) can be rewritten as b > γ 2L2F 1−γ2L2F . A reasonable choice is b = 2γ 2L2F 1−γ2L2F . Substituting back into µ we obtain µ = γ2(1 − γ2L2F(1 + 1−γ2L2F 2γ2L2F )) = γ 2(1−γ2L2F ) 2 > 0. (D.16) Similarly, the choice of b is substituted into η and (D.2) of Theorem D.2. The rate in (D.2) is further simplified by applying Lipschitz continuity of F from Assumption I(i) to ∥Fz0∥2 = ∥Fz0 − Fz⋆∥2. The proof is complete by observing that the guarantee on the weighted sum can be converted into an expectation over a sampled iterate in the style of Ghadimi & Lan (2013). Assumption VI (almost sure convergence). Let d ∈ [0, 1], b > 0. Suppose that the following holds (i) the diminishing sequence (αk
1. What is the focus and contribution of the paper regarding the weak Minty variational inequality? 2. What are the strengths of the proposed algorithm, particularly in its design and modification of stochastic extra-gradient? 3. Do you have any concerns or questions about the lower bound mentioned in the paper, the convergence measurement in the theorem, and the experiment in Figure 1? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper provides a stochastic algorithm for a class of problems characterized by the weak Minty variational inequality. The algorithm modifies stochastic extra-gradient by adding a bias-correction term in the exploration step.

Strengths And Weaknesses
Strength: (1) The design of the algorithm is interesting, as it uses a diminishing stepsize in only one step of the EG update and introduces a novel correction term.

I have the following questions:
(1) The paper mentions the lower bound ρ > −γ/2 from [Pethick et al., 2022] several times, and it serves as an intuition for the algorithm on page 4. However, based on my understanding, in the example given in [Pethick et al., 2022] the stepsize γ is fixed to be 1/L. I am not sure whether that still holds for other stepsizes or time-varying stepsizes.
(2) In Theorem 7, the convergence measurement ∥Hz̄k⋆ − Hzk⋆∥ seems to only consider the operator F and ignore A, by the definition of H. Why is it a good measurement here? Also, Algorithm 2 returns zk+1, but I do not see how it is guaranteed to satisfy the constraint if the operator A corresponds to a constraint.
(3) The experiment in Figure 1 is not representative. It is a bilinear game, so it can be easily solved by stochastic EG.

Clarity, Quality, Novelty And Reproducibility
Novelty: the algorithm is original. Clarity: the paper is not hard to follow, but can be improved.
ICLR
Title Variational Dynamic Mixtures Abstract Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Modeaveraging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains. 1 INTRODUCTION Making sense of time series data is an important challenge in various domains, including ML for climate change. One important milestone to reach the climate goals is to significantly reduce the CO2 emissions from mobility (Rogelj et al., 2016). Accurate forecasting models of typical driving behavior and of typical pollution levels over time can help both lawmakers and automotive engineers to develop solutions for cleaner mobility. In these applications, no accurate physical model of the entire dynamic system is known or available. Instead, data-driven models, specifically deep probabilistic time series models, can be used to solve the necessary tasks including forecasting. The dynamics in such data can be highly multi-modal. At any given part of the observed sequence, there might be multiple distinct continuations of the data that are plausible, but the average of these behaviors is unlikely, or even physically impossible. Consider for example a dataset of taxi trajectories1. In each row of Fig. 1a, we have selected 50 routes from the dataset with similar starting behavior (blue). Even though these routes are quite similar to each other in the first 10 way points, the continuations of the trajectories (red) can exhibit quite distinct behaviors and lead to points on any far edge of the map. The trajectories follow a few main traffic arteries, these could be considered the main modes of the data distribution. We would like to learn a generative model of the data, that based on some initial way points, can forecast plausible continuations for the trajectories. Many existing methods make restricting modeling assumptions such as Gaussianity to make learning tractable and efficient. But trying to capture the dynamics through unimodal distributions can lead either to “over-generalization”, (i.e. putting probability mass in spurious regions) or on focusing only on the dominant mode and thereby neglecting important structure of the data. Even neural approaches, with very flexible generative models can fail to fully capture this multi-modality because their capacity is often limited through the assumptions of their inference model. To address this, we develop variational dynamic mixtures (VDM). Its generative process is a sequential latent variable model. The main novelty is a new multi-modal variational family which makes learning and inference multi-modal yet tractable. In summary, our contributions are • A new inference model. 
We establish a new type of variational family for variational inference of sequential latent variables. By successively marginalizing over previous latent states, the procedure can be efficiently carried-out in a single forward pass and induces a multi-modal posterior 1https://www.kaggle.com/crailtap/taxi-trajectory approximation. We can see in Fig. 1b, that VDM trained on a dataset of taxi trajectories produces forecasts with the desired multi-modality while other methods overgeneralize. • An evaluation metric for multi-modal tasks. The negative log-likelihood measures predictive accuracy but neglects an important aspect of multi-modal forecasts – sample diversity. In Section 4, we derive a score based on the Wasserstein distance (Villani, 2008) which evaluates both sample quality and diversity. This metric complements our evaluation based on log-likelihoods. • An extensive empirical study. in Section 4, we use VDM to study various datasets, including a synthetic data with four modes, a stochastic Lorenz attractor, the taxi trajectories, and a U.S. pollution dataset with the measurements of various pollutants over time. We illustrate VDM’s ability in modeling multi-modal dynamics, and provide quantitative comparisons to other methods showing that VDM compares favorably to previous work. 2 RELATED WORK Neural recurrent models. Recurrent neural networks (RNNs) such as LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Chung et al., 2014) have proven successful on many time series modeling tasks. However, as deterministic models they cannot capture uncertainties in their dynamic predictions. Stochastic RNNs make these sequence models non-deterministic (Chung et al., 2015; Fraccaro et al., 2016; Gemici et al., 2017; Li & Mandt, 2018). For example, the variational recurrent neural network (VRNN) (Chung et al., 2015) enables multiple stochastic forecasts due to its stochastic transition dynamics. An extension of VRNN (Goyal et al., 2017) uses an auxiliary cost to alleviate the KL-vanishing problem. It improves on VRNN inference by forcing the latent variables to also be predictive of future observations. Another line of related methods rely on particle filtering (Naesseth et al., 2018; Le et al., 2018; Hirt & Dellaportas, 2019) and in particular sequential Monte Carlo (SMC) to improve the evidence lower bound. In contrast, VDM adopts an explicitly multi-modal posterior approximation. Another SMC-based work (Saeedi et al., 2017) employs search-based techniques for multi-modality but is limited to models with finite discrete states. Recent works (Schmidt & Hofmann, 2018; Schmidt et al., 2019; Ziegler & Rush, 2019) use normalizing flows in the latent space to model the transition dynamics. A normalizing flow requires many layers to transform its base distribution into a truly multi-modal distribution in practice. In contrast, mixture density networks (as used by VDM) achieve multi-modality by mixing only one layer of neural networks. A task orthogonal to multi-modal inference is learning disentangled representations. Here too, mixture models are used (Chen et al., 2016; Li et al., 2017). These papers use discrete variables and a mutual information based term to disentangle different aspects of the data. VAE-like models (Bhattacharyya et al., 2018; 2019) and GAN-like models (Sadeghian et al., 2019; Kosaraju et al., 2019) only have global, time independent latent variables. Yet, they show good results on various tasks, including forecasting. 
With a deterministic decoder, these models focus on average dynamics and don’t capture local details (including multi-modal transitions) very well. Sequential latent variable models are described next. Deep state-space models. Classical State-space models (SSMs) are popular due to their tractable inference and interpretable predictions. Similarly, deep SSMs with locally linear transition dynamics enjoy tractable inference (Karl et al., 2017; Fraccaro et al., 2017; Rangapuram et al., 2018; Becker et al., 2019). However, these models are often not expressive enough to capture complex (or highly multi-modal) dynamics. Nonlinear deep SSMs (Krishnan et al., 2017; Zheng et al., 2017; Doerr et al., 2018; De Brouwer et al., 2019; Gedon et al., 2020) are more flexible. Their inference is often no longer tractable and requires variational approximations. Unfortunately, in order for the inference model to be tractable, the variational approximations are often simplistic and don’t approximate multi-modal posteriors well with negative effects on the trained models. Multi-modality can be incorporated via additional discrete switching latent variables, such as recurrent switching linear dynamical systems (Linderman et al., 2017; Nassar et al., 2018; Becker-Ehmck et al., 2019). However, these discrete states make inference more involved. 3 VARIATIONAL DYNAMIC MIXTURES We develop VDM, a new sequential latent variable model for multi-modal dynamics. Given sequential observations x1:T = (x1, . . . ,xT ), VDM assumes that the underlying dynamics are governed by latent states z1:T = (z1, . . . , zT ). We first present the generative process and the multi-modal inference model of VDM. We then derive a new variational objective that encourages multi-modal posterior approximations and we explain how it is regularized via hybrid-training. Finally, we introduce a new sampling method used in the inference procedure. Generative model. The generative process consists of a transition model and an emission model. The transition model p(zt | z<t) describes the temporal evolution of the latent states and the emission model p(xt | z≤t) maps the states to observations. We assume they are parameterized by two separate neural networks, the transition network φtra and the emission network φdec.To give the model the capacity to capture longer range temporal correlations we parametrize the transition model with a recurrent architecture φGRU (Auger-Méthé et al., 2016; Zheng et al., 2017) such as a GRU (Chung et al., 2014). The latent states zt are sampled recursively from zt | z<t ∼ N (µ0,t, σ20,tI), where [µ0,t, σ20,t] = φtra(ht−1), ht−1 = φGRU(zt−1,ht−2), (1) and are then decoded such that the observations can be sampled from the emission model, xt | z≤t ∼ N (µx,t, σ2x,tI), where [µx,t, σ2x,t] = φdec(zt,ht−1). (2) This generative process is similar to (Chung et al., 2015), though we did not incorporate autoregressive feedback due to its negative impact on long-term generation (Ranzato et al., 2016; Lamb et al., 2016). The competitive advantage of VDM comes from a more expressive inference model. Inference model. VDM is based on a new procedure for multi-modal inference. The main idea is that to approximate the posterior at time t, we can use the posterior approximation of the previous time step and exploit the generative model’s transition model φGRU. This leads to a sequential inference procedure. We first use the forward model to transform the approximate posterior at time t − 1 into a distribution at time t. 
In a second step, we use samples from the resulting transformed distribution and combine each sample with data evidence xt, where every sample parameterizes a Gaussian mixture component. As a result, we obtain a multi-modal posterior distribution that depends on data evidence, but also on the previous time step’s posterior. In more detail, for every zt, we define its corresponding recurrent state as the transformed random variable st = φGRU(zt,ht−1), using a deterministic hidden state ht−1 = E [st−1]. The variational family of VDM is defined as follows: q(z1:T | x1:T ) = T∏ t=1 q(zt | x≤t) = T∏ t=1 ∫ q(zt | st−1,xt)q(st−1 | x≤t)dst−1. (3) Chung et al. (2015) also use a sequential inference procedure, but without considering the distribution of st. Only a single sample is propagated through the recurrent network and all other information about the distribution of previous latent states z<t is lost. In contrast, VDM explicitly maintains st as part of the inference model. Through marginalization, the entire distribution is taken into account for inferring the next state zt. Beyond the factorization assumption and the marginal consistency constraint of Eq. (3), the variational family of VDM needs two more choices to be fully specified; First, one has to choose the parametrizations of q(zt | st−1,xt) and q(st−1 | x≤t) and second, one has to choose a sampling method to approximate the marginalization in Eq. (3). These choices determine the resulting factors q(zt | x≤t) of the variational family. We assume that the variational distribution of the recurrent state factorizes as q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t), i.e. it is the distribution of the recurrent state given the past observation2, re-weighted by a weighting function ω(st−1,xt) which involves only the current observations. For VDM, we only need samples from q̃(st−1 | x<t), which are obtained by sampling from the previous posterior approximation q(zt−1 | x<t) and transforming the sample with the RNN, s (i) t−1 ∼ q̃(st−1 | x<t) equiv. to s (i) t−1 = φ GRU(z (i) t−1,ht−2), z (i) t−1 ∼ q(zt−1 | x<t), (4) where i indexes the samples. The RNN φGRU has the same parameters as in the generative model. Augmenting the variational model with the recurrent state has another advantage; approximating the marginalization in Eq. (3) with k samples from q(st−1 | x≤t) and choosing a Gaussian parametrization for q(zt | st−1,xt) results in a q-distribution q(zt | x≤t) that resembles a mixture density network (Bishop, 2006), which is a convenient choice to model multi-modal distributions. q(zt | x≤t) = k∑ i ω (i) t N (µ (i) z,t, σ (i)2 z,t I), [µ (i) z,t, σ (i)2 z,t ] = φ inf (s (i) t−1,xt). (5) We assume q(zt | st−1,xt) to be Gaussian and use an inference network φinf to model the effect of the observation xt and recurrent state st−1 on the mean and variance of the mixture components. The mixture weights ω(i)t := ω(s (i) t−1,xt)/k come from the variational distribution q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t) and importance sampling3. We are free to choose how to parametrize the weights, as long as all variational distributions are properly normalized. Setting ω (i) t = ω(s (i) t−1,xt)/k := 1(i = argmax j p(xt | ht−1 = s(j)t−1)), (6) achieves this. In Appendix A, we explain this choice with importance sampling and in Appendix H, we compare the performance of VDM under alternative variational choices for the weights. In the next time-step, plugging the variational distribution q(zt | x≤t) into Eq. 
(4) yields the next distribution over recurrent states q̃(st | x≤t). For this, the expected recurrent state ht−1 is required. 2q̃(st−1 | x<t) is the distribution obtained by transforming the previous zt−1 ∼ q(zt−1|x<t) through the RNN. It can be expressed analytically using the Kronecker δ to compare whether the stochastic variable st−1 equals the output of the RNN: q̃(st−1 | x<t) ∝ ∫ δ(st−1 − φGRU(zt−1,ht−2))q(zt−1 | xt−1, λt−1)dzt−1. 3the ω adjusts for using samples from q̃(st−1 | x<t) when marginalizing over ω(st−1,xt)q̃(st−1 | x<t) We approximate the update using the same k samples (and therefore the same weights) as in Eq. (5). ht−1 = E[st−1] = ∫ st−1 q(st−1 | x≤t)dst−1 ≈ k∑ i ω (i) t s (i) t−1. (7) A schematic view of the generative and inference model of VDM is shown in Fig. 2. In summary, the inference model of VDM alternates between Eqs. (4) to (7). Latent states are sampled from the posterior approximation of the previous time-step and transformed by Eq. (4) into samples of the recurrent state of the RNN. These are then combined with the new observation xt to produce the next variational posterior Eq. (5) and the expected recurrent state is updated (Eq. (7)). These are then used in Eq. (4) again. Approximating the marginalization in Eq. (3) with a single sample, recovers the inference model of VRNN (Chung et al., 2015), and fails in modeling multi-modal dynamics as shown in Fig. 3. In comparison, VDM’s approximate marginalization over the recurrent states with multiple samples succeeds in modeling multi-modal dynamics. Variational objective. We develop an objective to optimize the variational parameters of VDM φ = [φtra, φdec, φGRU, φinf ]. The evidence lower bound (ELBO) at each time step is LELBO(x≤t, φ) := 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(xt | zt,ht−1 = s(i)t−1) ] + 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(zt | ht−1 = s(i)t−1) q(zt | s(i)t−1,xt) ] − 1 k k∑ i ω(s (i) t−1,xt) [ logω(s (i) t−1,xt) +C ] (8) Claim 1. The ELBO in Eq. (8) is a lower bound on the log evidence log p(xt | x<t), log p(xt | x<t) ≥ LELBO(x≤t, φ), (see proof in Appendix B) . (9) In addition to the ELBO, the objective of VDM has two regularization terms, LVDM(φ) = T∑ t=1 Epdata [−LELBO(x≤t, φ)− ω1Lpred(x≤t, φ)] + ω2Ladv(x≤t, φ) . (10) In an ablation study in Appendix E, we compare the effect of including and excluding the regularization terms in the objective. VDM is competitive without these terms, but we got the strongest results by setting ω1,2 = 1 (this is the only nonzero value we tried. This hyperparameter could be tuned even further.) The first regularization term Lpred, encourages the variational posterior (from the previous time step) to produce samples that maximize the predictive likelihood, Lpred(x≤t, φ) = logEq(st−1|x<t) [p(xt | st−1,x<t)] ≈ log 1 k k∑ i p(xt | s(i)t−1) . (11) This regularization term is helpful to improve the prediction performance, since it depends on the predictive likelihood of samples, which isn’t involved in the ELBO. The second optional regularization term Ladv (Eq. (12)) is based on ideas from hybrid adversarial-likelihood training (Grover et al., 2018; Lucas et al., 2019). These training strategies have been developed for generative models of images to generate sharper samples while avoiding “mode collapse”. We adapt these ideas to generative models of dynamics. The adversarial term Ladv uses a forward KL-divergence, which enables “quality-driven training” to discourage probability mass in spurious areas. 
Ladv(x≤t, φ) = DKL(p(xt | x<t)‖pD(xt | x<t)) = E [log p(xt | x<t)− log pD(xt | x<t)] (12) The expectation is taken w.r.t. p(xt | x<t). The true predictive distribution pD(xt | x<t) is unknown. Instead, we can train the generator of a conditional GAN (Mirza & Osindero, 2014), while assuming an optimal discriminator. As a result, we optimize Eq. (12) in an adversarial manner, conditioning on x<t at each time step. Details about the discriminator are in Appendix G. Stochastic cubature approximation (SCA). The variational family of VDM is defined by a number of modeling choices, including the factorization and marginal consistency assumptions of Eq. (3), the parametrization of the transition and inference networks Eqs. (4) and (5), and the choice of weighting function ω(·). It is also sensitive to the choice of sampling method which we discuss here. In principle, we could use Monte-Carlo methods. However, for a relatively small number of samples k, Monte-Carlo methods don’t have a mechanism to control the quality of samples. We instead develop a semi-stochastic approach based on the cubature approximation (Wan & Van Der Merwe, 2000; Wu et al., 2006; Arasaratnam & Haykin, 2009), which chooses samples more carefully. The cubature approximation proceeds by constructing k = 2d+1 so-called sigma points, which are optimally spread out on the d-dimensional Gaussian with the same mean and covariance as the distribution we need samples from. In SCA, the deterministic sigma points are infused with Gaussian noise to obtain stochastic sigma variables. A detailed derivation of SCA is in Appendix D. We use SCA for various reasons: First, it typically requires fewer samples than Monte-Carlo methods because the sigma points are carefully chosen to capture the first two moments of the underlying distribution. Second, it ensures a persistence of the mixture components; when we resample, we sample another nearby point from the mixture component and not an entirely new location. 4 EVALUATION AND EXPERIMENTS In this empirical study, we evaluate VDM’s ability to model multi-modal dynamics and show its competitive forecasting performance in various domains. We first introduce the evaluation metrics and baselines. Experiments on synthetic data demonstrate that VDM is truly multi-modal thereby supporting the modeling choices of Section 3, especially for the inference model. Then, experiments on real-world datasets with challenging multi-modal dynamics show the benefit of VDM over stateof-the art (deep) probabilistic time-series models. Evaluation metrics. In the experiments, we always create a training set, a validation set, and a test set. During validation and test, each trajectory is split into two parts; initial observations (given to the models for inference) and continuations of the trajectories (to be predicted and not accessible to the models). The inference models are used to process the initial observations and to infer latent states. These are then processed by the generative models to produce forecasts. We use 3 criteria to evaluate these forecasts (i) multi-steps ahead prediction p(xt+1:t+τ | x1:t), (ii) one-step-ahead prediction p(xt+1 | x1:t), and (iii) empirical Wasserstein distance. As in other work (Lee et al., 2017; Bhattacharyya et al., 2018; 2019), (i) and (ii) are reported in terms of negative log-likelihood. While the predictive distribution for one-step-ahead prediction is in closed-form, the long-term forecasts have to be computed using samples. 
For each ground truth trajectory x we generate n = 1000 forecasts x̂i given initial observations from the beginning of the trajectory NLL = − log ( 1 n n∑ i 1√ 2π exp ( − (x̂i − x) 2 2 )) , (13) This evaluates the predictive accuracy but neglects a key aspect of multi-modal forecasts – diversity. We propose a new evaluation metric, which takes both diversity and accuracy of predictions into account. It relies on computing the Wasserstein distance between two empirical distributions P , Q W (P,Q) = inf π ( 1 n n∑ i ‖(xi − yπ(i)‖2 ) , (14) where x and y are the discrete samples of P and Q, and π denotes all permutations (Villani, 2008). To use this as an evaluation measure for multi-modal forecasts, we do the following. We select n samples from the test set with similar initial observations. If the dynamics in the data are multimodal the continuations of those n trajectories will be diverse and this should be reflected in the forecasts. For each of the n samples, the model generates 10 forecasts and we get n groups of samples. With Eq. (14) the empirical W-distance between the n true samples, and each group of generated samples can be calculated. The averaged empirical W-distance over groups evaluates how well the generated samples match the ground truth. Repeating this procedure with different initial trajectories evaluates the distance between the modeled distribution and the data distribution. Baselines. We choose baselines from three classes of models. Two stochastic recurrent models are variational recurrent neural network (VRNN) (Chung et al., 2015) and auto-encoding sequential Monte Carlo (AESMC) (Le et al., 2018). VRNN has a similar but more powerful generative model than VDM, and AESMC uses SMC to achieve a tighter lower bound. But compared to VDM, both methods have a less powerful inference model which limits their capacity to capture multi-modal distributions. The third baseline is a deep SSM. The recurrent Kalman network (RKN) (Becker et al., 2019) models the latent space with a locally linear SSMs, which makes the prediction step and update step analytic (as for Kalman filters (Kalman, 1960)). A final baseline is the conditional flow variational autoencoder (CF-VAE) (Bhattacharyya et al., 2019), which uses conditional normalizing flows to model a global prior for the future continuations and achieves state-of-the-art performances. To investigate the necessity of taking multiple samples in the VDM inference model, we also compared to VDM(k = 1) which uses only a single sample in Eq. (5). VDM(k = 1) has a simpler generative model than VRNN (it considers no autoregressive feedback of the observations x), but the same inference model. More ablations for the modeling choices of VDM are in Appendix H. For fair comparison, we fix the dimension of the latent variables zt and ht to be the same for VDM, AESMC, and VRNN which have the same resulting model size (except for the additional autoregressive feedback in VRNN). AESMC and VDM always use the same number of particles/samples. RKN does not have recurrent states, so we choose a higher latent dimension to make model size comparable. In contrast, CF-VAE has only one global latent variable which needs more capacity and we make it higher-dimensional than zt. Details for each experiment are in Appendix G. Synthetic data with multi-modal dynamics. We generate synthetic data with two dimensions and four modes and compare the performance of VDM with 9 samples (Fig. 3, left), VDM with a single sample (Fig. 
3, middle), and AESMC using 9 particles (Fig. 3, right). Since variational inference is known to try to match the aggregated posterior with the predictive prior (Tomczak & Welling, 2018), it is instructive to fit all three models and to look at their predictive prior p(z2|x≤1) and the aggregated posterior p(z2|D). Because of the multi-modal nature of the problem, all 3 aggregated posteriors are multi-modal, but only VDM(k = 9) learns a multi-modal predictive prior (thanks to its choice of variational family). Although AESMC achieves a good match between the prior and the aggregated posterior, the predictive prior does not clearly separate into different modes. In contrast, the inference model of VDM successfully uses the weights (Eq. (6)), which contain information about the incoming observation, to separate the latent states into separate modes. Stochastic Lorenz attractor. The Lorenz attractor is a system governed by ordinary differential equations. We add noise to the transition and emission function to make it stochastic (details in Appendix F.1). Under certain parameter settings it is chaotic – even small errors can cause considerable differences in the future. This makes forecasting its dynamics very challenging. All models are trained and then tasked to predict 90 future observations given 10 initial observations. Fig. 4 illustrates qualitatively that VDM (Fig. 4b) and AESMC (Fig. 4c) succeed in modeling the chaotic dynamics of the stochastic Lorenz attractor, while CF-VAE (Fig. 4d) and VRNN (Fig. 4e) miss local details, and RKN (Fig. 4f) which lacks the capacity for stochastic transitions does not work at all. VDM achieves the best scores on all metrics (Table 1). Since the dynamics of the Lorenz attractor are governed by ordinary differential equations, the transition dynamics at each time step are not obviously multi-modal, which explains why all models with stochastic transitions do reasonably well. Next, we will show the advantages of VDM on real-world data with multi-modal dynamics. Taxi trajectories. The taxi trajectory dataset involves taxi trajectories with variable lengths in Porto, Portugal. Each trajectory is a sequence of two dimensional locations over time. Here, we cut the trajectories to a fixed length of 30 to simplify the comparison (details in Appendix F.2). The task is to predict the next 20 observations given 10 initial observations. Ideally, the forecasts should follow the street map (though the map is not accessible to the models). The results in Table 2 show that VDM outperforms the other sequential latent variable models in all evaluations. However, it turns out that for multi-step forecasting learning global structure is advantageous, and CF-VAE which is a global latent variable model, achieves the highest results. However, this value doesn’t match the qualitative results in Fig. 1. Since CF-VAE has to encode the entire structure of the trajectory forcast into a single latent variable, its predictions seem to average over plausible continuations but are locally neither plausible nor accurate. In comparison, VDM and the other models involve a sequence of latent variables. As the forecasting progresses, the methods update their distribution over latest states, and the impact of the initial observations becomes weaker and weaker. As a result, local structure is captured more accurately. While the forecasts are plausible and can be highly diverse, they potentially evolve into other directions than the ground truth. 
For this reason, their multi-step prediction results are worse in terms of log-likelihood. That’s why the empirical W-distance is useful to complement the evaluation of multi-modal tasks. It reflects that the forecasts of VDM are diverse and plausible. Additionally, we illustrate the predictive prior p(zt|x<t) at different time steps in Fig. 5. VDM(k = 13) learns a multi-modal predictive prior, which VDM(k = 1) and AESMC approximate it with an uni-modal Gaussian. U.S. pollution data. In this experiment, we study VDM on the U.S. pollution dataset (details in Appendix F.3). The data is collected from counties in different states from 2000 to 2016. Each observation has 12 dimensions (mean, max value, and air quality index of NO2, O3, SO2, and O3). The goal is to predict monthly pollution values for the coming 18 months, given observations of the previous six months. We ignore the geographical location and time information to treat the development tendency of pollution in different counties and different times as i.i.d.. The unknown context information makes the dynamics multi-modal and challenging to predict accurately. Due to the small size and high dimensionality of the dataset, there are not enough samples with very similar initial observations. Thus, we cannot evaluate empirical W-distance in this experiment. In multi-step predictions and one-step predictions, VDM outperforms the other methods. NBA SportVu data. This dataset4 of sequences of 2D coordinates describes the movements of basketball players and the ball. We extract the trajectories and cut them to a fixed length of 30 to simplify the comparisons (details in Appendix F.4). The task is to predict the next 20 observations given 10 initial observations. Players can move anywhere on the court and hence their movement is less structured than the taxi trajectories which are constrained by the underlying street map. Due to this, the initial movement patters are not similar enough to each other to evaluate empirical Wdistance. In multi-step and one-step predictions, VDM outperforms the other baselines (Table 4). Fig. 6 illustrates qualitatively that VDM (Fig. 6b) and CF-VAE (Fig. 6d) succeed in capturing the multi-modal dynamics. The forecasts of AESMC (Fig. 6c) are less plausible (not as smooth as data), and VRNN (Fig. 6e) and RKN (Fig. 6f) fail in capturing the multi-modality. 5 CONCLUSION We have presented variational dynamic mixtures (VDM), a sequential latent variable model for multi-modal dynamics. The main contribution is a new variational family. It propagates multiple samples through an RNN to parametrize the posterior approximation with a mixture density network. Additionally, we have introduced the empirical Wasserstein distance for the evaluation of multimodal forecasting tasks, since it accounts for forecast accuracy and diversity. VDM succeeds in learning challenging multi-modal dynamics and outperforms existing work in various applications. 4A version of the dataset is available at https://www.stats.com/data-science/ A SUPPLEMENTARY TO WEIGHTING FUNCTION In this Appendix we give intuition for our choice of weighting function Eq. (6). Since we approximate the integrals in Eqs. 
(3) and (7) with samples from q̃(s_{t−1} | x_{<t})5 instead of samples from q(s_{t−1} | x_{≤t}), importance sampling tells us that the weights should be

$$\omega(s_{t-1}, x_t) = \frac{q(s_{t-1} \mid x_{\le t})}{\tilde{q}(s_{t-1} \mid x_{<t})} = \frac{q(x_t \mid s_{t-1}, x_{<t})}{q(x_t \mid x_{<t})}\,\frac{\tilde{q}(s_{t-1} \mid x_{<t})}{\tilde{q}(s_{t-1} \mid x_{<t})} = \frac{q(x_t \mid s_{t-1}, x_{<t})}{q(x_t \mid x_{<t})} \propto q(x_t \mid s_{t-1}, x_{<t}) \qquad (15)$$

This is consistent with our earlier definition of q(s_{t−1} | x_{≤t}) = ω(s_{t−1}, x_t) q̃(s_{t−1} | x_{<t}). The weights are proportional to the likelihood of the variational model q(x_t | s_{t−1}, x_{<t}). We choose to parametrize it using the likelihood of the generative model p(x_t | h_{t−1} = s_{t−1}) and get

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\Big(i = \arg\max_j\, p(x_t \mid h_{t-1} = s_{t-1}^{(j)})\Big). \qquad (16)$$

With this choice of weighting function, only the mixture component with the highest likelihood is selected to be in charge of modeling the current observation x_t. As a result, the other mixture components have the capacity to focus on different modes. This helps avoid the effect of mode-averaging. An alternative weighting function is given in Appendix H.

B SUPPLEMENTARY TO LOWER BOUND

Claim. The ELBO in Eq. (8) is a lower bound on the log evidence log p(x_t | x_{<t}),

$$\log p(x_t \mid x_{<t}) \ge \mathcal{L}_{\mathrm{ELBO}}(x_{\le t}, \phi). \qquad (17)$$

Proof. We write the data evidence as a double integral over the latent variables z_t and z_{<t}.

$$\log p(x_t \mid x_{<t}) = \log \iint p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid z_{<t}, x_{<t})\, p(z_{<t} \mid x_{<t})\, dz_t\, dz_{<t} \qquad (18)$$

We multiply the posterior at the previous time step p(z_{<t} | x_{<t}) with the ratio of the approximate posterior q(z_{<t} | x_{<t})/q(z_{<t} | x_{<t}) and the ratio f(a,b)/f(a,b), where f is any suitable function of two variables a and b. The following equality holds, since both ratios are equal to one.

$$\log p(x_t \mid x_{<t}) = \log \int \frac{f(a,b)}{f(a,b)}\, \frac{q(z_{<t} \mid x_{<t})}{q(z_{<t} \mid x_{<t})}\, p(z_{<t} \mid x_{<t}) \int p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid z_{<t}, x_{<t})\, dz_t\, dz_{<t} \qquad (19)$$

We move the integral over z_{<t} with respect to f(a,b) q(z_{<t} | x_{<t}) out of the log by applying Jensen's inequality.

$$\log p(x_t \mid x_{<t}) \ge \mathbb{E}_{f(a,b)\, q(z_{<t} \mid x_{<t})}\Big[\log \int p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid z_{<t}, x_{<t})\, dz_t\Big] - \mathbb{E}_{f(a,b)\, q(z_{<t} \mid x_{<t})}\Big[\log f(a,b) + \log \frac{q(z_{<t} \mid x_{<t})}{p(z_{<t} \mid x_{<t})}\Big] \qquad (20)$$

We introduce the variational posterior q(z_t | z_{<t}, x_{≤t}) and apply Jensen's inequality once more to replace the intractable integral log ∫ p(x_t | z_{≤t}, x_{<t}) p(z_t | z_{<t}, x_{<t}) dz_t with its lower bound.

$$\log p(x_t \mid x_{<t}) \ge \mathbb{E}_{f(a,b)\, q(z_{<t} \mid x_{<t})}\Big[\mathbb{E}_{q(z_t \mid z_{<t}, x_{\le t})}\Big[\log \frac{p(x_t \mid z_{\le t}, x_{<t})\, p(z_t \mid z_{<t}, x_{<t})}{q(z_t \mid z_{<t}, x_{\le t})}\Big]\Big] - \mathbb{E}_{f(a,b)\, q(z_{<t} \mid x_{<t})}\Big[\log f(a,b) + \log \frac{q(z_{<t} \mid x_{<t})}{p(z_{<t} \mid x_{<t})}\Big]. \qquad (21)$$

5The ∼ just helps to visually distinguish the two distributions that appear in the main text.

The expectation with respect to f(a,b) q(z_{<t} | x_{<t}) is approximated with samples. Instead of resampling the entire history, samples from previous time steps are reused (they have been aggregated by the RNN) and we sample according to Eq. (4). We plug in the weighting function ω(s_{t−1}^{(i)}, x_t) for f(a,b). The term log q(z_{<t} | x_{<t})/p(z_{<t} | x_{<t}) is not affected by the incoming observation x_t and can be treated as a constant. In this step, we plug in our generative model and inference model as described in the main text for p and q. The conditional independence assumptions can be read off Fig. 2. In the generative model h_{t−1}, and in the inference model s_{t−1}, summarize the dependencies of z_t on the previous latent variables z_{<t} and observations x_{<t}. In other words, we assume z_t is conditionally independent of z_{<t} and x_{<t} given s_{t−1}^{(i)} in the inference model (or given h_{t−1} in the generative model).
Putting these steps together yields the final bound

$$\log p(x_t \mid x_{<t}) \ge \frac{1}{k}\sum_i^k \omega(s_{t-1}^{(i)}, x_t)\, \mathbb{E}_{q(z_t \mid s_{t-1}^{(i)}, x_t)}\big[\log p(x_t \mid z_t, h_{t-1} = s_{t-1}^{(i)})\big] + \frac{1}{k}\sum_i^k \omega(s_{t-1}^{(i)}, x_t)\, \mathbb{E}_{q(z_t \mid s_{t-1}^{(i)}, x_t)}\Big[\log \frac{p(z_t \mid h_{t-1} = s_{t-1}^{(i)})}{q(z_t \mid s_{t-1}^{(i)}, x_t)}\Big] - \frac{1}{k}\sum_i^k \omega(s_{t-1}^{(i)}, x_t)\big[\log \omega(s_{t-1}^{(i)}, x_t) + C\big] \qquad (22)$$

C ALGORITHMS OF GENERATIVE MODEL AND INFERENCE MODEL

Algorithm 1 Generative model
Inputs: [µ_{z,τ}, σ²_{z,τ}], h_{τ−1}. Outputs: x_{τ+1:T}
  z_τ ∼ N(µ_{z,τ}, σ²_{z,τ} I)
  h_τ = φ^GRU(z_τ, h_{τ−1})
  for t = τ+1 : T do
    [µ_{0,t}, σ²_{0,t}] = φ^tra(h_{t−1})
    z_t ∼ N(µ_{0,t}, σ²_{0,t} I)
    h_t = φ^GRU(z_t, h_{t−1})
    [µ_{x,t}, σ²_{x,t}] = φ^dec(z_t, h_{t−1})
    x_t ∼ N(µ_{x,t}, σ²_{x,t} I)
  end for

Algorithm 2 Inference model
Inputs: x_{1:τ}, h_0. Outputs: [µ_{z,1:τ}, σ²_{z,1:τ}], h_{τ−1}
  [µ_{z,1}, σ²_{z,1}] = φ^inf(h_0, x_1)
  for t = 2 : τ do
    z^{(i)}_{t−1} ∼ N(µ_{z,t−1}, σ²_{z,t−1} I)
    s^{(i)}_{t−1} = φ^GRU(z^{(i)}_{t−1}, h_{t−2})
    [µ^{(i)}_{z,t}, σ^{(i)2}_{z,t}] = φ^inf(s^{(i)}_{t−1}, x_t)
    ω^{(i)}_t := 1(i = argmax_j p(x_t | h_{t−1} = s^{(j)}_{t−1}))
    [µ_{z,t}, σ²_{z,t}] = Σ_i^k ω^{(i)}_t N(µ^{(i)}_{z,t}, σ^{(i)2}_{z,t} I)
    h_{t−1} ≈ Σ_i^k ω^{(i)}_t s^{(i)}_{t−1}
  end for

D SUPPLEMENTARY TO STOCHASTIC CUBATURE APPROXIMATION

Cubature approximation. The cubature approximation is widely used in the engineering community as a deterministic method to numerically integrate a nonlinear function f(·) of a Gaussian random variable z ∼ N(µ_z, σ²_z I), with z ∈ R^d. The method proceeds by constructing 2d+1 sigma points z^{(i)} = µ_z + σ_z ξ^{(i)}. The cubature approximation is simply a weighted sum of the sigma points propagated through the nonlinear function f(·),

$$\int f(z)\, \mathcal{N}(z \mid \mu_z, \sigma_z^2 I)\, dz \approx \sum_{i=1}^{2d+1} \gamma^{(i)} f(z^{(i)}). \qquad (23)$$

Simple analytic formulas determine the weights γ^{(i)} and the locations of the ξ^{(i)}:

$$\gamma^{(i)} = \begin{cases} \frac{1}{2(n+\kappa)}, & i = 1, \dots, 2n \\ \frac{\kappa}{n+\kappa}, & i = 0 \end{cases} \qquad \xi^{(i)} = \begin{cases} \sqrt{n+\kappa}\, e_i, & i = 1, \dots, n \\ -\sqrt{n+\kappa}\, e_{i-n}, & i = n+1, \dots, 2n \\ 0, & i = 0 \end{cases} \qquad (24)$$

where κ is a hyperparameter controlling the spread of the sigma points in the n-dimensional sphere. Further, e_i represents a basis of the n-dimensional space, which is chosen to be a unit vector in Cartesian space, e.g. e_1 = [1, 0, ..., 0].

Stochastic cubature approximation. In SCA, we adopt the computation of ξ^{(i)} in Eq. (24), and infuse the sigma points with standard Gaussian noise ε ∼ N(0, I) to obtain stochastic sigma variables s^{(i)} = µ_z + σ_z(ξ^{(i)} + ε). We choose κ = 0.5 to set the weights γ^{(i)} equal.

E SUPPLEMENTARY TO ABLATION STUDY OF REGULARIZATION TERMS

We investigate the effect of the regularization terms using the synthetic data from Fig. 3. As Table 5 shows, VDM(k = 9) can be trained successfully with L_ELBO only, and both regularization terms improve the performance (negative log-likelihood of multi-step-ahead prediction), while VDM(k = 1) does not work regardless of the regularization terms. Additionally, we tried to train the model only with the regularization terms (each separately or together), but these options diverged during training.

F SUPPLEMENTARY TO EXPERIMENTS SETUP

F.1 STOCHASTIC LORENZ ATTRACTOR SETUP

The Lorenz attractor is a system of three ordinary differential equations:

$$\frac{dx}{dt} = \sigma(y - x), \qquad \frac{dy}{dt} = x(\rho - z) - y, \qquad \frac{dz}{dt} = xy - \beta z, \qquad (25)$$

where σ, ρ, and β are system parameters. We set σ = 10, ρ = 28, and β = 8/3 to make the system chaotic. We simulate the trajectories with RK4 using a step size of 0.01. To make the system stochastic, we add process noise to the transition, which is a mixture of two Gaussians 0.5 N(m_0, P) + 0.5 N(m_1, P), where

$$m_0 = \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix}, \qquad m_1 = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}, \qquad P = \begin{bmatrix} 0.06 & 0.03 & 0.01 \\ 0.03 & 0.03 & 0.03 \\ 0.01 & 0.03 & 0.05 \end{bmatrix}. \qquad (26)$$

In addition, we add Gaussian noise with zero mean and diagonal standard deviation [0.6, 0.4, 0.8] as the observation noise.
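To make this setup concrete, below is a minimal NumPy sketch (ours, not the authors' released code) of how such a stochastic Lorenz attractor can be simulated: RK4 integration with the stated parameters, mixture-of-Gaussians process noise, and Gaussian observation noise. The initial condition, how often noise is injected (here once per RK4 step), and all function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): simulate a stochastic Lorenz attractor
# with RK4 integration, mixture-of-Gaussians process noise, and Gaussian observation noise.
import numpy as np

SIGMA, RHO, BETA = 10.0, 28.0, 8.0 / 3.0   # chaotic parameter setting from Eq. (25)
DT = 0.01                                   # RK4 step size

def lorenz(state):
    x, y, z = state
    return np.array([SIGMA * (y - x), x * (RHO - z) - y, x * y - BETA * z])

def rk4_step(state):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * DT * k1)
    k3 = lorenz(state + 0.5 * DT * k2)
    k4 = lorenz(state + DT * k3)
    return state + DT / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Process noise: mixture of two Gaussians with means m0, m1 and shared covariance P (Eq. (26)).
M = np.array([[0.0, 1.0, 0.0], [0.0, -1.0, 0.0]])
P = np.array([[0.06, 0.03, 0.01],
              [0.03, 0.03, 0.03],
              [0.01, 0.03, 0.05]])
OBS_STD = np.array([0.6, 0.4, 0.8])         # observation-noise standard deviation per dimension

def simulate(T=100, seed=0):
    rng = np.random.default_rng(seed)
    state = rng.normal(size=3)              # random initial condition (an assumption)
    obs = np.empty((T, 3))
    for t in range(T):
        state = rk4_step(state)
        comp = rng.integers(2)              # pick one of the two mixture components
        state = state + rng.multivariate_normal(M[comp], P)
        obs[t] = state + rng.normal(scale=OBS_STD)   # add observation noise
    return obs

if __name__ == "__main__":
    print(simulate(T=5))
```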
In total, we simulate 5000 sequences for the training set, 200 sequences for the validation set, and 800 sequences for the test set. For the evaluation of the Wasserstein distance, we additionally simulate 10 groups of sequences. Each group has 100 sequences with similar initial observations.

F.2 TAXI TRAJECTORIES SETUP

The full dataset is very large and the length of the trajectories varies. We select the trajectories inside the Porto city area with lengths between 30 and 45, and only extract the first 30 coordinates of each trajectory. Thus we obtain a dataset with a fixed sequence length of 30. We split it into a training set of size 86386, a validation set of size 200, and a test set of size 10000.

F.3 U.S. POLLUTION DATA SETUP

The U.S. pollution dataset consists of four pollutants (NO2, O3, SO2, and CO). Each of them has 3 major values (mean, max value, and air quality index). It is collected from counties in different states for every day from 2000 to 2016. Since the daily measurements are too noisy, we first compute the monthly average of each measurement and then extract non-overlapping segments of length 24 from the dataset. In total, we extract 1639 sequences for the training set, 25 sequences for the validation set, and 300 sequences for the test set.

F.4 NBA SPORTVU DATA SETUP

We use a sliding window of width 30 and stride 30 to cut the long sequences into short sequences of a fixed length of 30. We split them into a training set of size 8324, a validation set of size 489, and a test set of size 980.

G IMPLEMENTATION DETAILS

Here, we provide implementation details of the VDM models used across the datasets in the main paper. VDM consists of
• encoder: embeds the first observation x_0 into the latent space as the initial latent state z_0.
• transition network: propagates the latent states z_t.
• decoder: maps the latent states z_t and the recurrent states h_t to observations x_t.
• inference network: updates the latent states z_t given observations x_t.
• latent GRU: summarizes the historic latent states z_{≤t} in the recurrent states h_t.
• discriminator: used for adversarial training.

The optimizer is Adam with a learning rate of 1e−3. In all experiments, the networks have the same architectures but different sizes. The model size depends on the observation dimension dx, the latent state dimension dz, and the recurrent state dimension dh. The number of samples used at each time step during training is 2dz + 1. If a model output is a variance, we take its exponential to ensure it is non-negative.
• Encoder: input size is dx; 3 linear layers of size 32, 32, and 2dz, with 2 ReLUs.
• Transition network: input size is dh; 3 linear layers of size 64, 64, and 2dz, with 3 ReLUs.
• Decoder: input size is dh + dz; 3 linear layers of size 32, 32, and 2dx, with 2 ReLUs.
• Inference network: input size is dh + dx; 3 linear layers of size 64, 64, and 2dz, with 3 ReLUs.
• Latent GRU: one-layer GRU of input size dz and hidden size dh.
• Discriminator: a one-layer GRU of input size dx and hidden size dh to summarize the previous observations as the condition, and a stack of 3 linear layers of size 32, 32, and 1, with 2 ReLUs and one sigmoid as the output activation, whose input size is dh + dx.

Stochastic Lorenz attractor. Observation dimension dx is 3, latent state dimension dz is 6, and recurrent state dimension dh is 32.

Taxi trajectories. Observation dimension dx is 2, latent state dimension dz is 6, and recurrent state dimension dh is 32.

U.S. pollution data.6 Observation dimension dx is 12, latent state dimension dz is 8, and recurrent state dimension dh is 48. 6https://www.kaggle.com/sogun3/uspollution

NBA SportVu data. Observation dimension dx is 2, latent state dimension dz is 6, and recurrent state dimension dh is 32.

Here, we give the number of parameters for each model in the different experiments in Table 6.

H ADDITIONAL EVALUATION RESULTS

We evaluate more variants of VDM in the chosen experiments to investigate different choices of sampling methods (Monte Carlo and SCA) and weighting functions (Eqs. (27) and (28)). In addition to Eq. (27) described in the main text, we define one other choice in Eq. (28):

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\Big(i = \arg\max_j\, p(x_t \mid h_{t-1} = s_{t-1}^{(j)})\Big) \qquad (27)$$

$$\omega_t^{(i)} = \omega(s_{t-1}^{(i)}, x_t)/k := \mathbb{1}\big(i = j \sim \mathrm{Cat}(\cdot \mid \omega_1, \dots, \omega_k)\big), \qquad \omega_j \propto p(x_t \mid h_{t-1} = s_{t-1}^{(j)}), \qquad (28)$$

We define the weighting function as an indicator function. In Eq. (27), the non-zero component is set by selecting the sample that achieves the highest likelihood; in Eq. (28), the non-zero index is sampled from a categorical distribution with probabilities proportional to the likelihoods. The first choice (Eq. (27)) is referred to as δ-function, and the second choice (Eq. (28)) as categorical distribution (see the code sketch after the subsection headings below). Besides these, in VDM-Net, we evaluate the performance of replacing the closed-form inference of the weighting function with an additional inference network. In Table 7, we show the choices made in the different variants. All models are trained with L_ELBO and L_pred.

H.1 STOCHASTIC LORENZ ATTRACTOR

H.2 TAXI TRAJECTORIES

H.3 U.S. POLLUTION DATA
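To make the two weighting choices concrete, here is a small NumPy sketch (ours, not the authors' code) of the δ-function weights of Eq. (27) and the categorical weights of Eq. (28), given the per-sample log-likelihoods log p(x_t | h_{t−1} = s^{(j)}_{t−1}); the array shapes and function names are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the two weighting-function variants
# compared in Appendix H: δ-function (Eq. (27)) vs. categorical sampling (Eq. (28)).
import numpy as np

def delta_weights(log_liks):
    """Eq. (27): put all mass on the sample with the highest likelihood."""
    w = np.zeros_like(log_liks)
    w[np.argmax(log_liks)] = 1.0
    return w

def categorical_weights(log_liks, rng):
    """Eq. (28): sample the non-zero index with probability proportional to the likelihood."""
    probs = np.exp(log_liks - np.max(log_liks))   # numerically stable softmax over log-likelihoods
    probs /= probs.sum()
    w = np.zeros_like(log_liks)
    w[rng.choice(len(log_liks), p=probs)] = 1.0
    return w

# Usage: log_liks[j] stands in for log p(x_t | h_{t-1} = s_{t-1}^{(j)}) of the k propagated samples.
rng = np.random.default_rng(0)
log_liks = rng.normal(size=13)                    # k = 13 hypothetical per-sample log-likelihoods
print(delta_weights(log_liks), categorical_weights(log_liks, rng))
```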
1. What is the focus of the reviewed paper, and what are its strengths and weaknesses?
2. What are some questions raised by the reviewer regarding the mathematical formulation of the model?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What is the contribution of the proposed approach, particularly in terms of neural representation?
5. What are the limitations regarding the NeMF approach?
6. What is the significance of the proposed modules in generating optical flow dataset?
7. How does the reviewer evaluate the paper's clarity, quality, novelty, and reproducibility?
8. What are the strengths and weaknesses of the proposed differentiable data generation pipeline?
9. What is the main contribution of the paper on dictionary learning, and what are the strengths of the paper, especially in the theoretical analysis?
10. Do you have any concerns about the semantic correspondence representation?
Review
Review Variational dynamic mixtures

The paper extends the VRNN to cater for multi-modality in the probability distribution governing a dynamic system. This is particularly important when average trajectories are highly unlikely or even physically impossible. To achieve this, the authors start from VRNN and alter the inference model so that it uses stochastic recurrent states and a mixture variational posterior distribution (with 0/1 weights to trigger only the most likely mixture components, encouraging multi-modality). As a minor contribution they propose a new evaluation metric for measuring the diversity of generations based on the Wasserstein distance. This is important in their case, since the likelihood evaluation may favour generations from a single mode - a situation their model shall prevent.

The paper is very well motivated (with taxi trajectory prediction as a running use case) and well positioned with respect to the state of the art. The experimental evaluation is convincing, using both synthetic and real experiments to support the claims, and shows advantages over baselines. Ablation studies examine the importance of some more ad-hoc choices (showing these help but are not critical). The paper is well written and structured to help the reader follow the main thoughts. However, there are some points in the mathematical formulation of the model which raise questions and deserve to be explained better - see below. For this reason I recommend not accepting the paper for now, but I am very much willing to improve my score significantly once these have been clarified.

From Fig 2 it seems that the stochastic states s_t do not exist in the generative model, they only live in the inference model. Right?

From eq. (1) and Fig 2a we have h_t = φ^GRU(z_t, h_{t−1}) and the generative distribution p(z_{t+1} | z_{≤t}) is a function φ^tra(h_t). Is it correct to think about this as the prior distribution for z_{t+1}?

My understanding of eq. (3) is that we have q̃(s_{t−1} | x_{<t}) = 1 if s_{t−1} = h_{t−1} and zero otherwise. Is that right?

You say q(s_{t−1} | x_{≤t}) = ω(s_{t−1}, x_t) q̃(s_{t−1} | x_{<t}). How can you condition s_{t−1} on the future value x_t? (Also in comparison to equation (4), where you assume q(z_t | x_{≤T}) = q(z_t | x_{≤t}) for all t, thus avoiding dependence on future values of x.) If my understanding of eq (3) is correct (see above), this is trivially 1 or 0 irrespective of ω.

In eq (4) you say q(z_{1:T} | x_{1:T}) = Π_{t=1}^T q(z_t | x_{≤t}). Is this correct? I would expect Π_{t=1}^T q(z_t | z_{<t}, x_{≤t}).

Again in eq (4) you say the final result is Π_{t=1}^T ∫ q(z_t | s_{t−1}, x_t) q(s_{t−1} | x_{≤t}) ds_{t−1}. Is this correct? I would expect Π_{t=1}^T ∫ q(z_t | s_{t−1}, x_{≤t}) q(s_{t−1} | x_{≤t}) ds_{t−1}. Does this mean you assume the conditional independence q(z_t | s_{t−1}, x_{≤t}) = q(z_t | s_{t−1}, x_t)?

Samples s_{t−1}^{(i)} in eq (5) are constructed by sampling z_{t−1}^{(i)} ∼ q(z_{t−1} | x_{<t}) and passing it through the recurrent net s_{t−1}^{(i)} ← h_{t−1}^{(i)} = φ^GRU(z_{t−1}^{(i)}, h_{t−2}), right?

Eq (5) and annex A: How come ω is a function of x_t only and not of x_{<t}? Given your importance sampling argument, q conditions on x_{<t} as well. Or do you assume the conditional independence q(x_t | s_{t−1}, x_{<t}) = q(x_t | s_{t−1})?

What is p(x_t | s_{t−1}) used in eq (6)?
This looks like a generative distribution (p), but it has not been defined before, and s_{t−1} does not exist in the generative model (figure 1a).

ELBO proof in annex B: can you please provide details (equations, detailing also the conditional independence assumptions you take) for getting from eq (7) to eq (8)?

Wasserstein distance in eq (14): are the differences calculated between the generated trajectories and the corresponding true sample (the group sample), or all the n true samples?

AFTER REVIEW UPDATE: I find the revised version much improved, explaining the inference model much more clearly. The lack of clarity was for me the main reason for evaluating the paper as below the acceptance threshold, despite the fact that otherwise I found the paper to be good and useful for the community. As the lack of clarity has now, in my view, been resolved, I increase my score to 7 - Good paper, accept.
ICLR
Title Variational Dynamic Mixtures Abstract Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Modeaveraging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains. 1 INTRODUCTION Making sense of time series data is an important challenge in various domains, including ML for climate change. One important milestone to reach the climate goals is to significantly reduce the CO2 emissions from mobility (Rogelj et al., 2016). Accurate forecasting models of typical driving behavior and of typical pollution levels over time can help both lawmakers and automotive engineers to develop solutions for cleaner mobility. In these applications, no accurate physical model of the entire dynamic system is known or available. Instead, data-driven models, specifically deep probabilistic time series models, can be used to solve the necessary tasks including forecasting. The dynamics in such data can be highly multi-modal. At any given part of the observed sequence, there might be multiple distinct continuations of the data that are plausible, but the average of these behaviors is unlikely, or even physically impossible. Consider for example a dataset of taxi trajectories1. In each row of Fig. 1a, we have selected 50 routes from the dataset with similar starting behavior (blue). Even though these routes are quite similar to each other in the first 10 way points, the continuations of the trajectories (red) can exhibit quite distinct behaviors and lead to points on any far edge of the map. The trajectories follow a few main traffic arteries, these could be considered the main modes of the data distribution. We would like to learn a generative model of the data, that based on some initial way points, can forecast plausible continuations for the trajectories. Many existing methods make restricting modeling assumptions such as Gaussianity to make learning tractable and efficient. But trying to capture the dynamics through unimodal distributions can lead either to “over-generalization”, (i.e. putting probability mass in spurious regions) or on focusing only on the dominant mode and thereby neglecting important structure of the data. Even neural approaches, with very flexible generative models can fail to fully capture this multi-modality because their capacity is often limited through the assumptions of their inference model. To address this, we develop variational dynamic mixtures (VDM). Its generative process is a sequential latent variable model. The main novelty is a new multi-modal variational family which makes learning and inference multi-modal yet tractable. In summary, our contributions are • A new inference model. 
We establish a new type of variational family for variational inference of sequential latent variables. By successively marginalizing over previous latent states, the procedure can be efficiently carried-out in a single forward pass and induces a multi-modal posterior 1https://www.kaggle.com/crailtap/taxi-trajectory approximation. We can see in Fig. 1b, that VDM trained on a dataset of taxi trajectories produces forecasts with the desired multi-modality while other methods overgeneralize. • An evaluation metric for multi-modal tasks. The negative log-likelihood measures predictive accuracy but neglects an important aspect of multi-modal forecasts – sample diversity. In Section 4, we derive a score based on the Wasserstein distance (Villani, 2008) which evaluates both sample quality and diversity. This metric complements our evaluation based on log-likelihoods. • An extensive empirical study. in Section 4, we use VDM to study various datasets, including a synthetic data with four modes, a stochastic Lorenz attractor, the taxi trajectories, and a U.S. pollution dataset with the measurements of various pollutants over time. We illustrate VDM’s ability in modeling multi-modal dynamics, and provide quantitative comparisons to other methods showing that VDM compares favorably to previous work. 2 RELATED WORK Neural recurrent models. Recurrent neural networks (RNNs) such as LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Chung et al., 2014) have proven successful on many time series modeling tasks. However, as deterministic models they cannot capture uncertainties in their dynamic predictions. Stochastic RNNs make these sequence models non-deterministic (Chung et al., 2015; Fraccaro et al., 2016; Gemici et al., 2017; Li & Mandt, 2018). For example, the variational recurrent neural network (VRNN) (Chung et al., 2015) enables multiple stochastic forecasts due to its stochastic transition dynamics. An extension of VRNN (Goyal et al., 2017) uses an auxiliary cost to alleviate the KL-vanishing problem. It improves on VRNN inference by forcing the latent variables to also be predictive of future observations. Another line of related methods rely on particle filtering (Naesseth et al., 2018; Le et al., 2018; Hirt & Dellaportas, 2019) and in particular sequential Monte Carlo (SMC) to improve the evidence lower bound. In contrast, VDM adopts an explicitly multi-modal posterior approximation. Another SMC-based work (Saeedi et al., 2017) employs search-based techniques for multi-modality but is limited to models with finite discrete states. Recent works (Schmidt & Hofmann, 2018; Schmidt et al., 2019; Ziegler & Rush, 2019) use normalizing flows in the latent space to model the transition dynamics. A normalizing flow requires many layers to transform its base distribution into a truly multi-modal distribution in practice. In contrast, mixture density networks (as used by VDM) achieve multi-modality by mixing only one layer of neural networks. A task orthogonal to multi-modal inference is learning disentangled representations. Here too, mixture models are used (Chen et al., 2016; Li et al., 2017). These papers use discrete variables and a mutual information based term to disentangle different aspects of the data. VAE-like models (Bhattacharyya et al., 2018; 2019) and GAN-like models (Sadeghian et al., 2019; Kosaraju et al., 2019) only have global, time independent latent variables. Yet, they show good results on various tasks, including forecasting. 
With a deterministic decoder, these models focus on average dynamics and don’t capture local details (including multi-modal transitions) very well. Sequential latent variable models are described next. Deep state-space models. Classical State-space models (SSMs) are popular due to their tractable inference and interpretable predictions. Similarly, deep SSMs with locally linear transition dynamics enjoy tractable inference (Karl et al., 2017; Fraccaro et al., 2017; Rangapuram et al., 2018; Becker et al., 2019). However, these models are often not expressive enough to capture complex (or highly multi-modal) dynamics. Nonlinear deep SSMs (Krishnan et al., 2017; Zheng et al., 2017; Doerr et al., 2018; De Brouwer et al., 2019; Gedon et al., 2020) are more flexible. Their inference is often no longer tractable and requires variational approximations. Unfortunately, in order for the inference model to be tractable, the variational approximations are often simplistic and don’t approximate multi-modal posteriors well with negative effects on the trained models. Multi-modality can be incorporated via additional discrete switching latent variables, such as recurrent switching linear dynamical systems (Linderman et al., 2017; Nassar et al., 2018; Becker-Ehmck et al., 2019). However, these discrete states make inference more involved. 3 VARIATIONAL DYNAMIC MIXTURES We develop VDM, a new sequential latent variable model for multi-modal dynamics. Given sequential observations x1:T = (x1, . . . ,xT ), VDM assumes that the underlying dynamics are governed by latent states z1:T = (z1, . . . , zT ). We first present the generative process and the multi-modal inference model of VDM. We then derive a new variational objective that encourages multi-modal posterior approximations and we explain how it is regularized via hybrid-training. Finally, we introduce a new sampling method used in the inference procedure. Generative model. The generative process consists of a transition model and an emission model. The transition model p(zt | z<t) describes the temporal evolution of the latent states and the emission model p(xt | z≤t) maps the states to observations. We assume they are parameterized by two separate neural networks, the transition network φtra and the emission network φdec.To give the model the capacity to capture longer range temporal correlations we parametrize the transition model with a recurrent architecture φGRU (Auger-Méthé et al., 2016; Zheng et al., 2017) such as a GRU (Chung et al., 2014). The latent states zt are sampled recursively from zt | z<t ∼ N (µ0,t, σ20,tI), where [µ0,t, σ20,t] = φtra(ht−1), ht−1 = φGRU(zt−1,ht−2), (1) and are then decoded such that the observations can be sampled from the emission model, xt | z≤t ∼ N (µx,t, σ2x,tI), where [µx,t, σ2x,t] = φdec(zt,ht−1). (2) This generative process is similar to (Chung et al., 2015), though we did not incorporate autoregressive feedback due to its negative impact on long-term generation (Ranzato et al., 2016; Lamb et al., 2016). The competitive advantage of VDM comes from a more expressive inference model. Inference model. VDM is based on a new procedure for multi-modal inference. The main idea is that to approximate the posterior at time t, we can use the posterior approximation of the previous time step and exploit the generative model’s transition model φGRU. This leads to a sequential inference procedure. We first use the forward model to transform the approximate posterior at time t − 1 into a distribution at time t. 
In a second step, we use samples from the resulting transformed distribution and combine each sample with data evidence xt, where every sample parameterizes a Gaussian mixture component. As a result, we obtain a multi-modal posterior distribution that depends on data evidence, but also on the previous time step’s posterior. In more detail, for every zt, we define its corresponding recurrent state as the transformed random variable st = φGRU(zt,ht−1), using a deterministic hidden state ht−1 = E [st−1]. The variational family of VDM is defined as follows: q(z1:T | x1:T ) = T∏ t=1 q(zt | x≤t) = T∏ t=1 ∫ q(zt | st−1,xt)q(st−1 | x≤t)dst−1. (3) Chung et al. (2015) also use a sequential inference procedure, but without considering the distribution of st. Only a single sample is propagated through the recurrent network and all other information about the distribution of previous latent states z<t is lost. In contrast, VDM explicitly maintains st as part of the inference model. Through marginalization, the entire distribution is taken into account for inferring the next state zt. Beyond the factorization assumption and the marginal consistency constraint of Eq. (3), the variational family of VDM needs two more choices to be fully specified; First, one has to choose the parametrizations of q(zt | st−1,xt) and q(st−1 | x≤t) and second, one has to choose a sampling method to approximate the marginalization in Eq. (3). These choices determine the resulting factors q(zt | x≤t) of the variational family. We assume that the variational distribution of the recurrent state factorizes as q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t), i.e. it is the distribution of the recurrent state given the past observation2, re-weighted by a weighting function ω(st−1,xt) which involves only the current observations. For VDM, we only need samples from q̃(st−1 | x<t), which are obtained by sampling from the previous posterior approximation q(zt−1 | x<t) and transforming the sample with the RNN, s (i) t−1 ∼ q̃(st−1 | x<t) equiv. to s (i) t−1 = φ GRU(z (i) t−1,ht−2), z (i) t−1 ∼ q(zt−1 | x<t), (4) where i indexes the samples. The RNN φGRU has the same parameters as in the generative model. Augmenting the variational model with the recurrent state has another advantage; approximating the marginalization in Eq. (3) with k samples from q(st−1 | x≤t) and choosing a Gaussian parametrization for q(zt | st−1,xt) results in a q-distribution q(zt | x≤t) that resembles a mixture density network (Bishop, 2006), which is a convenient choice to model multi-modal distributions. q(zt | x≤t) = k∑ i ω (i) t N (µ (i) z,t, σ (i)2 z,t I), [µ (i) z,t, σ (i)2 z,t ] = φ inf (s (i) t−1,xt). (5) We assume q(zt | st−1,xt) to be Gaussian and use an inference network φinf to model the effect of the observation xt and recurrent state st−1 on the mean and variance of the mixture components. The mixture weights ω(i)t := ω(s (i) t−1,xt)/k come from the variational distribution q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t) and importance sampling3. We are free to choose how to parametrize the weights, as long as all variational distributions are properly normalized. Setting ω (i) t = ω(s (i) t−1,xt)/k := 1(i = argmax j p(xt | ht−1 = s(j)t−1)), (6) achieves this. In Appendix A, we explain this choice with importance sampling and in Appendix H, we compare the performance of VDM under alternative variational choices for the weights. In the next time-step, plugging the variational distribution q(zt | x≤t) into Eq. 
(4) yields the next distribution over recurrent states q̃(st | x≤t). For this, the expected recurrent state ht−1 is required. 2q̃(st−1 | x<t) is the distribution obtained by transforming the previous zt−1 ∼ q(zt−1|x<t) through the RNN. It can be expressed analytically using the Kronecker δ to compare whether the stochastic variable st−1 equals the output of the RNN: q̃(st−1 | x<t) ∝ ∫ δ(st−1 − φGRU(zt−1,ht−2))q(zt−1 | xt−1, λt−1)dzt−1. 3the ω adjusts for using samples from q̃(st−1 | x<t) when marginalizing over ω(st−1,xt)q̃(st−1 | x<t) We approximate the update using the same k samples (and therefore the same weights) as in Eq. (5). ht−1 = E[st−1] = ∫ st−1 q(st−1 | x≤t)dst−1 ≈ k∑ i ω (i) t s (i) t−1. (7) A schematic view of the generative and inference model of VDM is shown in Fig. 2. In summary, the inference model of VDM alternates between Eqs. (4) to (7). Latent states are sampled from the posterior approximation of the previous time-step and transformed by Eq. (4) into samples of the recurrent state of the RNN. These are then combined with the new observation xt to produce the next variational posterior Eq. (5) and the expected recurrent state is updated (Eq. (7)). These are then used in Eq. (4) again. Approximating the marginalization in Eq. (3) with a single sample, recovers the inference model of VRNN (Chung et al., 2015), and fails in modeling multi-modal dynamics as shown in Fig. 3. In comparison, VDM’s approximate marginalization over the recurrent states with multiple samples succeeds in modeling multi-modal dynamics. Variational objective. We develop an objective to optimize the variational parameters of VDM φ = [φtra, φdec, φGRU, φinf ]. The evidence lower bound (ELBO) at each time step is LELBO(x≤t, φ) := 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(xt | zt,ht−1 = s(i)t−1) ] + 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(zt | ht−1 = s(i)t−1) q(zt | s(i)t−1,xt) ] − 1 k k∑ i ω(s (i) t−1,xt) [ logω(s (i) t−1,xt) +C ] (8) Claim 1. The ELBO in Eq. (8) is a lower bound on the log evidence log p(xt | x<t), log p(xt | x<t) ≥ LELBO(x≤t, φ), (see proof in Appendix B) . (9) In addition to the ELBO, the objective of VDM has two regularization terms, LVDM(φ) = T∑ t=1 Epdata [−LELBO(x≤t, φ)− ω1Lpred(x≤t, φ)] + ω2Ladv(x≤t, φ) . (10) In an ablation study in Appendix E, we compare the effect of including and excluding the regularization terms in the objective. VDM is competitive without these terms, but we got the strongest results by setting ω1,2 = 1 (this is the only nonzero value we tried. This hyperparameter could be tuned even further.) The first regularization term Lpred, encourages the variational posterior (from the previous time step) to produce samples that maximize the predictive likelihood, Lpred(x≤t, φ) = logEq(st−1|x<t) [p(xt | st−1,x<t)] ≈ log 1 k k∑ i p(xt | s(i)t−1) . (11) This regularization term is helpful to improve the prediction performance, since it depends on the predictive likelihood of samples, which isn’t involved in the ELBO. The second optional regularization term Ladv (Eq. (12)) is based on ideas from hybrid adversarial-likelihood training (Grover et al., 2018; Lucas et al., 2019). These training strategies have been developed for generative models of images to generate sharper samples while avoiding “mode collapse”. We adapt these ideas to generative models of dynamics. The adversarial term Ladv uses a forward KL-divergence, which enables “quality-driven training” to discourage probability mass in spurious areas. 
Ladv(x≤t, φ) = DKL(p(xt | x<t)‖pD(xt | x<t)) = E [log p(xt | x<t)− log pD(xt | x<t)] (12) The expectation is taken w.r.t. p(xt | x<t). The true predictive distribution pD(xt | x<t) is unknown. Instead, we can train the generator of a conditional GAN (Mirza & Osindero, 2014), while assuming an optimal discriminator. As a result, we optimize Eq. (12) in an adversarial manner, conditioning on x<t at each time step. Details about the discriminator are in Appendix G. Stochastic cubature approximation (SCA). The variational family of VDM is defined by a number of modeling choices, including the factorization and marginal consistency assumptions of Eq. (3), the parametrization of the transition and inference networks Eqs. (4) and (5), and the choice of weighting function ω(·). It is also sensitive to the choice of sampling method which we discuss here. In principle, we could use Monte-Carlo methods. However, for a relatively small number of samples k, Monte-Carlo methods don’t have a mechanism to control the quality of samples. We instead develop a semi-stochastic approach based on the cubature approximation (Wan & Van Der Merwe, 2000; Wu et al., 2006; Arasaratnam & Haykin, 2009), which chooses samples more carefully. The cubature approximation proceeds by constructing k = 2d+1 so-called sigma points, which are optimally spread out on the d-dimensional Gaussian with the same mean and covariance as the distribution we need samples from. In SCA, the deterministic sigma points are infused with Gaussian noise to obtain stochastic sigma variables. A detailed derivation of SCA is in Appendix D. We use SCA for various reasons: First, it typically requires fewer samples than Monte-Carlo methods because the sigma points are carefully chosen to capture the first two moments of the underlying distribution. Second, it ensures a persistence of the mixture components; when we resample, we sample another nearby point from the mixture component and not an entirely new location. 4 EVALUATION AND EXPERIMENTS In this empirical study, we evaluate VDM’s ability to model multi-modal dynamics and show its competitive forecasting performance in various domains. We first introduce the evaluation metrics and baselines. Experiments on synthetic data demonstrate that VDM is truly multi-modal thereby supporting the modeling choices of Section 3, especially for the inference model. Then, experiments on real-world datasets with challenging multi-modal dynamics show the benefit of VDM over stateof-the art (deep) probabilistic time-series models. Evaluation metrics. In the experiments, we always create a training set, a validation set, and a test set. During validation and test, each trajectory is split into two parts; initial observations (given to the models for inference) and continuations of the trajectories (to be predicted and not accessible to the models). The inference models are used to process the initial observations and to infer latent states. These are then processed by the generative models to produce forecasts. We use 3 criteria to evaluate these forecasts (i) multi-steps ahead prediction p(xt+1:t+τ | x1:t), (ii) one-step-ahead prediction p(xt+1 | x1:t), and (iii) empirical Wasserstein distance. As in other work (Lee et al., 2017; Bhattacharyya et al., 2018; 2019), (i) and (ii) are reported in terms of negative log-likelihood. While the predictive distribution for one-step-ahead prediction is in closed-form, the long-term forecasts have to be computed using samples. 
For each ground truth trajectory x we generate n = 1000 forecasts x̂i given initial observations from the beginning of the trajectory NLL = − log ( 1 n n∑ i 1√ 2π exp ( − (x̂i − x) 2 2 )) , (13) This evaluates the predictive accuracy but neglects a key aspect of multi-modal forecasts – diversity. We propose a new evaluation metric, which takes both diversity and accuracy of predictions into account. It relies on computing the Wasserstein distance between two empirical distributions P , Q W (P,Q) = inf π ( 1 n n∑ i ‖(xi − yπ(i)‖2 ) , (14) where x and y are the discrete samples of P and Q, and π denotes all permutations (Villani, 2008). To use this as an evaluation measure for multi-modal forecasts, we do the following. We select n samples from the test set with similar initial observations. If the dynamics in the data are multimodal the continuations of those n trajectories will be diverse and this should be reflected in the forecasts. For each of the n samples, the model generates 10 forecasts and we get n groups of samples. With Eq. (14) the empirical W-distance between the n true samples, and each group of generated samples can be calculated. The averaged empirical W-distance over groups evaluates how well the generated samples match the ground truth. Repeating this procedure with different initial trajectories evaluates the distance between the modeled distribution and the data distribution. Baselines. We choose baselines from three classes of models. Two stochastic recurrent models are variational recurrent neural network (VRNN) (Chung et al., 2015) and auto-encoding sequential Monte Carlo (AESMC) (Le et al., 2018). VRNN has a similar but more powerful generative model than VDM, and AESMC uses SMC to achieve a tighter lower bound. But compared to VDM, both methods have a less powerful inference model which limits their capacity to capture multi-modal distributions. The third baseline is a deep SSM. The recurrent Kalman network (RKN) (Becker et al., 2019) models the latent space with a locally linear SSMs, which makes the prediction step and update step analytic (as for Kalman filters (Kalman, 1960)). A final baseline is the conditional flow variational autoencoder (CF-VAE) (Bhattacharyya et al., 2019), which uses conditional normalizing flows to model a global prior for the future continuations and achieves state-of-the-art performances. To investigate the necessity of taking multiple samples in the VDM inference model, we also compared to VDM(k = 1) which uses only a single sample in Eq. (5). VDM(k = 1) has a simpler generative model than VRNN (it considers no autoregressive feedback of the observations x), but the same inference model. More ablations for the modeling choices of VDM are in Appendix H. For fair comparison, we fix the dimension of the latent variables zt and ht to be the same for VDM, AESMC, and VRNN which have the same resulting model size (except for the additional autoregressive feedback in VRNN). AESMC and VDM always use the same number of particles/samples. RKN does not have recurrent states, so we choose a higher latent dimension to make model size comparable. In contrast, CF-VAE has only one global latent variable which needs more capacity and we make it higher-dimensional than zt. Details for each experiment are in Appendix G. Synthetic data with multi-modal dynamics. We generate synthetic data with two dimensions and four modes and compare the performance of VDM with 9 samples (Fig. 3, left), VDM with a single sample (Fig. 
3, middle), and AESMC using 9 particles (Fig. 3, right). Since variational inference is known to try to match the aggregated posterior with the predictive prior (Tomczak & Welling, 2018), it is instructive to fit all three models and to look at their predictive prior p(z2|x≤1) and the aggregated posterior p(z2|D). Because of the multi-modal nature of the problem, all 3 aggregated posteriors are multi-modal, but only VDM(k = 9) learns a multi-modal predictive prior (thanks to its choice of variational family). Although AESMC achieves a good match between the prior and the aggregated posterior, the predictive prior does not clearly separate into different modes. In contrast, the inference model of VDM successfully uses the weights (Eq. (6)), which contain information about the incoming observation, to separate the latent states into separate modes. Stochastic Lorenz attractor. The Lorenz attractor is a system governed by ordinary differential equations. We add noise to the transition and emission function to make it stochastic (details in Appendix F.1). Under certain parameter settings it is chaotic – even small errors can cause considerable differences in the future. This makes forecasting its dynamics very challenging. All models are trained and then tasked to predict 90 future observations given 10 initial observations. Fig. 4 illustrates qualitatively that VDM (Fig. 4b) and AESMC (Fig. 4c) succeed in modeling the chaotic dynamics of the stochastic Lorenz attractor, while CF-VAE (Fig. 4d) and VRNN (Fig. 4e) miss local details, and RKN (Fig. 4f) which lacks the capacity for stochastic transitions does not work at all. VDM achieves the best scores on all metrics (Table 1). Since the dynamics of the Lorenz attractor are governed by ordinary differential equations, the transition dynamics at each time step are not obviously multi-modal, which explains why all models with stochastic transitions do reasonably well. Next, we will show the advantages of VDM on real-world data with multi-modal dynamics. Taxi trajectories. The taxi trajectory dataset involves taxi trajectories with variable lengths in Porto, Portugal. Each trajectory is a sequence of two dimensional locations over time. Here, we cut the trajectories to a fixed length of 30 to simplify the comparison (details in Appendix F.2). The task is to predict the next 20 observations given 10 initial observations. Ideally, the forecasts should follow the street map (though the map is not accessible to the models). The results in Table 2 show that VDM outperforms the other sequential latent variable models in all evaluations. However, it turns out that for multi-step forecasting learning global structure is advantageous, and CF-VAE which is a global latent variable model, achieves the highest results. However, this value doesn’t match the qualitative results in Fig. 1. Since CF-VAE has to encode the entire structure of the trajectory forcast into a single latent variable, its predictions seem to average over plausible continuations but are locally neither plausible nor accurate. In comparison, VDM and the other models involve a sequence of latent variables. As the forecasting progresses, the methods update their distribution over latest states, and the impact of the initial observations becomes weaker and weaker. As a result, local structure is captured more accurately. While the forecasts are plausible and can be highly diverse, they potentially evolve into other directions than the ground truth. 
For this reason, their multi-step prediction results are worse in terms of log-likelihood. That’s why the empirical W-distance is useful to complement the evaluation of multi-modal tasks. It reflects that the forecasts of VDM are diverse and plausible. Additionally, we illustrate the predictive prior p(zt|x<t) at different time steps in Fig. 5. VDM(k = 13) learns a multi-modal predictive prior, which VDM(k = 1) and AESMC approximate it with an uni-modal Gaussian. U.S. pollution data. In this experiment, we study VDM on the U.S. pollution dataset (details in Appendix F.3). The data is collected from counties in different states from 2000 to 2016. Each observation has 12 dimensions (mean, max value, and air quality index of NO2, O3, SO2, and O3). The goal is to predict monthly pollution values for the coming 18 months, given observations of the previous six months. We ignore the geographical location and time information to treat the development tendency of pollution in different counties and different times as i.i.d.. The unknown context information makes the dynamics multi-modal and challenging to predict accurately. Due to the small size and high dimensionality of the dataset, there are not enough samples with very similar initial observations. Thus, we cannot evaluate empirical W-distance in this experiment. In multi-step predictions and one-step predictions, VDM outperforms the other methods. NBA SportVu data. This dataset4 of sequences of 2D coordinates describes the movements of basketball players and the ball. We extract the trajectories and cut them to a fixed length of 30 to simplify the comparisons (details in Appendix F.4). The task is to predict the next 20 observations given 10 initial observations. Players can move anywhere on the court and hence their movement is less structured than the taxi trajectories which are constrained by the underlying street map. Due to this, the initial movement patters are not similar enough to each other to evaluate empirical Wdistance. In multi-step and one-step predictions, VDM outperforms the other baselines (Table 4). Fig. 6 illustrates qualitatively that VDM (Fig. 6b) and CF-VAE (Fig. 6d) succeed in capturing the multi-modal dynamics. The forecasts of AESMC (Fig. 6c) are less plausible (not as smooth as data), and VRNN (Fig. 6e) and RKN (Fig. 6f) fail in capturing the multi-modality. 5 CONCLUSION We have presented variational dynamic mixtures (VDM), a sequential latent variable model for multi-modal dynamics. The main contribution is a new variational family. It propagates multiple samples through an RNN to parametrize the posterior approximation with a mixture density network. Additionally, we have introduced the empirical Wasserstein distance for the evaluation of multimodal forecasting tasks, since it accounts for forecast accuracy and diversity. VDM succeeds in learning challenging multi-modal dynamics and outperforms existing work in various applications. 4A version of the dataset is available at https://www.stats.com/data-science/ A SUPPLEMENTARY TO WEIGHTING FUNCTION In this Appendix we give intuition for our choice of weighting function Eq. (6). Since we approximate the integrals in Eqs. 
(3) and (7) with samples from q̃(st−1 | x<t) 5 instead of samples from q(st−1 | x≤t), importance sampling tells us that the weigths should be ω(st−1,xt) = q(st−1 | x≤t) q̃(st−1 | x<t) = q(xt | st−1,x<t) q(xt | x<t) q̃(st−1 | x<t) q̃(st−1 | x<t) = q(xt | st−1,x<t) q(xt | x<t) ∝ q(xt | st−1,x<t) (15) This is consistent with out earlier definition of q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t). The weights are proportional to the likelihood of the variational model q(xt | st−1,x<t). We choose to parametrize it using the likelihood of the generative model p(xt | ht−1 = st−1) and get ω (i) t = ω(s (i) t−1,xt)/k := 1(i = argmax j p(xt | ht−1 = s(j)t−1)). (16) With this choice of the weighting function, only the mixture component with the highest likelihood is selected to be in charge of modeling the current observation xt. As a result, other mixture components have the capacity to focus on different modes. This helps avoid the effect of mode-averaging. An alternative weight function is given in Appendix H. B SUPPLEMENTARY TO LOWER BOUND Claim. The ELBO in Eq. (8) is a lower bound on the log evidence log p(xt | x<t), log p(xt | x<t) ≥ LELBO(x≤t, φ) . (17) Proof. We write the data evidence as the double integral over the latent variables zt, and z<t. log p(xt | x<t) = log ∫∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)p(z<t | x<t)dztdz<t (18) We multiply the posterior at the previous time step p(z<t | x<t) with the ratio of the approximated posterior q(z<t|x<t)q(z<t|x<t) and the ratio f(a,b) f(a,b) , where f is any suitable function of two variables a and b. The following equality holds, since the ratios equal to one. log p(xt | x<t) = log ∫ f(a,b) f(a,b) q(z<t | x<t) q(z<t | x<t) p(z<t | x<t) ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dztdz<t (19) We move the integral over z<t with respect to f(a,b)q(z<t | x<t) out of the log operation with applying the Jensen’s inequality. log p(xt | x<t) ≥ Ef(a,b)q(z<t|x<t) [ log ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dzt ] (20) − Ef(a,b)q(z<t|x<t) [ log f(a,b) + log q(z<t | x<t) p(z<t | x<t) ] We introduce the variational posterior q(zt | z<t,x≤t), and apply Jensen’s inequality to replace the intractable integral log ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dzt with its lower bound. log p(xt | x<t) ≥ Ef(a,b)q(z<t|x<t) [ Eq(zt|z<t,x≤t) [ log p(xt | z≤t,x<t)p(zt | z<t,x<t) q(zt | z<t,x≤t) ]] − Ef(a,b)q(z<t|x<t) [ log f(a,b) + log q(z<t | x<t) p(z<t | x<t) ] . (21) 5The ∼ just helps to visually distinguish the two distributions that appear in the main text. The expectation with respect to f(a,b)q(z<t | x<t) is approximated with samples. Instead of resampling the entire history, samples from previous time steps are reused (they have been aggregated by the RNN) and we sample according to Eq. (4). We plugg in the weighting function ω(s(i)t−1,xt) for f(a,b). The term log q(z<t|x<t)p(z<t|x<t) is not affected by the incoming observation xt and can be treated as a constant. In this step, we plug in our generative model and inference model as they are described in the main text for p and q. The conditional independence assumptions can be read of Fig. 2. In the generative model ht−1 and in the inference model st−1 summarize the dependencies of zt on the previous latent variables z<t and observations x<t. In other words, we assume zt is conditionally independent on z<t and x<t given s (i) t−1 in the inference model (or given ht−1 in the generative model). 
log p(xt | x<t) ≥ 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(xt | zt,ht−1 = s(i)t−1) ] + 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(zt | ht−1 = s(i)t−1) q(zt | s(i)t−1,xt) ] − 1 k k∑ i ω(s (i) t−1,xt) [ logω(s (i) t−1,xt) +C ] (22) C ALGORITHMS OF GENERATIVE MODEL AND INFERENCE MODEL Algorithm 1 Generative model Inputs: [µz,τ , σ2z,τ ],hτ−1 Outputs: xτ+1:T zτ ∼ N (µz,τ , σ2z,τ I) hτ = φ GRU(zτ ,hτ−1) for t = τ + 1 : T do [µ0,t, σ 2 0,t] = φ tra(ht−1) zt ∼ N (µ0,t, σ20,tI) ht = φ GRU(zt,ht−1) [µx,t, σ 2 x,t] = φ dec(zt,ht−1) xt ∼ N (µx,t, σ2x,tI) end for Algorithm 2 Inference model Inputs: x1:τ ,h0 Outputs: [µz,1:τ , σ2z,1:τ ],hτ−1 [µz,1, σ 2 z,1] = φ inf (h0,x1) for t = 2 : τ do z (i) t−1 ∼ N (µz,t−1, σ2z,t−1I) s (i) t−1 = φ GRU(z (i) t−1,ht−2) [µ (i) z,t, σ (i)2 z,t ] = φ inf (s (i) t−1,xt) ω (i) t := 1(i = argmaxj p(xt | ht−1 = s (j) t−1)) [µz,t, σ 2 z,t] = ∑k i ω (i) t N (µ (i) z,t, σ (i)2 z,t I) ht−1 ≈ ∑k i ω (i) t s (i) t−1 end for D SUPPLEMENTARY TO STOCHASTIC CUBATURE APPROXIMATION Cubature approximation. The cubature approximation is widely used in the engineering community as a deterministic method to numerically integrate a nonlinear function f(·) of Gaussian random variable z ∼ N (µz, σ2zI), with z ∈ Rd. The method proceeds by constructing 2d+1 sigma points z(i) = µz+σzξ(i). The cubature approximation is simply a weighted sum of the sigma points propagated through the nonlinear function f(·), ∫ f(z)N (z | µz, σ2zI)dz ≈ 2d+1∑ i=1 γ(i)f(z(i)) . (23) Simple analytic formulas determine the computation of weights γ(i) and the locations of ξ(i). γ(i) = { 1 2(n+κ) , i = 1, ..., 2n κ n+κ , i = 0 ξ(i) = √ n+ κei , i = 1, ..., n − √ n+ κei−n , i = n+ 1, ..., 2n 0 , i = 0 , (24) where κ is a hyperparameter controlling the spread of the sigma points in the n-dimensional sphere. Further ei represents a basis in the n-dimensional space, which is choosen to be a unit vector in cartesian space, e.g. e1 = [1, 0, ..., 0]. Stochastic cubature approximation. In SCA, we adopt the computation of ξ(i) in Eq. (24), and infuse the sigma points with standard Gaussian noise ∼ N (0, I) to obtain stochastic sigma variables s(i) = µz + σz(ξ(i) + ). We choose κ = 0.5 to set the weights γ(i) equally. E SUPPLEMENTARY TO ABLATION STUDY OF REGULARIZATION TERMS We investigate the effect of the regularization terms using the synthetic data from Fig. 3. We can see in Table 5, VDM(k = 9) can be trained successfully withLELBO only, and both regularization terms improve the performance (negative log-likelihood of multi-steps ahead prediction), while VDM(k = 1) doesn’t work whatever the regularization terms. Additionally, we tried to train the model only with the regularization terms (each separate or together) but these options diverged during training. F SUPPLEMENTARY TO EXPERIMENTS SETUP F.1 STOCHASTIC LORENZ ATTRACTOR SETUP Lorenz attractor is a system of three ordinary differential equations: dx dt = σ(y − x), dy dt = x(ρ− z)− y, dz dt = xy − βz , (25) where σ, ρ, and β are system parameters. We set σ = 10, ρ = 28 and β = 8/3 to make the system chaotic. We simulate the trajectories by RK4 with a step size of 0.01. To make it stochastic, we add process noise to the transition, which is a mixture of two Gaussians 0.5N (m0,P) + 0.5N (m2,P), where m0 = [ 0 1 0 ] , m1 = [ 0 −1 0 ] , P = [ 0.06 0.03 0.01 0.03 0.03 0.03 0.01 0.03 0.05 ] . (26) Besides, we add a Gaussian noise with zero mean and diagonal standard deviation [0.6, 0.4, 0.8] as the observation noise. 
Totally, we simulate 5000 sequences as training set, 200 sequences as validation set, and 800 sequences as test set. For evaluation of Wasserstein distance, we simulate 10 groups of sequences additionally. Each group has 100 sequences with similar initial observations. F.2 TAXI TRAJECTORIES SETUP The full dataset is very large and the length of trajectories varies. We select the trajectories inside the Porto city area with length in the range of 30 and 45, and only extract the first 30 coordinates of each trajectory. Thus we obtain a dataset with a fixed sequence length of 30. We split it into the training set of size 86386, the validation set of size 200, and the test set of size 10000. F.3 U.S. POLLUTION DATA SETUP The U.S. pollution dataset consists of four pollutants (NO2, O3, SO2 and O3). Each of them has 3 major values (mean, max value, and air quality index). It is collected from counties in different states for every day from 2000 to 2016. Since the daily measurements are too noisy, we firstly compute the monthly average values of each measurement, and then extract non-overlapped segments with the length of 24 from the dataset. Totally we extract 1639 sequences as training set, 25 sequences as validation set, and 300 sequences as test set. F.4 NBA SPORTVU DATA SETUP We use a sliding window of the width 30, and the stride 30 to cut the long sequences to short sequences of a fixed length 30. We split them into the training set of size 8324, the validation set of size 489, and the test set of size 980. G IMPLEMENTATION DETAILS Here, we provide implementation details of VDM models used across the three datasets in the main paper. VDM consists of • encoder: embed the first observation x0 to the latent space as the initial latent state z0. • transition network: propagate the latent states zt. • decoder: map the latent states zt and the recurrent states ht to observations xt. • inference network: update the latent states zt given observations xt. • latent GRU: summarize the historic latent states z≤t in the recurrent states ht. • discriminator: be used for adversarial training. The optimizer is Adam with the learning rate of 1e − 3. In all experiments, the networks have the same architectures but different sizes. The model size depends on observation dimension dx, latent state dimension dz, and recurrent state dimension dh. The number of samples used at each time step in the training is 2dz +1. If the model output is variance, we use the exponential of it to ensure its non-negative. • Encoder: input size is dx; 3 linear layers of size 32, 32 and 2dz, with 2 ReLUs. • Transition network: input size is dh; 3 linear layers of size 64, 64, and 2dz, with 3 ReLUs. • Decoder: input size is dh + dz; 3 linear layers of size 32, 32 and 2dx, with 2 ReLUs. • Inference network: input size is dh + dx; 3 linear layers of size 64, 64, and 2dz, with 3 ReLUs. • Latent GRU: one layer GRU of input size dz and hidden size dh • Discriminator: one layer GRU of input size dx and hidden size dh to summarize the pre- vious observations as the condition, and a stack of 3 linear layers of size 32, 32 and 1, with 2 ReLUs and one sigmoid as the output activation, whose input size is dh + dx. Stochastic Lorenz attractor. Observation dimension dx is 3, latent state dimension dz is 6, and recurrent state dimension dh is 32. Taxi trajectories. Observation dimension dx is 2, latent state dimension dz is 6, and recurrent state dimension dh is 32. U.S. 
pollution data (https://www.kaggle.com/sogun3/uspollution). Observation dimension dx is 12, latent state dimension dz is 8, and recurrent state dimension dh is 48.
NBA SportVu data. Observation dimension dx is 2, latent state dimension dz is 6, and recurrent state dimension dh is 32.
The number of parameters of each model in the different experiments is given in Table 6.
H ADDITIONAL EVALUATION RESULTS
We evaluate more variants of VDM in the chosen experiments to investigate the different choices of sampling methods (Monte Carlo and SCA) and weighting functions (Eqs. (27) and (28)). In addition to Eq. (27) described in the main text, we define one other choice in Eq. (28):
ω_t^{(i)} = ω(s_{t-1}^{(i)}, x_t)/k := 1(i = argmax_j p(x_t | h_{t-1} = s_{t-1}^{(j)}))   (27)
ω_t^{(i)} = ω(s_{t-1}^{(i)}, x_t)/k := 1(i = j ∼ Cat(· | ω_1, . . . , ω_k)),  ω_j ∝ p(x_t | h_{t-1} = s_{t-1}^{(j)})   (28)
We define the weighting function as an indicator function: in Eq. (27) the non-zero component is set by selecting the sample that achieves the highest likelihood, and in Eq. (28) the non-zero index is sampled from a categorical distribution with probabilities proportional to the likelihoods. We refer to the first choice (Eq. (27)) as the δ-function variant and to the second choice (Eq. (28)) as the categorical-distribution variant. In addition, in VDM-Net, we evaluate the performance of replacing the closed-form inference of the weighting function with an additional inference network. Table 7 lists the choices used in the different variants. All models are trained with L_ELBO and L_pred.
H.1 STOCHASTIC LORENZ ATTRACTOR
H.2 TAXI TRAJECTORIES
H.3 U.S. POLLUTION DATA
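Referring back to Eqs. (27) and (28) above, the two weighting choices can be sketched as follows. This is an illustrative sketch only: `log_liks` is assumed to be an array holding log p(x_t | h_{t-1} = s_{t-1}^{(j)}) for the k propagated samples, and the function names are hypothetical.

```python
# Sketch of the two weighting functions compared in Appendix H (names are illustrative).
import numpy as np

def delta_weights(log_liks):
    # Eq. (27): put all mass on the sample with the highest likelihood.
    w = np.zeros_like(log_liks)
    w[np.argmax(log_liks)] = 1.0
    return w

def categorical_weights(log_liks, rng):
    # Eq. (28): draw the non-zero index with probability proportional to the likelihood.
    p = np.exp(log_liks - np.max(log_liks))
    p = p / p.sum()
    w = np.zeros_like(log_liks)
    w[rng.choice(len(log_liks), p=p)] = 1.0
    return w
```

Both variants return a one-hot weight vector, so in either case a single mixture component is selected at each time step; they differ only in whether that component is chosen greedily or sampled.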
1. What is the main contribution of the paper, and how does it address the problem of capturing multi-modality in data?
2. What are the strengths of the proposed approach, particularly in comparison to baseline models?
3. What are the concerns regarding the paper's experimental design and related work?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What additional experiments or analyses would further support the paper's claims?
Review
Review
Summary
This paper introduces variational dynamic mixtures (VDM), a new variational family, and demonstrates that using VDM to model the approximate posterior in sequential latent variable models can better capture multi-modality in data. VDM includes a distribution over recurrent states in the inference model, such that a sampling-based marginalization of this distribution reduces the approximate posterior to a mixture model. Setting the weights such that only the most probable mixture component is selected allows other mixture components to capture other modes. The authors validate VDM on both synthetic and real multimodal datasets, on which it outperforms baselines with respect to negative log-likelihood and a new empirical Wasserstein distance.
Positives
• This paper tackles an important and well-motivated problem: capturing multi-modality in data. This is very practical and I believe will be of interest to the ICLR community.
• The paper is well-written and easy to follow. I appreciate that the authors highlight their design decisions in the main text, while also providing alternatives and/or ablation studies in the appendix.
• VDM improves performance while also using a non-autoregressive generative model, compared to baselines with more powerful generative models (e.g. VRNN). This highlights the effectiveness of their inference model, also illustrated in the synthetic experiment in Figure 3.
• The inference model in VDM is also quite general, as a single-sample approximation in their inference model is equivalent to the inference model in VRNN.
Concerns
• I think this paper would benefit from one additional dataset where the multimodality is inherent in the data. Taxi trajectories are multimodal but also highly structured (trajectories must be on roads), so a different dataset to consider could be a pedestrian or sports trajectory dataset, where the data is inherently multimodal but less structured. I think some baseline models can do better in this setting (at least qualitatively), so I'm curious if VDM still convincingly outperforms them. My main concern is that the pollution data is synthetically multimodal (because I think that with contextual information the data is more periodic) and the Lorenz attractor experiment only highlights that VDM can handle stochasticity, which is also present in trajectory data.
• The related work is missing a section about other methods that try to capture multimodality. For instance, VRNNs are known to not capture multimodality well, and there have been extensions along this direction such as in [1]. There's also another line of work that introduces mutual information between trajectories and latent variables in the objective, such as in [2]. The “sequence forecasting” paragraph can be omitted/combined with “neural recurrent models”.
• The results on the taxi dataset look good. It would be great if you can also provide an analysis of the resulting latent space, similar to what was done in Figure 3.
[1] Goyal et al. Z-Forcing: Training Stochastic Recurrent Networks
[2] Li et al. InfoGAIL: Interpretable Imitation Learning from Visual Demonstrations
Minor Comments
• The bold/thin lines in Figure 2 can be hard to distinguish. I recommend using a dotted line instead.
• In Figure 3, how many timesteps are there in total? Are the blue/orange trajectories in the left plots corresponding to the blue/orange clusters in the middle/right plots?
• Tables 1, 2 and 3 all cut through paragraphs in the middle, which can be distracting.
• Some quotations on page 5 use closing quotation marks (”) on both sides.
Post-rebuttal comments
Thank you for adding the additional experiments and analysis. The results with the basketball dataset (Table 4, Figure 6) and the visualization of the latent distribution (Figure 5) address my initial concerns and showcase the versatility of VDM. I've increased my score from 6 to 7.
ICLR
Title Variational Dynamic Mixtures Abstract Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Modeaveraging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains. 1 INTRODUCTION Making sense of time series data is an important challenge in various domains, including ML for climate change. One important milestone to reach the climate goals is to significantly reduce the CO2 emissions from mobility (Rogelj et al., 2016). Accurate forecasting models of typical driving behavior and of typical pollution levels over time can help both lawmakers and automotive engineers to develop solutions for cleaner mobility. In these applications, no accurate physical model of the entire dynamic system is known or available. Instead, data-driven models, specifically deep probabilistic time series models, can be used to solve the necessary tasks including forecasting. The dynamics in such data can be highly multi-modal. At any given part of the observed sequence, there might be multiple distinct continuations of the data that are plausible, but the average of these behaviors is unlikely, or even physically impossible. Consider for example a dataset of taxi trajectories1. In each row of Fig. 1a, we have selected 50 routes from the dataset with similar starting behavior (blue). Even though these routes are quite similar to each other in the first 10 way points, the continuations of the trajectories (red) can exhibit quite distinct behaviors and lead to points on any far edge of the map. The trajectories follow a few main traffic arteries, these could be considered the main modes of the data distribution. We would like to learn a generative model of the data, that based on some initial way points, can forecast plausible continuations for the trajectories. Many existing methods make restricting modeling assumptions such as Gaussianity to make learning tractable and efficient. But trying to capture the dynamics through unimodal distributions can lead either to “over-generalization”, (i.e. putting probability mass in spurious regions) or on focusing only on the dominant mode and thereby neglecting important structure of the data. Even neural approaches, with very flexible generative models can fail to fully capture this multi-modality because their capacity is often limited through the assumptions of their inference model. To address this, we develop variational dynamic mixtures (VDM). Its generative process is a sequential latent variable model. The main novelty is a new multi-modal variational family which makes learning and inference multi-modal yet tractable. In summary, our contributions are • A new inference model. 
We establish a new type of variational family for variational inference of sequential latent variables. By successively marginalizing over previous latent states, the procedure can be efficiently carried-out in a single forward pass and induces a multi-modal posterior 1https://www.kaggle.com/crailtap/taxi-trajectory approximation. We can see in Fig. 1b, that VDM trained on a dataset of taxi trajectories produces forecasts with the desired multi-modality while other methods overgeneralize. • An evaluation metric for multi-modal tasks. The negative log-likelihood measures predictive accuracy but neglects an important aspect of multi-modal forecasts – sample diversity. In Section 4, we derive a score based on the Wasserstein distance (Villani, 2008) which evaluates both sample quality and diversity. This metric complements our evaluation based on log-likelihoods. • An extensive empirical study. in Section 4, we use VDM to study various datasets, including a synthetic data with four modes, a stochastic Lorenz attractor, the taxi trajectories, and a U.S. pollution dataset with the measurements of various pollutants over time. We illustrate VDM’s ability in modeling multi-modal dynamics, and provide quantitative comparisons to other methods showing that VDM compares favorably to previous work. 2 RELATED WORK Neural recurrent models. Recurrent neural networks (RNNs) such as LSTMs (Hochreiter & Schmidhuber, 1997) and GRUs (Chung et al., 2014) have proven successful on many time series modeling tasks. However, as deterministic models they cannot capture uncertainties in their dynamic predictions. Stochastic RNNs make these sequence models non-deterministic (Chung et al., 2015; Fraccaro et al., 2016; Gemici et al., 2017; Li & Mandt, 2018). For example, the variational recurrent neural network (VRNN) (Chung et al., 2015) enables multiple stochastic forecasts due to its stochastic transition dynamics. An extension of VRNN (Goyal et al., 2017) uses an auxiliary cost to alleviate the KL-vanishing problem. It improves on VRNN inference by forcing the latent variables to also be predictive of future observations. Another line of related methods rely on particle filtering (Naesseth et al., 2018; Le et al., 2018; Hirt & Dellaportas, 2019) and in particular sequential Monte Carlo (SMC) to improve the evidence lower bound. In contrast, VDM adopts an explicitly multi-modal posterior approximation. Another SMC-based work (Saeedi et al., 2017) employs search-based techniques for multi-modality but is limited to models with finite discrete states. Recent works (Schmidt & Hofmann, 2018; Schmidt et al., 2019; Ziegler & Rush, 2019) use normalizing flows in the latent space to model the transition dynamics. A normalizing flow requires many layers to transform its base distribution into a truly multi-modal distribution in practice. In contrast, mixture density networks (as used by VDM) achieve multi-modality by mixing only one layer of neural networks. A task orthogonal to multi-modal inference is learning disentangled representations. Here too, mixture models are used (Chen et al., 2016; Li et al., 2017). These papers use discrete variables and a mutual information based term to disentangle different aspects of the data. VAE-like models (Bhattacharyya et al., 2018; 2019) and GAN-like models (Sadeghian et al., 2019; Kosaraju et al., 2019) only have global, time independent latent variables. Yet, they show good results on various tasks, including forecasting. 
With a deterministic decoder, these models focus on average dynamics and don’t capture local details (including multi-modal transitions) very well. Sequential latent variable models are described next. Deep state-space models. Classical State-space models (SSMs) are popular due to their tractable inference and interpretable predictions. Similarly, deep SSMs with locally linear transition dynamics enjoy tractable inference (Karl et al., 2017; Fraccaro et al., 2017; Rangapuram et al., 2018; Becker et al., 2019). However, these models are often not expressive enough to capture complex (or highly multi-modal) dynamics. Nonlinear deep SSMs (Krishnan et al., 2017; Zheng et al., 2017; Doerr et al., 2018; De Brouwer et al., 2019; Gedon et al., 2020) are more flexible. Their inference is often no longer tractable and requires variational approximations. Unfortunately, in order for the inference model to be tractable, the variational approximations are often simplistic and don’t approximate multi-modal posteriors well with negative effects on the trained models. Multi-modality can be incorporated via additional discrete switching latent variables, such as recurrent switching linear dynamical systems (Linderman et al., 2017; Nassar et al., 2018; Becker-Ehmck et al., 2019). However, these discrete states make inference more involved. 3 VARIATIONAL DYNAMIC MIXTURES We develop VDM, a new sequential latent variable model for multi-modal dynamics. Given sequential observations x1:T = (x1, . . . ,xT ), VDM assumes that the underlying dynamics are governed by latent states z1:T = (z1, . . . , zT ). We first present the generative process and the multi-modal inference model of VDM. We then derive a new variational objective that encourages multi-modal posterior approximations and we explain how it is regularized via hybrid-training. Finally, we introduce a new sampling method used in the inference procedure. Generative model. The generative process consists of a transition model and an emission model. The transition model p(zt | z<t) describes the temporal evolution of the latent states and the emission model p(xt | z≤t) maps the states to observations. We assume they are parameterized by two separate neural networks, the transition network φtra and the emission network φdec.To give the model the capacity to capture longer range temporal correlations we parametrize the transition model with a recurrent architecture φGRU (Auger-Méthé et al., 2016; Zheng et al., 2017) such as a GRU (Chung et al., 2014). The latent states zt are sampled recursively from zt | z<t ∼ N (µ0,t, σ20,tI), where [µ0,t, σ20,t] = φtra(ht−1), ht−1 = φGRU(zt−1,ht−2), (1) and are then decoded such that the observations can be sampled from the emission model, xt | z≤t ∼ N (µx,t, σ2x,tI), where [µx,t, σ2x,t] = φdec(zt,ht−1). (2) This generative process is similar to (Chung et al., 2015), though we did not incorporate autoregressive feedback due to its negative impact on long-term generation (Ranzato et al., 2016; Lamb et al., 2016). The competitive advantage of VDM comes from a more expressive inference model. Inference model. VDM is based on a new procedure for multi-modal inference. The main idea is that to approximate the posterior at time t, we can use the posterior approximation of the previous time step and exploit the generative model’s transition model φGRU. This leads to a sequential inference procedure. We first use the forward model to transform the approximate posterior at time t − 1 into a distribution at time t. 
In a second step, we use samples from the resulting transformed distribution and combine each sample with data evidence xt, where every sample parameterizes a Gaussian mixture component. As a result, we obtain a multi-modal posterior distribution that depends on data evidence, but also on the previous time step’s posterior. In more detail, for every zt, we define its corresponding recurrent state as the transformed random variable st = φGRU(zt,ht−1), using a deterministic hidden state ht−1 = E [st−1]. The variational family of VDM is defined as follows: q(z1:T | x1:T ) = T∏ t=1 q(zt | x≤t) = T∏ t=1 ∫ q(zt | st−1,xt)q(st−1 | x≤t)dst−1. (3) Chung et al. (2015) also use a sequential inference procedure, but without considering the distribution of st. Only a single sample is propagated through the recurrent network and all other information about the distribution of previous latent states z<t is lost. In contrast, VDM explicitly maintains st as part of the inference model. Through marginalization, the entire distribution is taken into account for inferring the next state zt. Beyond the factorization assumption and the marginal consistency constraint of Eq. (3), the variational family of VDM needs two more choices to be fully specified; First, one has to choose the parametrizations of q(zt | st−1,xt) and q(st−1 | x≤t) and second, one has to choose a sampling method to approximate the marginalization in Eq. (3). These choices determine the resulting factors q(zt | x≤t) of the variational family. We assume that the variational distribution of the recurrent state factorizes as q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t), i.e. it is the distribution of the recurrent state given the past observation2, re-weighted by a weighting function ω(st−1,xt) which involves only the current observations. For VDM, we only need samples from q̃(st−1 | x<t), which are obtained by sampling from the previous posterior approximation q(zt−1 | x<t) and transforming the sample with the RNN, s (i) t−1 ∼ q̃(st−1 | x<t) equiv. to s (i) t−1 = φ GRU(z (i) t−1,ht−2), z (i) t−1 ∼ q(zt−1 | x<t), (4) where i indexes the samples. The RNN φGRU has the same parameters as in the generative model. Augmenting the variational model with the recurrent state has another advantage; approximating the marginalization in Eq. (3) with k samples from q(st−1 | x≤t) and choosing a Gaussian parametrization for q(zt | st−1,xt) results in a q-distribution q(zt | x≤t) that resembles a mixture density network (Bishop, 2006), which is a convenient choice to model multi-modal distributions. q(zt | x≤t) = k∑ i ω (i) t N (µ (i) z,t, σ (i)2 z,t I), [µ (i) z,t, σ (i)2 z,t ] = φ inf (s (i) t−1,xt). (5) We assume q(zt | st−1,xt) to be Gaussian and use an inference network φinf to model the effect of the observation xt and recurrent state st−1 on the mean and variance of the mixture components. The mixture weights ω(i)t := ω(s (i) t−1,xt)/k come from the variational distribution q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t) and importance sampling3. We are free to choose how to parametrize the weights, as long as all variational distributions are properly normalized. Setting ω (i) t = ω(s (i) t−1,xt)/k := 1(i = argmax j p(xt | ht−1 = s(j)t−1)), (6) achieves this. In Appendix A, we explain this choice with importance sampling and in Appendix H, we compare the performance of VDM under alternative variational choices for the weights. In the next time-step, plugging the variational distribution q(zt | x≤t) into Eq. 
(4) yields the next distribution over recurrent states q̃(st | x≤t). For this, the expected recurrent state ht−1 is required. 2q̃(st−1 | x<t) is the distribution obtained by transforming the previous zt−1 ∼ q(zt−1|x<t) through the RNN. It can be expressed analytically using the Kronecker δ to compare whether the stochastic variable st−1 equals the output of the RNN: q̃(st−1 | x<t) ∝ ∫ δ(st−1 − φGRU(zt−1,ht−2))q(zt−1 | xt−1, λt−1)dzt−1. 3the ω adjusts for using samples from q̃(st−1 | x<t) when marginalizing over ω(st−1,xt)q̃(st−1 | x<t) We approximate the update using the same k samples (and therefore the same weights) as in Eq. (5). ht−1 = E[st−1] = ∫ st−1 q(st−1 | x≤t)dst−1 ≈ k∑ i ω (i) t s (i) t−1. (7) A schematic view of the generative and inference model of VDM is shown in Fig. 2. In summary, the inference model of VDM alternates between Eqs. (4) to (7). Latent states are sampled from the posterior approximation of the previous time-step and transformed by Eq. (4) into samples of the recurrent state of the RNN. These are then combined with the new observation xt to produce the next variational posterior Eq. (5) and the expected recurrent state is updated (Eq. (7)). These are then used in Eq. (4) again. Approximating the marginalization in Eq. (3) with a single sample, recovers the inference model of VRNN (Chung et al., 2015), and fails in modeling multi-modal dynamics as shown in Fig. 3. In comparison, VDM’s approximate marginalization over the recurrent states with multiple samples succeeds in modeling multi-modal dynamics. Variational objective. We develop an objective to optimize the variational parameters of VDM φ = [φtra, φdec, φGRU, φinf ]. The evidence lower bound (ELBO) at each time step is LELBO(x≤t, φ) := 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(xt | zt,ht−1 = s(i)t−1) ] + 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(zt | ht−1 = s(i)t−1) q(zt | s(i)t−1,xt) ] − 1 k k∑ i ω(s (i) t−1,xt) [ logω(s (i) t−1,xt) +C ] (8) Claim 1. The ELBO in Eq. (8) is a lower bound on the log evidence log p(xt | x<t), log p(xt | x<t) ≥ LELBO(x≤t, φ), (see proof in Appendix B) . (9) In addition to the ELBO, the objective of VDM has two regularization terms, LVDM(φ) = T∑ t=1 Epdata [−LELBO(x≤t, φ)− ω1Lpred(x≤t, φ)] + ω2Ladv(x≤t, φ) . (10) In an ablation study in Appendix E, we compare the effect of including and excluding the regularization terms in the objective. VDM is competitive without these terms, but we got the strongest results by setting ω1,2 = 1 (this is the only nonzero value we tried. This hyperparameter could be tuned even further.) The first regularization term Lpred, encourages the variational posterior (from the previous time step) to produce samples that maximize the predictive likelihood, Lpred(x≤t, φ) = logEq(st−1|x<t) [p(xt | st−1,x<t)] ≈ log 1 k k∑ i p(xt | s(i)t−1) . (11) This regularization term is helpful to improve the prediction performance, since it depends on the predictive likelihood of samples, which isn’t involved in the ELBO. The second optional regularization term Ladv (Eq. (12)) is based on ideas from hybrid adversarial-likelihood training (Grover et al., 2018; Lucas et al., 2019). These training strategies have been developed for generative models of images to generate sharper samples while avoiding “mode collapse”. We adapt these ideas to generative models of dynamics. The adversarial term Ladv uses a forward KL-divergence, which enables “quality-driven training” to discourage probability mass in spurious areas. 
Ladv(x≤t, φ) = DKL(p(xt | x<t)‖pD(xt | x<t)) = E [log p(xt | x<t)− log pD(xt | x<t)] (12) The expectation is taken w.r.t. p(xt | x<t). The true predictive distribution pD(xt | x<t) is unknown. Instead, we can train the generator of a conditional GAN (Mirza & Osindero, 2014), while assuming an optimal discriminator. As a result, we optimize Eq. (12) in an adversarial manner, conditioning on x<t at each time step. Details about the discriminator are in Appendix G. Stochastic cubature approximation (SCA). The variational family of VDM is defined by a number of modeling choices, including the factorization and marginal consistency assumptions of Eq. (3), the parametrization of the transition and inference networks Eqs. (4) and (5), and the choice of weighting function ω(·). It is also sensitive to the choice of sampling method which we discuss here. In principle, we could use Monte-Carlo methods. However, for a relatively small number of samples k, Monte-Carlo methods don’t have a mechanism to control the quality of samples. We instead develop a semi-stochastic approach based on the cubature approximation (Wan & Van Der Merwe, 2000; Wu et al., 2006; Arasaratnam & Haykin, 2009), which chooses samples more carefully. The cubature approximation proceeds by constructing k = 2d+1 so-called sigma points, which are optimally spread out on the d-dimensional Gaussian with the same mean and covariance as the distribution we need samples from. In SCA, the deterministic sigma points are infused with Gaussian noise to obtain stochastic sigma variables. A detailed derivation of SCA is in Appendix D. We use SCA for various reasons: First, it typically requires fewer samples than Monte-Carlo methods because the sigma points are carefully chosen to capture the first two moments of the underlying distribution. Second, it ensures a persistence of the mixture components; when we resample, we sample another nearby point from the mixture component and not an entirely new location. 4 EVALUATION AND EXPERIMENTS In this empirical study, we evaluate VDM’s ability to model multi-modal dynamics and show its competitive forecasting performance in various domains. We first introduce the evaluation metrics and baselines. Experiments on synthetic data demonstrate that VDM is truly multi-modal thereby supporting the modeling choices of Section 3, especially for the inference model. Then, experiments on real-world datasets with challenging multi-modal dynamics show the benefit of VDM over stateof-the art (deep) probabilistic time-series models. Evaluation metrics. In the experiments, we always create a training set, a validation set, and a test set. During validation and test, each trajectory is split into two parts; initial observations (given to the models for inference) and continuations of the trajectories (to be predicted and not accessible to the models). The inference models are used to process the initial observations and to infer latent states. These are then processed by the generative models to produce forecasts. We use 3 criteria to evaluate these forecasts (i) multi-steps ahead prediction p(xt+1:t+τ | x1:t), (ii) one-step-ahead prediction p(xt+1 | x1:t), and (iii) empirical Wasserstein distance. As in other work (Lee et al., 2017; Bhattacharyya et al., 2018; 2019), (i) and (ii) are reported in terms of negative log-likelihood. While the predictive distribution for one-step-ahead prediction is in closed-form, the long-term forecasts have to be computed using samples. 
For each ground truth trajectory x we generate n = 1000 forecasts x̂i given initial observations from the beginning of the trajectory NLL = − log ( 1 n n∑ i 1√ 2π exp ( − (x̂i − x) 2 2 )) , (13) This evaluates the predictive accuracy but neglects a key aspect of multi-modal forecasts – diversity. We propose a new evaluation metric, which takes both diversity and accuracy of predictions into account. It relies on computing the Wasserstein distance between two empirical distributions P , Q W (P,Q) = inf π ( 1 n n∑ i ‖(xi − yπ(i)‖2 ) , (14) where x and y are the discrete samples of P and Q, and π denotes all permutations (Villani, 2008). To use this as an evaluation measure for multi-modal forecasts, we do the following. We select n samples from the test set with similar initial observations. If the dynamics in the data are multimodal the continuations of those n trajectories will be diverse and this should be reflected in the forecasts. For each of the n samples, the model generates 10 forecasts and we get n groups of samples. With Eq. (14) the empirical W-distance between the n true samples, and each group of generated samples can be calculated. The averaged empirical W-distance over groups evaluates how well the generated samples match the ground truth. Repeating this procedure with different initial trajectories evaluates the distance between the modeled distribution and the data distribution. Baselines. We choose baselines from three classes of models. Two stochastic recurrent models are variational recurrent neural network (VRNN) (Chung et al., 2015) and auto-encoding sequential Monte Carlo (AESMC) (Le et al., 2018). VRNN has a similar but more powerful generative model than VDM, and AESMC uses SMC to achieve a tighter lower bound. But compared to VDM, both methods have a less powerful inference model which limits their capacity to capture multi-modal distributions. The third baseline is a deep SSM. The recurrent Kalman network (RKN) (Becker et al., 2019) models the latent space with a locally linear SSMs, which makes the prediction step and update step analytic (as for Kalman filters (Kalman, 1960)). A final baseline is the conditional flow variational autoencoder (CF-VAE) (Bhattacharyya et al., 2019), which uses conditional normalizing flows to model a global prior for the future continuations and achieves state-of-the-art performances. To investigate the necessity of taking multiple samples in the VDM inference model, we also compared to VDM(k = 1) which uses only a single sample in Eq. (5). VDM(k = 1) has a simpler generative model than VRNN (it considers no autoregressive feedback of the observations x), but the same inference model. More ablations for the modeling choices of VDM are in Appendix H. For fair comparison, we fix the dimension of the latent variables zt and ht to be the same for VDM, AESMC, and VRNN which have the same resulting model size (except for the additional autoregressive feedback in VRNN). AESMC and VDM always use the same number of particles/samples. RKN does not have recurrent states, so we choose a higher latent dimension to make model size comparable. In contrast, CF-VAE has only one global latent variable which needs more capacity and we make it higher-dimensional than zt. Details for each experiment are in Appendix G. Synthetic data with multi-modal dynamics. We generate synthetic data with two dimensions and four modes and compare the performance of VDM with 9 samples (Fig. 3, left), VDM with a single sample (Fig. 
3, middle), and AESMC using 9 particles (Fig. 3, right). Since variational inference is known to try to match the aggregated posterior with the predictive prior (Tomczak & Welling, 2018), it is instructive to fit all three models and to look at their predictive prior p(z2|x≤1) and the aggregated posterior p(z2|D). Because of the multi-modal nature of the problem, all 3 aggregated posteriors are multi-modal, but only VDM(k = 9) learns a multi-modal predictive prior (thanks to its choice of variational family). Although AESMC achieves a good match between the prior and the aggregated posterior, the predictive prior does not clearly separate into different modes. In contrast, the inference model of VDM successfully uses the weights (Eq. (6)), which contain information about the incoming observation, to separate the latent states into separate modes. Stochastic Lorenz attractor. The Lorenz attractor is a system governed by ordinary differential equations. We add noise to the transition and emission function to make it stochastic (details in Appendix F.1). Under certain parameter settings it is chaotic – even small errors can cause considerable differences in the future. This makes forecasting its dynamics very challenging. All models are trained and then tasked to predict 90 future observations given 10 initial observations. Fig. 4 illustrates qualitatively that VDM (Fig. 4b) and AESMC (Fig. 4c) succeed in modeling the chaotic dynamics of the stochastic Lorenz attractor, while CF-VAE (Fig. 4d) and VRNN (Fig. 4e) miss local details, and RKN (Fig. 4f) which lacks the capacity for stochastic transitions does not work at all. VDM achieves the best scores on all metrics (Table 1). Since the dynamics of the Lorenz attractor are governed by ordinary differential equations, the transition dynamics at each time step are not obviously multi-modal, which explains why all models with stochastic transitions do reasonably well. Next, we will show the advantages of VDM on real-world data with multi-modal dynamics. Taxi trajectories. The taxi trajectory dataset involves taxi trajectories with variable lengths in Porto, Portugal. Each trajectory is a sequence of two dimensional locations over time. Here, we cut the trajectories to a fixed length of 30 to simplify the comparison (details in Appendix F.2). The task is to predict the next 20 observations given 10 initial observations. Ideally, the forecasts should follow the street map (though the map is not accessible to the models). The results in Table 2 show that VDM outperforms the other sequential latent variable models in all evaluations. However, it turns out that for multi-step forecasting learning global structure is advantageous, and CF-VAE which is a global latent variable model, achieves the highest results. However, this value doesn’t match the qualitative results in Fig. 1. Since CF-VAE has to encode the entire structure of the trajectory forcast into a single latent variable, its predictions seem to average over plausible continuations but are locally neither plausible nor accurate. In comparison, VDM and the other models involve a sequence of latent variables. As the forecasting progresses, the methods update their distribution over latest states, and the impact of the initial observations becomes weaker and weaker. As a result, local structure is captured more accurately. While the forecasts are plausible and can be highly diverse, they potentially evolve into other directions than the ground truth. 
For this reason, their multi-step prediction results are worse in terms of log-likelihood. That’s why the empirical W-distance is useful to complement the evaluation of multi-modal tasks. It reflects that the forecasts of VDM are diverse and plausible. Additionally, we illustrate the predictive prior p(zt|x<t) at different time steps in Fig. 5. VDM(k = 13) learns a multi-modal predictive prior, which VDM(k = 1) and AESMC approximate it with an uni-modal Gaussian. U.S. pollution data. In this experiment, we study VDM on the U.S. pollution dataset (details in Appendix F.3). The data is collected from counties in different states from 2000 to 2016. Each observation has 12 dimensions (mean, max value, and air quality index of NO2, O3, SO2, and O3). The goal is to predict monthly pollution values for the coming 18 months, given observations of the previous six months. We ignore the geographical location and time information to treat the development tendency of pollution in different counties and different times as i.i.d.. The unknown context information makes the dynamics multi-modal and challenging to predict accurately. Due to the small size and high dimensionality of the dataset, there are not enough samples with very similar initial observations. Thus, we cannot evaluate empirical W-distance in this experiment. In multi-step predictions and one-step predictions, VDM outperforms the other methods. NBA SportVu data. This dataset4 of sequences of 2D coordinates describes the movements of basketball players and the ball. We extract the trajectories and cut them to a fixed length of 30 to simplify the comparisons (details in Appendix F.4). The task is to predict the next 20 observations given 10 initial observations. Players can move anywhere on the court and hence their movement is less structured than the taxi trajectories which are constrained by the underlying street map. Due to this, the initial movement patters are not similar enough to each other to evaluate empirical Wdistance. In multi-step and one-step predictions, VDM outperforms the other baselines (Table 4). Fig. 6 illustrates qualitatively that VDM (Fig. 6b) and CF-VAE (Fig. 6d) succeed in capturing the multi-modal dynamics. The forecasts of AESMC (Fig. 6c) are less plausible (not as smooth as data), and VRNN (Fig. 6e) and RKN (Fig. 6f) fail in capturing the multi-modality. 5 CONCLUSION We have presented variational dynamic mixtures (VDM), a sequential latent variable model for multi-modal dynamics. The main contribution is a new variational family. It propagates multiple samples through an RNN to parametrize the posterior approximation with a mixture density network. Additionally, we have introduced the empirical Wasserstein distance for the evaluation of multimodal forecasting tasks, since it accounts for forecast accuracy and diversity. VDM succeeds in learning challenging multi-modal dynamics and outperforms existing work in various applications. 4A version of the dataset is available at https://www.stats.com/data-science/ A SUPPLEMENTARY TO WEIGHTING FUNCTION In this Appendix we give intuition for our choice of weighting function Eq. (6). Since we approximate the integrals in Eqs. 
(3) and (7) with samples from q̃(st−1 | x<t) 5 instead of samples from q(st−1 | x≤t), importance sampling tells us that the weigths should be ω(st−1,xt) = q(st−1 | x≤t) q̃(st−1 | x<t) = q(xt | st−1,x<t) q(xt | x<t) q̃(st−1 | x<t) q̃(st−1 | x<t) = q(xt | st−1,x<t) q(xt | x<t) ∝ q(xt | st−1,x<t) (15) This is consistent with out earlier definition of q(st−1 | x≤t) = ω(st−1,xt)q̃(st−1 | x<t). The weights are proportional to the likelihood of the variational model q(xt | st−1,x<t). We choose to parametrize it using the likelihood of the generative model p(xt | ht−1 = st−1) and get ω (i) t = ω(s (i) t−1,xt)/k := 1(i = argmax j p(xt | ht−1 = s(j)t−1)). (16) With this choice of the weighting function, only the mixture component with the highest likelihood is selected to be in charge of modeling the current observation xt. As a result, other mixture components have the capacity to focus on different modes. This helps avoid the effect of mode-averaging. An alternative weight function is given in Appendix H. B SUPPLEMENTARY TO LOWER BOUND Claim. The ELBO in Eq. (8) is a lower bound on the log evidence log p(xt | x<t), log p(xt | x<t) ≥ LELBO(x≤t, φ) . (17) Proof. We write the data evidence as the double integral over the latent variables zt, and z<t. log p(xt | x<t) = log ∫∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)p(z<t | x<t)dztdz<t (18) We multiply the posterior at the previous time step p(z<t | x<t) with the ratio of the approximated posterior q(z<t|x<t)q(z<t|x<t) and the ratio f(a,b) f(a,b) , where f is any suitable function of two variables a and b. The following equality holds, since the ratios equal to one. log p(xt | x<t) = log ∫ f(a,b) f(a,b) q(z<t | x<t) q(z<t | x<t) p(z<t | x<t) ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dztdz<t (19) We move the integral over z<t with respect to f(a,b)q(z<t | x<t) out of the log operation with applying the Jensen’s inequality. log p(xt | x<t) ≥ Ef(a,b)q(z<t|x<t) [ log ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dzt ] (20) − Ef(a,b)q(z<t|x<t) [ log f(a,b) + log q(z<t | x<t) p(z<t | x<t) ] We introduce the variational posterior q(zt | z<t,x≤t), and apply Jensen’s inequality to replace the intractable integral log ∫ p(xt | z≤t,x<t)p(zt | z<t,x<t)dzt with its lower bound. log p(xt | x<t) ≥ Ef(a,b)q(z<t|x<t) [ Eq(zt|z<t,x≤t) [ log p(xt | z≤t,x<t)p(zt | z<t,x<t) q(zt | z<t,x≤t) ]] − Ef(a,b)q(z<t|x<t) [ log f(a,b) + log q(z<t | x<t) p(z<t | x<t) ] . (21) 5The ∼ just helps to visually distinguish the two distributions that appear in the main text. The expectation with respect to f(a,b)q(z<t | x<t) is approximated with samples. Instead of resampling the entire history, samples from previous time steps are reused (they have been aggregated by the RNN) and we sample according to Eq. (4). We plugg in the weighting function ω(s(i)t−1,xt) for f(a,b). The term log q(z<t|x<t)p(z<t|x<t) is not affected by the incoming observation xt and can be treated as a constant. In this step, we plug in our generative model and inference model as they are described in the main text for p and q. The conditional independence assumptions can be read of Fig. 2. In the generative model ht−1 and in the inference model st−1 summarize the dependencies of zt on the previous latent variables z<t and observations x<t. In other words, we assume zt is conditionally independent on z<t and x<t given s (i) t−1 in the inference model (or given ht−1 in the generative model). 
log p(xt | x<t) ≥ 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(xt | zt,ht−1 = s(i)t−1) ] + 1 k k∑ i ω(s (i) t−1,xt)Eq(zt|s(i)t−1,xt) [ log p(zt | ht−1 = s(i)t−1) q(zt | s(i)t−1,xt) ] − 1 k k∑ i ω(s (i) t−1,xt) [ logω(s (i) t−1,xt) +C ] (22) C ALGORITHMS OF GENERATIVE MODEL AND INFERENCE MODEL Algorithm 1 Generative model Inputs: [µz,τ , σ2z,τ ],hτ−1 Outputs: xτ+1:T zτ ∼ N (µz,τ , σ2z,τ I) hτ = φ GRU(zτ ,hτ−1) for t = τ + 1 : T do [µ0,t, σ 2 0,t] = φ tra(ht−1) zt ∼ N (µ0,t, σ20,tI) ht = φ GRU(zt,ht−1) [µx,t, σ 2 x,t] = φ dec(zt,ht−1) xt ∼ N (µx,t, σ2x,tI) end for Algorithm 2 Inference model Inputs: x1:τ ,h0 Outputs: [µz,1:τ , σ2z,1:τ ],hτ−1 [µz,1, σ 2 z,1] = φ inf (h0,x1) for t = 2 : τ do z (i) t−1 ∼ N (µz,t−1, σ2z,t−1I) s (i) t−1 = φ GRU(z (i) t−1,ht−2) [µ (i) z,t, σ (i)2 z,t ] = φ inf (s (i) t−1,xt) ω (i) t := 1(i = argmaxj p(xt | ht−1 = s (j) t−1)) [µz,t, σ 2 z,t] = ∑k i ω (i) t N (µ (i) z,t, σ (i)2 z,t I) ht−1 ≈ ∑k i ω (i) t s (i) t−1 end for D SUPPLEMENTARY TO STOCHASTIC CUBATURE APPROXIMATION Cubature approximation. The cubature approximation is widely used in the engineering community as a deterministic method to numerically integrate a nonlinear function f(·) of Gaussian random variable z ∼ N (µz, σ2zI), with z ∈ Rd. The method proceeds by constructing 2d+1 sigma points z(i) = µz+σzξ(i). The cubature approximation is simply a weighted sum of the sigma points propagated through the nonlinear function f(·), ∫ f(z)N (z | µz, σ2zI)dz ≈ 2d+1∑ i=1 γ(i)f(z(i)) . (23) Simple analytic formulas determine the computation of weights γ(i) and the locations of ξ(i). γ(i) = { 1 2(n+κ) , i = 1, ..., 2n κ n+κ , i = 0 ξ(i) = √ n+ κei , i = 1, ..., n − √ n+ κei−n , i = n+ 1, ..., 2n 0 , i = 0 , (24) where κ is a hyperparameter controlling the spread of the sigma points in the n-dimensional sphere. Further ei represents a basis in the n-dimensional space, which is choosen to be a unit vector in cartesian space, e.g. e1 = [1, 0, ..., 0]. Stochastic cubature approximation. In SCA, we adopt the computation of ξ(i) in Eq. (24), and infuse the sigma points with standard Gaussian noise ∼ N (0, I) to obtain stochastic sigma variables s(i) = µz + σz(ξ(i) + ). We choose κ = 0.5 to set the weights γ(i) equally. E SUPPLEMENTARY TO ABLATION STUDY OF REGULARIZATION TERMS We investigate the effect of the regularization terms using the synthetic data from Fig. 3. We can see in Table 5, VDM(k = 9) can be trained successfully withLELBO only, and both regularization terms improve the performance (negative log-likelihood of multi-steps ahead prediction), while VDM(k = 1) doesn’t work whatever the regularization terms. Additionally, we tried to train the model only with the regularization terms (each separate or together) but these options diverged during training. F SUPPLEMENTARY TO EXPERIMENTS SETUP F.1 STOCHASTIC LORENZ ATTRACTOR SETUP Lorenz attractor is a system of three ordinary differential equations: dx dt = σ(y − x), dy dt = x(ρ− z)− y, dz dt = xy − βz , (25) where σ, ρ, and β are system parameters. We set σ = 10, ρ = 28 and β = 8/3 to make the system chaotic. We simulate the trajectories by RK4 with a step size of 0.01. To make it stochastic, we add process noise to the transition, which is a mixture of two Gaussians 0.5N (m0,P) + 0.5N (m2,P), where m0 = [ 0 1 0 ] , m1 = [ 0 −1 0 ] , P = [ 0.06 0.03 0.01 0.03 0.03 0.03 0.01 0.03 0.05 ] . (26) Besides, we add a Gaussian noise with zero mean and diagonal standard deviation [0.6, 0.4, 0.8] as the observation noise. 
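To make the sigma-point construction of Eq. (24), its stochastic variant, and the weighting step of Algorithm 2 concrete, a simplified NumPy sketch of one inference step is given below. The callables `phi_gru`, `phi_inf`, and `log_lik` (returning log p(x_t | h_{t-1} = s)) are assumed to be given, and all names are illustrative rather than taken from the authors' implementation.

```python
# Simplified sketch: stochastic cubature sampling (Appendix D) inside one step of Algorithm 2.
import numpy as np

def stochastic_sigma_variables(mu, sigma, rng, kappa=0.5):
    d = mu.shape[-1]
    xi = np.concatenate([np.zeros((1, d)),
                         np.sqrt(d + kappa) * np.eye(d),
                         -np.sqrt(d + kappa) * np.eye(d)], axis=0)   # 2d+1 sigma points, Eq. (24)
    eps = rng.standard_normal(xi.shape)                              # noise infusion gives SCA
    return mu + sigma * (xi + eps)

def inference_step(mu_prev, sigma_prev, h_tm2, x_t, phi_gru, phi_inf, log_lik, rng):
    z_samples = stochastic_sigma_variables(mu_prev, sigma_prev, rng)     # z_{t-1}^{(i)}
    s = np.stack([phi_gru(z, h_tm2) for z in z_samples])                 # s_{t-1}^{(i)}, Eq. (4)
    weights = np.zeros(len(s))
    weights[int(np.argmax([log_lik(x_t, s_i) for s_i in s]))] = 1.0      # delta weighting, Eq. (6)
    components = [phi_inf(s_i, x_t) for s_i in s]                        # (mu, sigma) per component, Eq. (5)
    h_tm1 = (weights[:, None] * s).sum(axis=0)                           # expected recurrent state, Eq. (7)
    mu_t, sigma_t = components[int(np.argmax(weights))]                  # the selected mixture component
    return mu_t, sigma_t, h_tm1
```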
Totally, we simulate 5000 sequences as training set, 200 sequences as validation set, and 800 sequences as test set. For evaluation of Wasserstein distance, we simulate 10 groups of sequences additionally. Each group has 100 sequences with similar initial observations. F.2 TAXI TRAJECTORIES SETUP The full dataset is very large and the length of trajectories varies. We select the trajectories inside the Porto city area with length in the range of 30 and 45, and only extract the first 30 coordinates of each trajectory. Thus we obtain a dataset with a fixed sequence length of 30. We split it into the training set of size 86386, the validation set of size 200, and the test set of size 10000. F.3 U.S. POLLUTION DATA SETUP The U.S. pollution dataset consists of four pollutants (NO2, O3, SO2 and O3). Each of them has 3 major values (mean, max value, and air quality index). It is collected from counties in different states for every day from 2000 to 2016. Since the daily measurements are too noisy, we firstly compute the monthly average values of each measurement, and then extract non-overlapped segments with the length of 24 from the dataset. Totally we extract 1639 sequences as training set, 25 sequences as validation set, and 300 sequences as test set. F.4 NBA SPORTVU DATA SETUP We use a sliding window of the width 30, and the stride 30 to cut the long sequences to short sequences of a fixed length 30. We split them into the training set of size 8324, the validation set of size 489, and the test set of size 980. G IMPLEMENTATION DETAILS Here, we provide implementation details of VDM models used across the three datasets in the main paper. VDM consists of • encoder: embed the first observation x0 to the latent space as the initial latent state z0. • transition network: propagate the latent states zt. • decoder: map the latent states zt and the recurrent states ht to observations xt. • inference network: update the latent states zt given observations xt. • latent GRU: summarize the historic latent states z≤t in the recurrent states ht. • discriminator: be used for adversarial training. The optimizer is Adam with the learning rate of 1e − 3. In all experiments, the networks have the same architectures but different sizes. The model size depends on observation dimension dx, latent state dimension dz, and recurrent state dimension dh. The number of samples used at each time step in the training is 2dz +1. If the model output is variance, we use the exponential of it to ensure its non-negative. • Encoder: input size is dx; 3 linear layers of size 32, 32 and 2dz, with 2 ReLUs. • Transition network: input size is dh; 3 linear layers of size 64, 64, and 2dz, with 3 ReLUs. • Decoder: input size is dh + dz; 3 linear layers of size 32, 32 and 2dx, with 2 ReLUs. • Inference network: input size is dh + dx; 3 linear layers of size 64, 64, and 2dz, with 3 ReLUs. • Latent GRU: one layer GRU of input size dz and hidden size dh • Discriminator: one layer GRU of input size dx and hidden size dh to summarize the pre- vious observations as the condition, and a stack of 3 linear layers of size 32, 32 and 1, with 2 ReLUs and one sigmoid as the output activation, whose input size is dh + dx. Stochastic Lorenz attractor. Observation dimension dx is 3, latent state dimension dz is 6, and recurrent state dimension dh is 32. Taxi trajectories. Observation dimension dx is 2, latent state dimension dz is 6, and recurrent state dimension dh is 32. U.S. 
pollution data (https://www.kaggle.com/sogun3/uspollution). Observation dimension dx is 12, latent state dimension dz is 8, and recurrent state dimension dh is 48.
NBA SportVu data. Observation dimension dx is 2, latent state dimension dz is 6, and recurrent state dimension dh is 32.
The number of parameters of each model in the different experiments is given in Table 6.
H ADDITIONAL EVALUATION RESULTS
We evaluate more variants of VDM in the chosen experiments to investigate the different choices of sampling methods (Monte Carlo and SCA) and weighting functions (Eqs. (27) and (28)). In addition to Eq. (27) described in the main text, we define one other choice in Eq. (28):
ω_t^{(i)} = ω(s_{t-1}^{(i)}, x_t)/k := 1(i = argmax_j p(x_t | h_{t-1} = s_{t-1}^{(j)}))   (27)
ω_t^{(i)} = ω(s_{t-1}^{(i)}, x_t)/k := 1(i = j ∼ Cat(· | ω_1, . . . , ω_k)),  ω_j ∝ p(x_t | h_{t-1} = s_{t-1}^{(j)})   (28)
We define the weighting function as an indicator function: in Eq. (27) the non-zero component is set by selecting the sample that achieves the highest likelihood, and in Eq. (28) the non-zero index is sampled from a categorical distribution with probabilities proportional to the likelihoods. We refer to the first choice (Eq. (27)) as the δ-function variant and to the second choice (Eq. (28)) as the categorical-distribution variant. In addition, in VDM-Net, we evaluate the performance of replacing the closed-form inference of the weighting function with an additional inference network. Table 7 lists the choices used in the different variants. All models are trained with L_ELBO and L_pred.
H.1 STOCHASTIC LORENZ ATTRACTOR
H.2 TAXI TRAJECTORIES
H.3 U.S. POLLUTION DATA
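As a supplement to the component list in Appendix G above, the following is a minimal PyTorch sketch of the VDM modules with the stated layer sizes. The helper `mlp`, the class name, and the example dimensions (taken from the taxi setting) are illustrative; the discriminator (a one-layer GRU plus a three-layer MLP with a sigmoid output) is omitted for brevity.

```python
# Minimal sketch of the VDM components listed in Appendix G (names are illustrative).
import torch
import torch.nn as nn

def mlp(sizes, n_relu):
    # Stack of linear layers, with a ReLU after each of the first `n_relu` layers.
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < n_relu:
            layers.append(nn.ReLU())
    return nn.Sequential(*layers)

class VDMModules(nn.Module):
    def __init__(self, dx, dz, dh):
        super().__init__()
        self.encoder = mlp([dx, 32, 32, 2 * dz], n_relu=2)         # first observation -> initial latent state params
        self.transition = mlp([dh, 64, 64, 2 * dz], n_relu=3)      # h_{t-1} -> prior over z_t
        self.decoder = mlp([dh + dz, 32, 32, 2 * dx], n_relu=2)    # (z_t, h_{t-1}) -> observation params
        self.inference = mlp([dh + dx, 64, 64, 2 * dz], n_relu=3)  # (s_{t-1}, x_t) -> posterior component
        self.latent_gru = nn.GRUCell(dz, dh)                       # (z_t, h_{t-1}) -> h_t

    @staticmethod
    def split(params):
        mu, log_var = params.chunk(2, dim=-1)
        return mu, torch.exp(log_var)   # exponential keeps the predicted variance non-negative

modules = VDMModules(dx=2, dz=6, dh=32)   # e.g. taxi trajectories: dx=2, dz=6, dh=32
```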
1. What is the focus of the paper regarding time series forecasting?
2. What are the strengths of the proposed approach, particularly in addressing the research issue?
3. What are the weaknesses of the paper regarding clarity and explanations?
4. How does the reviewer assess the novelty and effectiveness of the proposed method?
5. Are there any concerns or suggestions regarding the paper's content?
Review
Review
This paper presented variational dynamic mixtures (VDM), a deep probabilistic model for time series forecasting. The research issue, illustrated by the taxi trajectory prediction problem, is addressed. Some comments are provided.
Pros:
• A new solution, a mixture density network used as a generative model with latent states and multinomial observations, was proposed.
• Detailed experiments were provided.
• A new evaluation metric was introduced.
Cons:
• A number of notations and variables were not clearly defined, which makes the paper confusing to read.
• A clear algorithm or workflow for the complicated system was missing.
• Some descriptions were not clear.
ICLR
Title Sample-efficient policy learning in multi-agent Reinforcement Learning via meta-learning Abstract To gain high rewards in muti-agent scenes, it is sometimes necessary to understand other agents and make corresponding optimal decisions. We can solve these tasks by first building models for other agents and then finding the optimal policy with these models. To get an accurate model, many observations are needed and this can be sample-inefficient. What’s more, the learned model and policy can overfit to current agents and cannot generalize if the other agents are replaced by new agents. In many practical situations, each agent we face can be considered as a sample from a population with a fixed but unknown distribution. Thus we can treat the task against some specific agents as a task sampled from a task distribution. We apply meta-learning method to build models and learn policies. Therefore when new agents come, we can adapt to them efficiently. Experiments on grid games show that our method can quickly get high rewards. 1 INTRODUCTION Applying Reinforcement Learning (RL) to multi-agent scenes requires carefully consideration about the influence of other agents. We cannot simply treat other agents as part of the environment and apply independent RL methods (Lanctot et al., 2017) if the actions of them has impact on the payoff of the agent to be trained. For example, consider the two-person ultimatum bargaining game, where two players take part in. One player propose a deal to split a fixed amount of money for them two and the other player decides to accept it or not. If the second player accepts the proposal, they split the money, but if the proposal is refused, they both get zero. Experimental results (Güth et al., 1982) show that in actual life, the second player makes the decision according to whether he or she judge the final result fair, rather than makes the obvious rational decision. Thus, the first player needs to predict how the second player will react so as to make the proposal acceptable. In order to exploit the other agents and find the corresponding optimal policy, we need to understand these agents. Here in this paper, we call all the other agents “opponents” to distinguish our agent from them, even if they may have cooperative relationship with our agent. For simplicity, we only consider tasks with only one opponent. Extension to tasks with more opponents is straightforward. A general way to exploit an opponent is to build a model for it from observations. This model can characterize any needed feature of the opponent, such as next action or the final goal. Such a model can make predictions for the opponent and thus turns the two-agent task into a simple-agent decision making problem. Then we can apply various RL methods to solve this problem. It is necessary that we need to have an accurate model for the opponent to help make decision. Previous works (He et al., 2016; Raileanu et al., 2018) propose some methods to model the opponent. Generally, it requires many observations to get a precise model for the opponent. This may cost many iterations to act with the opponent. What’s more, even if we can precisely model the opponent, there exists a main drawback of above process that the performance of the learned policy has no guarantee for any other opponent. Things are even worse if opponents have their private types which are unknown for us. New opponents with different types can have different policies or even different payoffs. 
Therefore, it seems that when a new opponent came, we have to learn a policy from the beginning. In some practical situations, the whole opponents follow a distributions over all these possible types. Let’s come back to the ultimatum bargaining game. Bahry & Wilson (2006) shows that people with different ethnicity may have different standards for fairness. Thus if we assume the type for player 2 to be its judgment for fairness, there can be a distribution for types dependent on the ethnic distribution. Given that opponents follows a distribution, it is possible that we can employ some given opponents to help us speed up the process of opponent modeling and policy improving for the current opponent. If we consider the policy learning against a specific opponent as a task, our goal can be considered as training a policy on various tasks so that it can efficiently adapt to a good policy on a new task with few training samples. This is exactly a meta-learning problem. We employ Model-Agnostic MetaLearning (MAML)(Finn et al., 2017) to conduct meta-learning. Rabinowitz et al. (2018) applied meta-learning to understand opponents, but this work doesn’t address the policy improvement for the agent to be trained. We apply meta-learning to opponent modeling and policy learning separately while training the two meta-learners jointly. Then we use the meta-learners to initialize the model and policy for the new opponent. Experimental results show that the agent can adapt to the new opponent with a small number of interactions with the opponent. 2 PRELIMINARY In this section, we introduce some preliminaries of our work. We formalize our task based on stochastic games(Littman, 1994), which is a general framework for multi-agent problems, and Bayesian games(Harsanyi, 1967), which formalize the incomplete information of players. Next we introduce Model-Agnostic Meta-Learning (MAML), a meta-learning algorithm that is applicable to various gradient-based methods. Our approach employs MAML as the meta-learning method to train across tasks. 2.1 FORMALIZATION We first introduce stochastic games. Formally, a stochastic game, with N players, is defined as 〈N,S, {A1, ..., AN}, T, {R1, ..., RN}〉, where S is the set of states, Ai is the action set for player i, Ri : S×A1×...×AN → ∆([0, 1]) is the reward function for player i (∆(C) denotes the probability distributions over a set C), and T : S × A1 × ... × AN → ∆(S) is the transition function. The goal for player i is to maximize its own cumulative reward E[ ∑T t=0 γ tri,t], where ri,t is the reward player i gets at time step t and γ is the discounting factor. πi : S → ∆(Ai) denotes the policy for player i, which maps each state to a distribution over actions. Our agent takes the role of one player. With out loss of generality, we assume that our agent is player 1. In this paper we only consider two-player stochastic games. Bayesian games further introduce types for players. Since our agent takes the role of player 1, the different types of player 1 can be considered as different states. We only consider that player 2 has a set of types Θ. A specific opponent playing player 2 has its own θ ∈ Θ which is unknown to player 1. There exists a prior distribution p(θ) over the population of opponents. Under this setting, each θ ∈ Θ has its corresponding reward function Ri(s, a1, a2, θ) for each i ∈ {1, 2}, s ∈ S, a1 ∈ A1 and a2 ∈ A2. 
Bayesian games further introduce types for players. Since our agent takes the role of player 1, the different types of player 1 can be folded into the states, so we only consider the case where player 2 has a set of types Θ. A specific opponent playing player 2 has its own type θ ∈ Θ, which is unknown to player 1, and there exists a prior distribution p(θ) over the population of opponents. Under this setting, each θ ∈ Θ has its corresponding reward function $R_i(s, a_1, a_2, \theta)$ for each $i \in \{1, 2\}$, $s \in S$, $a_1 \in A_1$ and $a_2 \in A_2$. Therefore, combining the above concepts, we formalize our tasks as Bayesian stochastic games $\langle N, S, \{A_1, \dots, A_N\}, T, \{R_1, \dots, R_N\}, \Theta, p\rangle$, where $R_1, \dots, R_N$ are the modified reward functions depending on θ ∈ Θ and p is the prior distribution over Θ for the types of player 2.
2.2 MODEL-AGNOSTIC META-LEARNING Meta-learning aims to quickly train a model for a new task with the help of data from many similar tasks. Formally, we denote by $\{\mathcal{T}_i\}_{i=1}^{N_1}$ the given $N_1$ tasks used for training; $N_2$ more new tasks $\{\mathcal{T}_i\}_{i=N_1+1}^{N_1+N_2}$ are used for testing. In the meta-learning setting, we assume that the tasks are sampled from a distribution $p(\mathcal{T})$ over all possible tasks. For each task $\mathcal{T}_i$, we want to identify a mapping $f_i$ that maps each input x to its corresponding output y. Model-Agnostic Meta-Learning (MAML) is one of the best-known meta-learning algorithms and can be applied to any model that is trained with gradient descent. Denote the parameters for each $f_i$ as $\psi'_i$. The loss of $f_{\psi'_i}$ on task $\mathcal{T}_i$ is denoted $\mathcal{L}_{\mathcal{T}_i}(f_{\psi'_i})$. A meta-learner with parameters ψ is used as the initialization for all $\psi'_i$, and the update for each specific task is $\psi'_i = \psi - \alpha \nabla_\psi \mathcal{L}_{\mathcal{T}_i}(f_\psi)$, where α is the inner learning rate. The update for ψ is $\psi \leftarrow \psi - \beta \nabla_\psi \sum_{i=1}^{N_1} \mathcal{L}_{\mathcal{T}_i}(f_{\psi'_i})$, where β is the learning rate for ψ.
3 OUR APPROACH Before we dive into Bayesian stochastic games, we first consider the stochastic game in which we face a specific opponent with type θ ∈ Θ. We aim to explicitly model the opponent so that we can make predictions about it. The predicted value should be some characteristic of the opponent that can help our agent improve its policy. For example, in games where our reward is related to the final goal of the opponent, we can directly predict its goal. Formally, at some state $s_t \in S$, our agent predicts an estimated value $\tilde{v}(\cdot|s_t, \theta)$, where $\tilde{v}$ is the estimate of the value v, and v represents a characteristic of the opponent such as its goal, next action or next position. For convenience, we use $v_\theta$ to denote $v(\cdot|\theta)$. Then our agent can choose an action according to $\pi_1(\cdot|s_t, \tilde{v}_\theta(s_t))$, while an agent unaware of its opponent can only act according to $\pi_1(\cdot|s_t)$. Thus, the task has been divided into two subtasks: modeling $\tilde{v}_\theta$ of the opponent and learning a policy $\pi_1(\cdot|\tilde{v}_\theta)$. The latter can be considered a standard RL problem, and we can apply various RL methods to solve it. Now assume that opponents have a prior distribution over their types; that is, an opponent with type θ ∈ Θ can be treated as a sample θ ∼ p(θ). If we can collect data from some opponents sampled from p, it is possible to generalize the model and policy to the game with a new opponent. Thus we can consider the opponent modeling and the policy learning as two meta-learning tasks. The former can be treated as an imitation learning or supervised learning task while the latter is an RL task; both can apply MAML for meta-learning. Since the learned policy needs the model to make predictions, we cannot train the two meta-tasks independently. We jointly train the model and the policy with some given opponents. We call our method Meta-Opponent-Agent learning (MOA) to indicate that the model and policy are jointly trained via meta-learning. Finally, when a new opponent comes, we initialize our model and policy with the meta-model and meta-policy. The training procedure is shown in Figure 1.
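The following is a minimal sketch of the MAML-style inner and outer updates described above, applied to the two meta-learners used here (an opponent model with parameters ψ and a policy with parameters φ). For brevity it uses the first-order variant of MAML, and the helper functions `collect`, `model_loss`, and `rl_loss` are assumptions standing in for data collection and the two task losses; this is an illustrative sketch, not the authors' exact implementation.

```python
import copy
import torch


def adapt(module, loss_fn, data, lr_inner):
    """One inner-gradient step on a copy of the meta-parameters (first-order MAML)."""
    adapted = copy.deepcopy(module)
    params = list(adapted.parameters())
    grads = torch.autograd.grad(loss_fn(adapted, data), params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= lr_inner * g
    return adapted


def meta_step(opnet, policy, opponents, collect, model_loss, rl_loss,
              lr_inner=0.01, lr_outer=0.001):
    """One outer update of psi (opponent model) and phi (policy)."""
    meta_params = list(opnet.parameters()) + list(policy.parameters())
    meta_grads = [torch.zeros_like(p) for p in meta_params]
    for opp in opponents:
        S_i, D_i = collect(opnet, policy, opp)              # play with meta-parameters
        opnet_i = adapt(opnet, model_loss, S_i, lr_inner)   # psi'_i
        policy_i = adapt(policy, rl_loss, D_i, lr_inner)    # phi'_i
        S_new, D_new = collect(opnet_i, policy_i, opp)      # play with adapted parameters
        loss = model_loss(opnet_i, S_new) + rl_loss(policy_i, D_new)
        grads = torch.autograd.grad(
            loss, list(opnet_i.parameters()) + list(policy_i.parameters()))
        for acc, g in zip(meta_grads, grads):
            acc += g
    with torch.no_grad():
        for p, g in zip(meta_params, meta_grads):
            p -= lr_outer * g                               # outer (meta) update
    return opnet, policy
```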
3.1 OPPONENT MODELING The opponent in our game is considered a player that will not adapt its policy to our agent. This assumption holds in many practical situations. For example, the customers of a businessman usually have stable preferences. We further assume that there exists a distribution over the policies of the opponents. This assumption is also reasonable in the businessman example, where a specific customer is just a sample from the whole population. Our goal here is to model the current opponent with the help of data collected by playing with other opponents. Formally, we now aim to model the value function v of the opponent from observations. As mentioned above, v can represent many characteristics of the opponent, such as the final goal, next action, next position or any other characteristic we wish to predict. For each type θ ∼ p, the value function is specified as $v_\theta$. Now suppose we have M opponents sampled from the population, where opponent i has a type $\theta_i \sim p(\theta)$. For any given state s as an input for opponent i, the model outputs $\tilde{v}_{\theta_i}(s)$. The task of matching $\tilde{v}_{\theta_i}$ to $v_{\theta_i}$ is treated as a task $\mathcal{T}^o_i = \{(s, v_{\theta_i}(s))\}$ with loss function $\mathcal{L}_{\mathcal{T}^o_i} = \mathrm{dist}(v_{\theta_i}, \tilde{v}_{\theta_i})$, where $\mathrm{dist}(\cdot, \cdot)$ is a distance metric between two v functions; the distance function can vary for different problems. Following the framework of MAML, we use a network called the opponent network (OPNet) f with parameters ψ to learn the function $\tilde{v}$. For each opponent, we collect data $S_i = \{(s, v_{\theta_i}(s))\}$ and use this dataset to update ψ to get the adapted parameters $\psi'_i$. A new dataset $S'_i = \{(s, v_{\theta_i}(s))\}$ is then collected with $f_{\psi'_i}$, and $\{S'_i\}_{i=1}^{M}$ are used together to update ψ. Finally, we use the learned ψ to initialize the model for the current opponent as $f_\psi$. The updating process for the parameters follows the framework of MAML. Following Grant et al. (2018), we are learning an empirical Bayesian model over the opponent population after training with the M opponents. When data for the new opponent are observed, it is easier to adapt to the new opponent with such a prior model.
3.2 POLICY LEARNING FOR THE AGENT With a model of the opponent, our agent can learn a better policy than one that is unaware of the opponent. The policy for our agent is also trained by a meta-learning process, similar to the opponent modeling process above. To distinguish it from the notation for opponent modeling, we use φ to represent the parameters of the agent's policy. The agent can employ various RL methods to learn the policy; we use Dueling DQN (Wang et al., 2016) as our learning method and denote by $g_\phi$ the Dueling DQN mapping with parameters φ. The M opponents yield M meta-training tasks $\{\mathcal{T}^a_i\}_{i=1}^{M}$. For opponent i, at state $s_t$, OPNet predicts a value $f_\psi(s_t)$. The policy $\pi^a_i$ of our agent is defined on the augmented state $(s_t, f_\psi(s_t))$, and the action $a_t$ the agent chooses is sampled from $\pi^a_i(s_t, f_\psi(s_t))$. The agent then gets an immediate reward $r_t$. We collect a dataset $D_i = \{(s_t, f_\psi(s_t), a_t, r_t)\}$ for the task $\mathcal{T}^a_i$. Similar to the previous part, we update φ with $D_i$ to get the adapted parameters $\phi'_i$. Next, we use $f_{\psi'_i}$ and $g_{\phi'_i}$ to collect a dataset $D'_i = \{(s_t, f_{\psi'_i}(s_t), a_t, r_t)\}$, and the parameter φ is in turn updated with $\{D'_i\}_{i=1}^{M}$. When the agent finally meets the new opponent, it uses $g_\phi$ as the initialization for its policy and improves it from there. From the Bayesian point of view, we can consider the learned $g_\phi$ as an approximation of the Bayesian optimal policy against the opponent distribution. When we meet a new opponent, we initialize the policy with $g_\phi$ to accelerate learning by guiding the agent to explore in promising directions.
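To illustrate how the OPNet prediction enters the agent's policy, the sketch below builds the augmented input $(s_t, f_\psi(s_t))$ and feeds it to a dueling Q-network. The layer sizes, module names, and the usage example at the bottom are assumptions for illustration rather than the architecture reported in the paper.

```python
import torch
import torch.nn as nn


class DuelingQNet(nn.Module):
    """Dueling DQN head over the augmented state (state features + OPNet prediction)."""

    def __init__(self, state_dim, pred_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim + pred_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)              # V(s)
        self.advantage = nn.Linear(hidden, n_actions)  # A(s, a)

    def forward(self, state, opponent_pred):
        x = self.trunk(torch.cat([state, opponent_pred], dim=-1))
        v, a = self.value(x), self.advantage(x)
        return v + a - a.mean(dim=-1, keepdim=True)    # Q(s, a)


# Hypothetical usage: the OPNet output augments the state before action selection.
# q_net = DuelingQNet(state_dim=2, pred_dim=64, n_actions=5)
# q_values = q_net(state, opnet(state))
# action = q_values.argmax(dim=-1)
```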
3.3 ALGORITHM We train $f_\psi$ and $g_\phi$ jointly, since the prediction of the opponent is part of the input for the agent. More concretely, in each iteration our agent plays with each opponent $i \in [M]$: it uses the OPNet to predict the opponent, and the Dueling DQN uses the prediction as part of its input to give a policy. Then both OPNet and Dueling DQN are updated. The algorithm is shown in Algorithm 1.
Algorithm 1 Meta-Opponent-Agent Learning
Require: prior $p(\theta)$ over the types of opponents
Initialization: Sample M opponents, opponent i having type $\theta_i \sim p(\theta)$; initialize $f_\psi$ and $g_\phi$.
repeat
  for all opponents $i \in [M]$ do
    The agent uses $f_\psi$ and $g_\phi$ to play with opponent i.
    Collect data $S_i = \{(s_t, v_{\theta_i}(s_t))\}$ and $D_i = \{(s_t, f_\psi(s_t), a_t, r_t)\}$.
    Evaluate $\nabla_\psi \mathcal{L}_{\mathcal{T}^o_i}(f_\psi)$ using $S_i$ and $\nabla_\phi \mathcal{L}_{\mathcal{T}^a_i}(g_\phi)$ using $D_i$.
    Compute adapted parameters $\psi'_i = \psi - \alpha \nabla_\psi \mathcal{L}_{\mathcal{T}^o_i}(f_\psi)$ and $\phi'_i = \phi - \alpha \nabla_\phi \mathcal{L}_{\mathcal{T}^a_i}(g_\phi)$.
    The agent uses $f_{\psi'_i}$ and $g_{\phi'_i}$ to play with opponent i.
    Collect data $S'_i = \{(s_t, v_{\theta_i}(s_t))\}$ and $D'_i = \{(s_t, f_{\psi'_i}(s_t), a_t, r_t)\}$.
  end for
  Update $\psi \leftarrow \psi - \beta \nabla_\psi \sum_i \mathcal{L}_{\mathcal{T}^o_i}(f_{\psi'_i})$ using $\{S'_i\}$ and $\phi \leftarrow \phi - \beta \nabla_\phi \sum_i \mathcal{L}_{\mathcal{T}^a_i}(g_{\phi'_i})$ using $\{D'_i\}$.
until done
4 RELATED WORK Many works concentrate on multi-agent reinforcement learning tasks. Works like Lanctot et al. (2017) connect these tasks with game theory to adapt general RL methods to multi-agent scenes. Some works aim to solve equilibria for specific games; for example, Silver et al. (2017) propose a self-play deep RL method for two-player zero-sum perfect-information games, and their work on Go outperforms human players. Other research aims to learn policies for agents via imitation learning (Oh et al., 2014; Thurau et al., 2004). These works only wish to identify good policies and do not aim to exploit specific opponents. There are also works that address opponent modeling. Raileanu et al. (2018) propose a method to automatically infer the goal of others by using the agent itself. However, this method is not suitable for games that are not goal-directed. What's more, concentrating on specific opponents can lead to a weak policy against other opponents. Works like Johanson et al. (2008); Ganzfried & Sandholm (2015) attempt to model specific opponents while learning a robust policy. These works address the problem from the game-theoretic point of view and do not make assumptions about the opponents. If we consider the opponent population and assume that there is a prior distribution over the opponents' policies, it is in fact easier to infer information about the current opponent with the help of other opponents. Rabinowitz et al. (2018) connect opponent modeling with theory of mind, a concept from psychology. This work uses meta-learning to build flexible and sample-efficient models of opponents. However, it ignores the process of policy learning. Our work attempts to gain information for both opponent modeling and policy improvement from the given opponents.
5 EXPERIMENTS In this section, we test our method on three different kinds of two-player games, each with some specific uncertainty:
• Chasing game: a game where the opponent has a private type set with finitely many elements;
• Blocking game: a game where the opponent has a private type set with infinitely many elements;
• Recommending game: a game where the opponent has a private type set with infinitely many elements and the agent has random reward functions.
All these games are grid games where both players 1 and 2 choose a one-step direction as their action. In the grid world, each action can only move the player to an adjacent cell or keep it at its current position for one step. Since all these games are based on grid worlds, we choose the opponent's value function to be its goal or its next position. Thus, we use the cross entropy as $\mathrm{dist}(v_\theta, \tilde{v}_\theta)$ from Section 3.1.
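As a concrete example of this distance, the snippet below computes a cross-entropy loss between OPNet's predicted distribution over grid cells (the opponent's goal or next position) and the observed target cell. The tensor shapes and the function name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def opponent_model_loss(logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Cross-entropy dist(v_theta, v_tilde_theta) for a categorical opponent value.

    logits: OPNet output over the 64 cells of an 8x8 grid, shape (batch, 64).
    target: index of the opponent's observed goal / next position, shape (batch,).
    """
    return F.cross_entropy(logits, target)
```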
For each game, we test our method MOA against three baseline methods. We first introduce these baselines and then show the experimental results for each game. Further, in the design of the games, we do not bound the rewards in [0, 1]; we use larger rewards to make training easier.
5.1 BASELINE METHODS Meta-Opponent (MO): For this method, we only meta-train the model of the opponents when playing with the training opponents. We then use this model to initialize our model for the new opponent, and our agent learns its policy from scratch. More concretely, we only meta-train the parameters ψ and then train $\psi'_{M+1}$ and $\phi'_{M+1}$ for the new opponent M + 1. This baseline is used to show whether training the agent via MAML helps it adapt to a new opponent more efficiently. Meta-agent without model (MA): We do not model the opponents in this method. Dueling DQN is used to directly learn $\pi^a(\cdot|s)$ for s ∈ S via MAML. This baseline is used to show the effect of the opponent model. No Meta-Learning (NM): To demonstrate that meta-learning can indeed take advantage of the information from other opponents and learn a good policy with fewer samples, we directly train the model and the agent for the newly arriving opponent.
5.2 CHASING GAME In the chasing game, a grid board of size 8×8 is given, as shown in Figure 2a. Player 1 is represented as the red grid and player 2 as the green one. Player 2 has a private goal, which can be considered its specific type and is unknown to player 1. The goal is one specific cell on the map. Each player 2 has a specific goal, while the player-2 population has a distribution over the 64 cells; that is, the set of types is finite. In this chasing game, the cells that are close to the top-left corner are preferred. The probability of the goal's location over the map is visualized in Figure 2b. The rules of the game are as follows. Both players take actions simultaneously. One game lasts for at least 15 steps. When a game begins, player 2 goes directly to its goal and stops there. The only way for player 1 to get rewards is to chase player 2 to its goal before the game ends. If player 1 finds player 2, it gets a reward of 10 and the game ends. If it is in one of the 8 neighboring cells of player 2 at the end of the game, it gets a reward of 5; otherwise, its reward is 0. We test the four methods MOA, MA, MO and NM. MOA, MA and MO all require the meta-training process: 20 opponents are sampled as the meta-tasks, and each method trains for 800 iterations to obtain the meta-learners, which are then used to initialize its networks. Then 10 new opponents are sampled as testing tasks, and all four methods train for 4000 games on each testing task. We compare their performance along the testing process by averaging the rewards over the 10 testing tasks. In this game, the type of player 2 is its goal, and we directly model the goal of player 2. Figure 2c gives the results of the four methods during the testing process; we plot the average rewards over the 10 opponents. It is easy to see that MOA outperforms the other methods. Notice that the reward for MOA first drops and then rises as the testing process goes on. This reflects the process of the meta-learner adapting to the current task.
Intuitively, the meta-model first updates itself to the current opponent, and then the meta-policy improves itself to fit the model. NM learns the testing task without meta-training for either the model or the policy. It can still improve its policy but costs many more games than MOA. The comparison of MOA and NM shows that we gain benefits by training across opponents via meta-learning. The MO method only meta-trains the model, and the result shows that MO performs similarly to NM. This indicates that simply meta-training the opponent model does not by itself improve efficiency. MA performs the worst among all these methods, even though it has a meta-training process. This is because it does not build a model of the opponent; ignoring the existence of the opponent results in a failure to improve the policy.
5.3 BLOCKING GAME The blocking game, shown in Figure 3a, has a 9×7 map. In the initial state, player 1 is the red grid and player 2 is the green one. The goal of player 2 is to take one of five paths to reach the top two rows, while the goal of player 1 is to block player 2 from reaching that goal area. There are 5 paths that player 2 can take to reach the goal area, and the type of player 2 is its probability of choosing each path. Thus the type set of player 2 is a simplex, which has infinitely many elements. Each path has only one exit. If player 1 can block player 2 at the exit, player 1 gets a reward of 10. Otherwise player 2 passes the exit and player 1 gets a reward of -10. The training setting for the blocking game is a bit different from the chasing game. Only 15 opponents are sampled as the meta-tasks. Each opponent has a distribution over the 5 paths and samples one path per game. The prior distribution over the opponent's type is a Dirichlet distribution with all five parameters equal to 0.5. Each method trains for 800 iterations to get the meta-parameters. Then 10 new opponents are sampled as testing tasks, and all four methods train for 4000 games. In this game, it is hard to directly model the opponent's type as the value, so we simply choose its next position as the value. Figure 3b shows the performance of MA and MOA along the meta-training process. After 50 iterations, we collect the rewards our agent obtains against the 15 training opponents. Since our agent plays a random policy, we average the rewards over 100 games against each opponent. Notice that MO only has its model trained during the meta-training process, so we do not test it here. The result shows that MOA improves quickly while MA can hardly improve, which again demonstrates the importance of opponent modeling. Figure 3c gives the rewards along the testing process. It is easy to see that MOA needs fewer than 500 games to adapt to the new opponents while MO and NM improve slowly. Again, MO and NM perform similarly. The results are similar to those of the chasing game.
5.4 RECOMMENDING GAME The recommending game, shown in Figure 4a, has a 7×7 map. Player 1 is red and player 2 is green. There are 4 blue grids on the left of the map, which are goals for player 2, and 4 purple grids on the right of the map, which are objects for player 1. This game is similar to the process of a businessman recommending goods to his current customer. In this game, player 2 also has a private distribution over the 4 goals. This distribution is considered its type, and the prior distribution is a Dirichlet distribution with four 0.5 as parameters. Player 2 samples a goal from its type distribution and goes directly to that goal. Player 1 needs to recommend one of the 4 purple objects to player 2. When player 1 reaches one of the objects or the game has lasted 16 steps, the game ends. Player 1 only gets a reward when it reaches an object. Assume that the vertical coordinate of player 2's goal is $y_2$ and that of the recommended object is $y_1$. Then the reward for player 1 is a sample from the Gaussian distribution $\mathcal{N}(\mu, 1)$, where $\mu = 10 - \tfrac{3}{2}|y_1 - y_2|$.
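As a short worked example of this reward scale (the specific coordinates are illustrative): recommending the object on the same row as player 2's goal gives $\mu = 10 - \tfrac{3}{2}\cdot 0 = 10$, whereas an object six rows away, the largest possible separation on a 7×7 map assuming goals and objects can lie in the extreme rows, gives $\mu = 10 - \tfrac{3}{2}\cdot 6 = 1$; the realized reward is then a unit-variance Gaussian sample around that mean.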
The experimental details are almost the same as for the blocking game, except that we choose player 2's goal as the value to predict. Figure 4b demonstrates again that MOA performs well during meta-training, and Figure 4c shows that MOA outperforms the other three methods during the testing process; MOA is indeed sample-efficient. The results for the recommending game are similar to those for the chasing and blocking games; the random rewards simply bring more variance to the training and testing processes. Finally, we test all four methods on 100 opponents sampled from the opponent distribution. For each opponent, we play just one game with it; that is, we do not conduct any learning process for the new opponents. The results for all three games are given in Table 1. As the table shows, MOA reaches relatively high rewards while the other methods perform poorly. These results demonstrate that MOA gains prior information from the meta-learning process.
6 CONCLUSION When facing other agents, it is beneficial to build models of the opponents and find a corresponding good policy. This approach can be sample-inefficient, since it takes many observations to build a model and then learn a policy. We propose a method that employs the information learned from experience with other opponents to speed up the learning process for the current opponent. This method is suitable for many practical situations where the opponent population has a relatively stable distribution over policies. We apply meta-learning to jointly train the opponent modeling and policy improvement processes. Experimental results show that our method is sample-efficient.
1. What are the strengths and weaknesses of the proposed approach in the paper regarding its contribution to fast adaptation to new behavior in reinforcement learning? 2. How does the reviewer assess the quality and coherence of the experiments conducted in the paper, particularly regarding the chasing game and blocking game tasks? 3. What are the concerns regarding the baseline method used in the comparison, specifically the NM baseline, and how do they affect the interpretation of the results? 4. Are there any issues with the hyperparameters used or the training loop in the experiments that need to be addressed? 5. How might the action space parametrization affect the dynamics of the blocking game, and what implications does this have for the perfect play possible without meta-learning or opponent modeling? 6. What are the inconsistencies in the reported rewards in the one-shot transfer setting, and how might they relate to the description of the reward function? 7. Are there any issues with the lack of precision in some aspects of the experiment, such as the description of the opponent's goal in the chasing game, the termination condition, and the opponents sampled from the distribution? 8. How might typos and grammatical mistakes in the paper affect its overall clarity and interpretability?
Review
Review This paper focuses on fast adaptation to new behaviour of the other agents of the environment, be it opponents or allies. To achieve this, a method based on MAML is proposed, with two main components: 1) Learn a model of some characteristics of the opponent, such as "the final goal, next action, or any other character we wish to predict" 2) Learn a policy that takes as input the output of the model and the state, and that outputs the action of the agent. The goal is that after a phase of meta-learning, where the agent learns how to play against some agents sampled from the distribution of opponents, it can quickly adapt to a new unseen agent. ("Experimental results show that the agent can adapt to the new opponent with a small number of interactions with the opponent") While the motivation of this work is clear and the goal important for the RL community, the experiments fail to support the claim above. The first task they demonstrate their approach on is a chasing game, where the opponent has a private goal cell it tries to reach, and the agent has to chase it. At the end of the game, it gets a reward of 10 if it is on the same cell, 5 if in an adjacent cell, and 0 otherwise. The exact details of the dynamics are not really clear; for example, what happens in the event of a collision is not mentioned, and the termination condition is not mentioned either. (The text reads "One game lasts for at least 15 steps"; maybe it was meant to be "at most 15 steps"?) The first incoherent aspect of this experiment is that they use 800 iterations of meta-learning, and then, when testing, they fine-tune their networks against each test opponent for 4000 games. That is, they use 5 times more games when fine-tuning than when pre-training, which contradicts the claim "the agent can adapt to the new opponent with a small number of interactions with the opponent" (this is not really few-shot learning anymore). Furthermore, they compare their approach with various ablations of it: they either remove the meta-learning for the model (MA), for the policy (MO), or both (NM). The description of the NM baseline is not very precise, but it seems that it simply boils down to a regular (dueling) DQN: in this setting, since the opponent appears to have a fixed goal, fine-tuning against a single opponent simply boils down to learning a policy that reaches a specific cell of the grid, which we can expect DQN to solve perfectly on an 8x8 grid with 4000 training games. And yet, the curve for NM in Figure 2c is not only really noisy, but also falls far from the optimum, which the authors do not discuss. There might be a problem with the hyperparameters used or the training loop. The second task is a blocking game: the opponent has to choose amongst 5 paths to get to the top, and the agent has to choose the same path in order to block it. The action space should be precisely described; as it stands, it is difficult to understand the dynamics. There are at least two possible ways to parametrize the actions: 1) Similarly to the chasing game, the agents could move in the 8 directions. In that case, based on Figure 3a, it seems that the agent can just mirror the move of the opponent: since the moves are simultaneous, that would mean that the agent is always one step late, but each path is long enough for the agent to reach the exit before its opponent (it is explicitly stated that the agent needs to block the exit, and that the opponent will not change paths during one game).
That would imply that perfect play is possible without any meta-learning or opponent modeling, and once again the NM baseline (or any vanilla DQN/policy-gradient method) should perform much better. 2) One other alternative is to have an action space of 5 actions, which correspond to the 5 paths. In that case the game boils down to a bandit, since both agents only take one action. Note that under this assumption, the random policy would get the right path (and reward +10) with probability 1/5 and a wrong one (reward -10) with probability 4/5, which leads to an expected reward of -10*4/5 + 10/5 = -6. This is not consistent with Figure 3c, since at the beginning of the training, the NM agent should have a random policy, and yet the graph reports an average reward of -10 (the -6 mark seems to be reached after ~1000 episodes). The last task boils down to one opponent that reaches one cell on the left, and the agent must reach the matching cell on the right. In this setting, the same discussion on the action space as for the second task can be made. We note that the episode lasts 16 steps, and the distance from the center to any cell is at most 4 steps: an optimal policy would be to wait for 4 steps in the middle, and as soon as the opponent has reached its goal, use the remaining 12 steps to get to the mirror one. Once again, this policy doesn't require any prediction of the opponent's goal, and it's hard to believe that DQN (possibly with an LSTM) is not able to learn that near perfectly. In a last test the authors compare the performance of their algorithms in a one-shot transfer setting: they sample 100 opponents for each task and play only one game against each (no fine-tuning). It is not clear whether special care has been taken to ensure that none of the sampled opponents has already been seen during training. We note that the rewards reported for MO and MA (resp. 0.0 and -0.08) are not consistent with the description of the reward function: in the worst case, the opponent chooses a goal at one extreme (say y1 = 1) and the agent chooses an object at the other end (say y2 = 7). In that case, the reward obtained is sampled from a Gaussian with mean \mu = 10 - 3/2 * |y1 - y2| (which in this case evaluates to 1) and variance 1. This is highly unlikely to give such a low average reward over 100 episodes (note that this is the worst case; if the opponent's goal is not at the extreme, the expected reward is necessarily higher). One possibility is that the agent never reaches an object, but in that case it would imply that the meta-learning phase was problematic. We also note that it is made explicit that the MOA, MO and MA methods are tested after meta-training, but nothing is specified for NM. Has it been trained at all? Against which opponents? Is it just a random policy? There are too many missing details for the results to be interpretable. Apart from that, the paper contains a significant number of typos and grammatical mistakes; please proofread carefully. Some of them are: "To demonstrate that meta-learning can do take" "player 1 is the red grid and player 1 is the green one" "we further assume that there exist a distribution" "the goal’s location over the map is visualize in figure" "Both players takes actions simultaneously"
ICLR
1. What is the main contribution of the paper regarding multi-agent learning? 2. How does the proposed approach differ from prior works in terms of opponent modeling? 3. What are the limitations of the experimental design and how do they impact the results? 4. Can the approach be applied to more complex tasks and opponent modeling scenarios? 5. What is the significance of the drop in reward in the MOA method during the testing process? 6. How does the baseline method perform compared to random play, and what does this indicate about the opponents' strategies? 7. What is the maximum achievable reward in the blocking game, and how does it relate to the performance of the MOA method?
Review
Review The paper presents an approach to multi-agent learning based on the framework of model-agnostic meta-learning. The originality of the approach lies in the decomposition of the policy into two parts, with applications to opponent modeling: the first part of the policy tries to predict some important characteristic of the other agent (the characteristic itself is prior knowledge; the value it takes for a particular opponent is learnt from observations). The second part of the policy takes the estimated characteristic of the opponent and the current state as input and produces the action. All networks are trained within the MAML framework. The overall approach is motivated by the task of opponent modeling for multi-agent RL. The approach makes sense overall -- the "value" of the opponent is valuable prior knowledge. The originality is limited though. In this kind of paper, I would expect the experiments to make a strong case for the approach. Unfortunately, the experiments are extremely toyish and admittedly not really "multi-agent": the "opponent" has a fixed strategy that does not depend on what the agent is currently doing (it is therefore not really an opponent). The experimental protocol is more akin to multitask RL than multi-agent RL, and it is unclear whether the approach could/should work for opponent modeling even on tasks of low complexity. In other words, the experimental section does not address the problem that is supposed to be addressed (opponent modeling). Other comments: - "The opponent in our game is considered as some player that won’t adapt its policy to our agent." -> in the experiments it is worse than that: the opponent's actions do not even depend on what the agent is doing... So admittedly the experiments are not really "multi-agent" (or "multi-agent" where the "opponent" is totally independent of what the agent is currently doing). - "Each method trains 800 iterations to get the meta learners and use them to initialize their networks. Then 10 new opponents are sampled as testing tasks. Four methods all train 4000 games for each testing task." -> what do 800 iterations mean? Do they mean 800 episodes? (It would seem strange for a "fast adaptation" task to have fewer episodes for training than for testing.) - "Notice that the reward trend for MOA first drops and then raises as the testing process goes on. This shows the process that the meta-learner adapt to the current task." -> the adaptation to the new opponent does not really explain the drop? - Figure 3(c): the MA baseline has a reward of ~-10, which is worse than random (a uniform random placement at the 5 strategic positions would get 10*1/5 - 10*4/5 = -6). On the other hand, MOA achieves very high rewards, which indicates that the opponents' strategies have low entropy. What is the best achievable reward on the blocking game?
ICLR
Title Sample-efficient policy learning in multi-agent Reinforcement Learning via meta-learning Abstract To gain high rewards in multi-agent settings, it is sometimes necessary to understand the other agents and make correspondingly optimal decisions. We can solve such tasks by first building models of the other agents and then finding the optimal policy given these models. Obtaining an accurate model requires many observations, which can be sample-inefficient. Moreover, the learned model and policy can overfit to the current agents and fail to generalize when those agents are replaced by new ones. In many practical situations, each agent we face can be considered a sample from a population with a fixed but unknown distribution. We can therefore treat playing against a specific agent as a task sampled from a task distribution, and apply meta-learning to build models and learn policies. When new agents arrive, we can then adapt to them efficiently. Experiments on grid games show that our method quickly achieves high rewards. 1 INTRODUCTION Applying Reinforcement Learning (RL) to multi-agent settings requires careful consideration of the influence of other agents. We cannot simply treat other agents as part of the environment and apply independent RL methods (Lanctot et al., 2017) when their actions affect the payoff of the agent being trained. For example, consider the two-person ultimatum bargaining game. One player proposes a deal to split a fixed amount of money between the two, and the other player decides whether to accept it. If the second player accepts the proposal, they split the money; if the proposal is refused, both get zero. Experimental results (Güth et al., 1982) show that in real life the second player decides according to whether he or she judges the outcome to be fair, rather than making the obviously rational decision. Thus, the first player needs to predict how the second player will react in order to make an acceptable proposal. To exploit other agents and find the corresponding optimal policy, we need to understand these agents. In this paper, we call all other agents "opponents" to distinguish them from our agent, even if they may have a cooperative relationship with it. For simplicity, we only consider tasks with a single opponent; the extension to more opponents is straightforward. A general way to exploit an opponent is to build a model of it from observations. This model can characterize any feature of interest, such as the opponent's next action or its final goal. Such a model makes predictions about the opponent and thus turns the two-agent task into a single-agent decision-making problem, to which various RL methods can be applied. An accurate model of the opponent is needed to support decision making. Previous works (He et al., 2016; Raileanu et al., 2018) propose methods to model the opponent, but obtaining a precise model generally requires many observations and therefore many interactions with the opponent. Moreover, even if we can model the opponent precisely, a key drawback of the above process is that the learned policy comes with no guarantee against any other opponent. Matters are even worse if opponents have private types that are unknown to us: new opponents with different types can have different policies or even different payoffs.
Therefore, it seems that whenever a new opponent arrives, we have to learn a policy from scratch. In some practical situations, however, the opponents as a whole follow a distribution over the possible types. Consider again the ultimatum bargaining game: Bahry & Wilson (2006) show that people of different ethnicities may have different standards of fairness. Thus, if we take player 2's type to be its judgment of fairness, there is a distribution over types that depends on the ethnic distribution. Given that opponents follow a distribution, we can employ previously seen opponents to speed up opponent modeling and policy improvement for the current opponent. If we consider policy learning against a specific opponent as a task, our goal is to train a policy on various tasks so that it can efficiently adapt to a good policy on a new task with few training samples. This is exactly a meta-learning problem. We employ Model-Agnostic Meta-Learning (MAML) (Finn et al., 2017) to conduct meta-learning. Rabinowitz et al. (2018) applied meta-learning to understand opponents, but that work does not address policy improvement for the agent being trained. We apply meta-learning to opponent modeling and policy learning separately while training the two meta-learners jointly. We then use the meta-learners to initialize the model and policy for the new opponent. Experimental results show that the agent can adapt to the new opponent with a small number of interactions. 2 PRELIMINARY In this section, we introduce the preliminaries of our work. We formalize our task based on stochastic games (Littman, 1994), a general framework for multi-agent problems, and Bayesian games (Harsanyi, 1967), which formalize players' incomplete information. We then introduce Model-Agnostic Meta-Learning (MAML), a meta-learning algorithm applicable to gradient-based methods. Our approach employs MAML as the meta-learning method to train across tasks. 2.1 FORMALIZATION We first introduce stochastic games. Formally, a stochastic game with N players is defined as 〈N, S, {A_1, ..., A_N}, T, {R_1, ..., R_N}〉, where S is the set of states, A_i is the action set for player i, R_i : S × A_1 × ... × A_N → ∆([0, 1]) is the reward function for player i (∆(C) denotes the set of probability distributions over a set C), and T : S × A_1 × ... × A_N → ∆(S) is the transition function. The goal of player i is to maximize its own cumulative reward E[Σ_{t=0}^{T} γ^t r_{i,t}], where r_{i,t} is the reward player i receives at time step t and γ is the discount factor. π_i : S → ∆(A_i) denotes the policy for player i, which maps each state to a distribution over actions. Our agent takes the role of one player; without loss of generality, we assume it is player 1. In this paper we only consider two-player stochastic games. Bayesian games further introduce types for players. Since our agent takes the role of player 1, different types of player 1 can be treated as different states, so we only consider a set of types Θ for player 2. A specific opponent playing player 2 has its own θ ∈ Θ, which is unknown to player 1, and there is a prior distribution p(θ) over the population of opponents. Under this setting, each θ ∈ Θ has a corresponding reward function R_i(s, a_1, a_2, θ) for each i ∈ {1, 2}, s ∈ S, a_1 ∈ A_1, and a_2 ∈ A_2.
Therefore, combining the above concepts, we formalize our tasks as Bayesian stochastic games 〈N, S, {A_1, ..., A_N}, T, {R_1, ..., R_N}, Θ, p〉, where R_1, ..., R_N are the modified reward functions that depend on θ ∈ Θ and p is the prior distribution over Θ for the types of player 2. 2.2 MODEL-AGNOSTIC META-LEARNING Meta-learning aims to quickly train a model for a new task with the help of data from many similar tasks. Formally, we denote by {T_i}_{i=1}^{N_1} the N_1 tasks given for training; N_2 further new tasks {T_i}_{i=N_1+1}^{N_1+N_2} are used for testing. In the meta-learning setting, we assume the tasks are sampled from a distribution p(T) over all possible tasks. For each task T_i, we want to identify a mapping f_i that maps each input x to its corresponding output y. Model-Agnostic Meta-Learning (MAML) is one of the best-known meta-learning algorithms and can be applied to any model trained with gradient descent. Denote the parameters of each f_i by ψ'_i, and the loss of f_{ψ'_i} on task T_i by L_{T_i}(f_{ψ'_i}). A meta-learner with parameters ψ is used as the initialization for all ψ'_i, and the update for each specific task is ψ'_i = ψ − α ∇_ψ L_{T_i}(f_ψ), where α is the inner learning rate. The update for ψ is ψ ← ψ − β ∇_ψ Σ_{i=1}^{N_1} L_{T_i}(f_{ψ'_i}), where β is the learning rate for ψ. 3 OUR APPROACH Before diving into Bayesian stochastic games, we first consider a stochastic game against a specific opponent with type θ ∈ Θ. We aim to explicitly model the opponent so that we can make predictions about it. The predicted value should be some characteristic of the opponent that helps our agent improve its policy; for example, in games where our reward depends on the opponent's final goal, we can directly predict that goal. Formally, at some state s_t ∈ S, our agent predicts an estimate ṽ(·|s_t, θ), where ṽ is an estimate of the value v, and v represents a characteristic of the opponent such as its goal, next action, or next position. For convenience, we write v_θ for v(·|θ). Our agent can then choose an action according to π_1(·|s_t, ṽ_θ(s_t)), whereas an agent unaware of its opponent can only act according to π_1(·|s_t). The task is thus divided into two subtasks: modeling ṽ_θ for the opponent and learning a policy π_1(·|ṽ_θ). The latter is a standard RL problem to which various RL methods can be applied. Now assume that opponent types follow a prior distribution; that is, an opponent with type θ ∈ Θ can be treated as a sample θ ∼ p(θ). If we can collect data from opponents sampled from p, we may be able to generalize the model and policy to a game with a new opponent. We therefore treat opponent modeling and policy learning as two meta-learning tasks: the former is an imitation or supervised learning task, and the latter is an RL task. Both can use MAML for meta-learning. Since the learned policy relies on the model's predictions, we cannot train the two meta-tasks independently; we jointly train the model and policy with a set of given opponents. We call our method Meta-Opponent-Agent learning (MOA) to indicate that the model and policy are jointly trained via meta-learning. Finally, when a new opponent arrives, we initialize our model and policy with the meta-model and meta-policy. The training procedure is shown in Figure 1. 3.1 OPPONENT MODELING The opponent in our game is assumed to be a player that does not adapt its policy to our agent. This assumption holds in many practical situations.
For example, the customers of a businessman usually have stable preferences. We further assume that there exists a distribution over the opponents' policies. This assumption is also reasonable in the businessman example, where a specific customer is just a sample from the whole population. Our goal here is to model the current opponent with the help of data collected by playing with other opponents. Formally, we aim to model the opponent's value function v from observations. As mentioned above, v can represent many characteristics of the opponent, such as its final goal, next action, next position, or any other characteristic we wish to predict. For each type θ ∼ p, the value function is written v_θ. Suppose we have M opponents sampled from the population, and opponent i has type θ_i ∼ p(θ). For any given state s, the model outputs ṽ_{θ_i}(s) for opponent i. Matching ṽ_{θ_i} to v_{θ_i} is treated as task T^o_i = {(s, v_{θ_i}(s))} with loss function L_{T^o_i} = dist(v_{θ_i}, ṽ_{θ_i}), where dist(·, ·) is a distance metric between two v functions; the choice of distance can vary across problems. Following the MAML framework, we use a network called the opponent network (OPNet), f with parameters ψ, to learn the function ṽ. For each opponent, we collect a dataset S_i = {(s, v_{θ_i}(s))} and use it to update ψ, obtaining the adapted parameters ψ'_i. A new dataset S'_i = {(s, v_{θ_i}(s))} is then collected with f_{ψ'_i}, and {S'_i}_{i=1}^M are used together to update ψ. Finally, we use the learned ψ to initialize the model f_ψ for the current opponent. The parameter updates follow the MAML framework. Following Grant et al. (2018), after training with the M opponents we are effectively learning an empirical Bayesian model over the opponent population; once data for the new opponent are observed, such a prior model makes adaptation easier. 3.2 POLICY LEARNING FOR THE AGENT With a model of the opponent, our agent can learn a better policy than one that is unaware of the opponent. Policy learning for our agent is also trained via meta-learning, analogous to the opponent-modeling process above. To distinguish it from the opponent-modeling notation, we use φ for the parameters of the agent's policy. The agent can employ various RL methods to learn the policy; we use Dueling DQN (Wang et al., 2016), and denote the Dueling DQN mapping with parameters φ by g_φ. The M opponents yield M meta-training tasks {T^a_i}_{i=1}^M. For opponent i, at state s_t, OPNet predicts a value f_ψ(s_t). The agent's policy π^a_i is defined on the augmented state (s_t, f_ψ(s_t)), and the action a_t the agent chooses is sampled from π^a_i(s_t, f_ψ(s_t)). The agent then receives an immediate reward r_t. We collect a dataset D_i = {(s_t, f_ψ(s_t), a_t, r_t)} for task T^a_i. As before, we update φ with D_i to obtain the adapted parameters φ'_i. Next, we use f_{ψ'_i} and g_{φ'_i} to collect a dataset D'_i = {(s_t, f_{ψ'_i}(s_t), a_t, r_t)}, and φ is in turn updated with {D'_i}_{i=1}^M. When the agent finally meets the new opponent, it uses g_φ as the initialization for its policy and then improves it. From a Bayesian point of view, the learned g_φ can be regarded as an approximate Bayes-optimal policy against the opponent distribution. When we meet a new opponent, initializing the policy with g_φ accelerates learning by guiding the agent toward promising directions of exploration.
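To make the two subtasks of Sections 3.1 and 3.2 concrete, the following is a minimal sketch of a single inner-loop adaptation step for one opponent, written in PyTorch. It is an illustration under simplifying assumptions rather than the authors' implementation: OPNet and QNet are hypothetical stand-ins (the paper uses a Dueling DQN for the policy), the per-opponent data tensors are assumed to be pre-collected, and the update shown is a first-order approximation of the MAML inner step rather than the full second-order update.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the paper's networks: OPNet f_psi predicts the
# opponent's characteristic (here, a distribution over candidate goals), and the
# agent's Q-network g_phi takes the state concatenated with that prediction.
class OPNet(nn.Module):
    def __init__(self, state_dim: int, n_goals: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_goals))

    def forward(self, s):
        return self.net(s)  # logits over the opponent's possible goals

class QNet(nn.Module):
    def __init__(self, state_dim: int, n_goals: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim + n_goals, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s, goal_probs):
        # Policy input is the augmented state (s_t, f_psi(s_t)).
        return self.net(torch.cat([s, goal_probs], dim=-1))

def adapt_to_opponent(opnet, qnet, states, goals, actions, q_targets, lr=0.1):
    """One first-order inner-loop step for a single opponent: copy the
    meta-parameters and take a gradient step on that opponent's data S_i, D_i."""
    opnet_i, qnet_i = copy.deepcopy(opnet), copy.deepcopy(qnet)

    # Opponent-model loss: cross entropy between predicted and observed goals.
    model_loss = F.cross_entropy(opnet_i(states), goals)

    # Q-learning-style regression loss on the opponent-conditioned state.
    goal_probs = F.softmax(opnet_i(states), dim=-1).detach()
    q_pred = qnet_i(states, goal_probs).gather(1, actions.unsqueeze(1)).squeeze(1)
    policy_loss = F.mse_loss(q_pred, q_targets)

    for net, loss in ((opnet_i, model_loss), (qnet_i, policy_loss)):
        net.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in net.parameters():
                p -= lr * p.grad  # plain SGD step on the copied parameters
    return opnet_i, qnet_i
```

In the full method, the adapted copies f_{ψ'_i} and g_{φ'_i} would then be used to collect the datasets S'_i and D'_i, and the meta-parameters ψ and φ would be updated with the outer MAML step of Section 2.2, as summarized in Algorithm 1 below.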
3.3 ALGORITHM We train f_ψ and g_φ jointly, since the prediction about the opponent forms part of the agent's input. More concretely, in each iteration our agent plays with each opponent i ∈ [M]: it uses OPNet to predict the opponent, the Dueling DQN uses this prediction as part of its input to produce a policy, and then both OPNet and the Dueling DQN are updated. The procedure is shown in Algorithm 1.
Algorithm 1 Meta-Opponent-Agent Learning
Require: prior p(θ) over opponent types
Initialization: sample M opponents T^o_i, each with type θ_i ∼ p(θ); initialize f_ψ, g_φ.
repeat
  for all T^o_i do
    The agent uses f_ψ and g_φ to play with opponent i.
    Collect data S_i = {(s_t, v_{θ_i}(s_t))} and D_i = {(s_t, f_ψ(s_t), a_t, r_t)}.
    Evaluate ∇_ψ L_{T^o_i}(f_ψ) using S_i and ∇_φ L_{T^a_i}(g_φ) using D_i.
    Compute adapted parameters ψ'_i = ψ − α ∇_ψ L_{T^o_i}(f_ψ) with S_i and φ'_i = φ − α ∇_φ L_{T^a_i}(g_φ) with D_i.
    The agent uses f_{ψ'_i} and g_{φ'_i} to play with opponent i.
    Collect data S'_i = {(s_t, v_{θ_i}(s_t))} and D'_i = {(s_t, f_{ψ'_i}(s_t), a_t, r_t)}.
  end for
  Update ψ ← ψ − β ∇_ψ Σ_{T^o_i} L_{T^o_i}(f_{ψ'_i}) with {S'_i} and φ ← φ − β ∇_φ Σ_{T^a_i} L_{T^a_i}(g_{φ'_i}) with {D'_i}.
until done
4 RELATED WORK Many works concentrate on multi-agent reinforcement learning. Works such as Lanctot et al. (2017) connect these tasks with game theory to adapt general RL methods to multi-agent settings. Some works aim to compute equilibria for specific games; for example, Silver et al. (2017) propose a self-play deep RL method for two-player zero-sum perfect-information games, and their work on Go outperforms human players. Other research aims to learn policies for agents via imitation learning (Oh et al., 2014; Thurau et al., 2004). These works only seek good policies and do not aim to exploit specific opponents. Some works do address opponent modeling. Raileanu et al. (2018) propose a method to automatically infer the goals of others by using the agent itself; however, it is not suitable for games that are not goal-directed. Moreover, concentrating on specific opponents can lead to a weak policy against other opponents. Works such as Johanson et al. (2008) and Ganzfried & Sandholm (2015) attempt to model specific opponents while learning a robust policy; they address the problem from a game-theoretic point of view and make no assumptions about the opponents. If instead we consider the opponent population and assume a prior distribution over opponents' policies, it becomes easy to infer information about the current opponent with the help of other opponents. Rabinowitz et al. (2018) connect opponent modeling with theory of mind, a concept from psychology, and use meta-learning to build flexible and sample-efficient models of opponents; however, that work ignores policy learning. Our work gains information for both opponent modeling and policy improvement from the given opponents. 5 EXPERIMENTS In this section, we test our method on three kinds of two-player games, each with a specific form of uncertainty: • Chasing game: the opponent has a private type set with finitely many elements; • Blocking game: the opponent has a private type set with infinitely many elements; • Recommending game: the opponent has a private type set with infinitely many elements and the agent has a random reward function. All these games are grid games in which both player 1 and player 2 choose a one-step direction as their action.
In the grid world, each action moves the player to an adjacent grid cell or keeps it at its current position for one step. Since all these games are based on grid worlds, we choose the opponent's value function to be its goal or its next position. Accordingly, we use cross entropy as dist(v_θ, ṽ_θ) in Section 3.1. For each game, we test our method MOA against three baseline methods. We first introduce these baselines and then show the experimental results for each game. In the design of the games, we do not restrict rewards to [0, 1]; we make the rewards larger to ease training. 5.1 BASELINE METHODS Meta-Opponent (MO): In this method, we only meta-train the opponent model when playing with the training opponents. We then use this model to initialize the model for the new opponent, and the agent learns its policy from scratch. More concretely, we only meta-train the parameters ψ, and then train ψ'_{M+1} and φ'_{M+1} for the new opponent M + 1. This method tests whether training the agent via MAML helps it adapt more efficiently to a new opponent. Meta-agent without model (MA): We do not model the opponents in this method; a Dueling DQN is used to directly learn π^a(·|s) for s ∈ S via MAML. This method tests the effect of the opponent model. No Meta-Learning (NM): To demonstrate that meta-learning indeed takes advantage of information from other opponents and learns a good policy with fewer samples, we directly train the model and the agent on the new opponent. 5.2 CHASING GAME In the chasing game, a grid board of size 8×8 is given, as shown in Figure 2a. Player 1 is shown as the red cell and player 2 as the green one. Player 2 has a private goal, which can be considered its type and is unknown to player 1. The goal is a specific cell on the map. Each player 2 has a specific goal, while the player 2 population has a distribution over the 64 cells; that is, the set of types is finite. In this chasing game, cells close to the top-left corner are preferred. The probability of the goal's location over the map is visualized in Figure 2b. The rules of the game are as follows. Both players take actions simultaneously. One game lasts for at least 15 steps. When a game begins, player 2 goes directly to its goal and stops there. The only way for player 1 to get a reward is to chase player 2 to its goal before the game ends. If player 1 finds player 2, it gets a reward of 10 and the game ends. If it is in one of the 8 cells neighboring player 2 at the end of the game, it gets a reward of 5. Otherwise, its reward is 0. We test the four methods MOA, MA, MO, and NM. MOA, MA, and MO all require the meta-training process: 20 opponents are sampled as meta tasks, and each method trains for 800 iterations to obtain the meta-learners used to initialize its networks. Then 10 new opponents are sampled as testing tasks, and all four methods train for 4000 games on each testing task. We compare their performance along the testing process by averaging rewards over the 10 testing tasks. In this game, player 2's type is its goal, and we directly model that goal. Figure 2c gives the results of the four methods during the testing process; we plot average rewards over the 10 opponents. MOA clearly outperforms the other methods. Notice that the reward curve for MOA first drops and then rises as testing proceeds; this reflects the meta-learner adapting to the current task.
Intuitively, the meta-model first updates itself to the current opponent, and the meta-policy then improves itself to fit the model. NM learns the testing task without meta-training of either the model or the policy; it can still improve its policy, but it needs many more games than MOA. The comparison between MOA and NM shows that training across opponents via meta-learning is beneficial. The MO method only meta-trains the model, and the results show that MO performs similarly to NM; simply meta-training the opponent model does not improve efficiency. MA performs the worst among all methods, even though it includes a meta-training process, because it does not build a model of the opponent. Ignoring the opponent's existence simply leads to a failure to improve the policy. 5.3 BLOCKING GAME The blocking game, shown in Figure 3a, has a 9×7 map. In the initial state, player 1 is the red cell and player 2 is the green one. The goal of player 2 is to pass through one of the five paths to reach the top two rows, while the goal of player 1 is to block player 2 from reaching the goal area. There are 5 paths that player 2 can take to the goal area, and player 2's type is its probability of choosing each path. The type set for player 2 is thus a simplex, which has infinitely many elements. Each path has only one exit: if player 1 blocks player 2 at the exit, player 1 gets a reward of 10; otherwise player 2 passes the exit and player 1 gets −10. The training setting for the blocking game differs slightly from the chasing game. Only 15 opponents are sampled as meta tasks. Each opponent has a distribution over the 5 paths and samples one path per game. The prior distribution over opponent types is a Dirichlet distribution with five parameters equal to 0.5. Each method trains for 800 iterations to obtain the meta-parameters. Then 10 new opponents are sampled as testing tasks, and all four methods train for 4000 games. In this game it is hard to model the opponent's type directly as the value, so we simply choose its next position as the value. Figure 3b shows the performance of MA and MOA during meta-training. After every 50 iterations, we collect the rewards our agent obtains against the 15 training opponents; since the agent's policy is stochastic, we average the rewards over 100 games against each opponent. Since MO only trains its model during meta-training, we do not evaluate it here. The results show that MOA improves quickly while MA can hardly improve, which again demonstrates the importance of opponent modeling. Figure 3c gives the rewards during the testing process. MOA needs fewer than 500 games to adapt to the new opponents, while MO and NM improve slowly; again MO and NM perform similarly. The results resemble those of the chasing game. 5.4 RECOMMENDING GAME The recommending game, shown in Figure 4a, has a 7×7 map. Player 1 is red and player 2 is green. There are 4 blue cells on the left of the map, which are goals for player 2, and 4 purple cells on the right of the map, which are objects for player 1. This game resembles a businessman recommending goods to his current customer. Player 2 again has a private distribution over the 4 goals; this distribution is its type, and the prior is a Dirichlet distribution with four parameters equal to 0.5. Player 2 samples a goal from its type distribution and goes directly to it.
Player 1 needs to recommend one of the 4 purple objects to player 2. The game ends when player 1 reaches one of the objects or after 16 steps. Player 1 only receives a reward when it reaches an object. Let the vertical coordinate of player 2's goal be y_2 and that of the recommended object be y_1. The reward for player 1 is then a sample from the Gaussian distribution N(µ, 1), where µ = 10 − (3/2)|y_1 − y_2|. The experimental details are almost the same as for the blocking game, except that we choose player 2's goal as the value to predict. Figure 4b demonstrates again that MOA performs well during meta-training. Figure 4c shows that MOA outperforms the other three methods during the testing process; MOA is indeed sample-efficient. The results for the recommending game are similar to those for the chasing and blocking games; the random rewards simply add more variance to the training and testing process. Finally, we test all four methods on 100 opponents sampled from the opponent distribution. For each opponent, we play only one game; that is, we do not conduct any learning for the new opponents. The results for all three games are given in Table 1. As the table shows, MOA reaches relatively high rewards while the other methods perform poorly. These results demonstrate that MOA gains prior information from the meta-learning process. 6 CONCLUSION When facing other agents, it is beneficial to build models of the opponents and find a corresponding good policy. Doing so can be sample-inefficient, since it takes many observations to build the models and then learn a policy. We propose a method that employs information learned from experience with other opponents to speed up learning against the current opponent. This method is suitable for many practical situations in which the opponent population has a relatively stable distribution over policies. We apply meta-learning to jointly train the opponent-modeling and policy-improvement processes. Experimental results show that our method is sample-efficient.
1. What is the main contribution of the paper, and how does it apply MAML to a multi-agent setting? 2. What are the strengths and weaknesses of the proposed approach, particularly regarding technical details and concerns? 3. How does the reviewer assess the clarity and quality of the paper's content, including grammatical mistakes and missing baselines? 4. What are some recent multi-agent algorithms that the author should have compared and contrasted their method with? 5. Do you have any questions regarding the paper's results and experimental design?
Review
Review This paper proposes to apply MAML to a multi-agent setting. In this formulation each opponent corresponds to a task and two separate parts of the policy are learned via meta-learning: 1) the opponent modelling network that predicts the value function for a given opponent based on past actions and states. 2) the policy network which takes in the state and the predicted value function of the opponent. The main concern with this paper is the lack of technical detail and an important missing baseline. The paper also suffers from a lack of clarity due to a large number of grammatical mistakes. Technical detail and concerns: The paper mentions Duelling DQN as the RL algorithm in the inner loop. This is very unusual and it's a priori unclear whether MAML with DQN in the inner loop is a sensible algorithm. For example, DQN relies both on a target network and an argmax operator which seem to violate the differentiability requirements needed for MAML regarding higher order gradients. The authors entirely miss this and fail to address possible concerns. The authors also fail to provide any details regarding the exploration scheme used. In fact, a value function is never mentioned, instead the authors talk about a policy pi^a_i, leaving it unclear how this policy is derived from the value function. When the Q-function takes as input the true opponent, there is no need for meta-learning of the policy: Given a known opponent, the tuple (s_t, opponent) defines a Markov state. As far as I could gather from the paper, the authors are missing a baseline which simply learns a single Q-function across all opponents (rather than meta-learning it per opponent) that takes as input the predicted opponent. My expectation is that this is more or less what is happening in the paper. The authors also fail to compare and contrast their method to a number of recent multi-agent algorithms, e.g., MADDPG, COMA and LOLA. Furthermore, the results are extremely toy and seem to be for single runs, rendering them insignificant. While the idea itself is interesting, the above concerns render the paper unsuitable for publication in its current form.
ICLR
Title Experience replay for continual learning Abstract Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of on- and off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one. 1 INTRODUCTION Modern day reinforcement learning (RL) has benefited substantially from a massive influx of computational resources. In some instances, the number of data points to feed into RL algorithms has kept in step with computational feasibility. For example, in simulation environments or in self-play RL, it is possible to generate fresh data on the fly. In such settings, the continual learning problem (Ring, 1997) is often ignored because new experiences can be collected on demand, and the start states of the simulation can be controlled. When training on multiple tasks, it is possible to train on all environments simultaneously within the same data batch. As RL is increasingly applied to problems in industry or other real-world settings, however, it is necessary to consider cases, such as robotics, where gathering new experience is expensive or difficult. In such examples, simultaneous training may be infeasible. Instead, an agent must be able to learn from only one task at a time. The time spent on different tasks and the sequence in which those tasks occur are not under the control of the agent. The boundaries between tasks, in fact, will often be unknown – or tasks will deform continuously and not have definite boundaries at all. Such a paradigm for training eliminates the possibility of simultaneously acting upon and learning from several tasks, and leads to the danger of catastrophic forgetting, wherein an agent forgets what it has learned previously when it encounters a new situation. Here, we consider the setting of reinforcement learning where compute and memory resources are large, but the environment is not stationary: this may arise because an RL agent is encountering a task curriculum or sequence of unrelated tasks, engaged in a budgeted physical interaction within a robot, or learning from unstructured interaction with humans.
In this setting, the problem of continual learning rears its head: the distribution over experiences is not controlled to facilitate the agent’s maintenance of previously acquired ability. An ideal continual learning system should meet three requirements. First, it should retain previously learned capacities. When a previously encountered task or situation is encountered, performance should immediately be good – ideally as good as it was historically. Second, maintenance of old skills or knowledge should not inhibit further rapid acquisition of a new skill or knowledge. These two simultaneous constraints – maintaining the old while still adapting to the new – represent the challenge known as the stability-plasticity dilemma Grossberg (1982). Third, where possible, a continual learning system should learn new skills that are related to old ones faster than it would have de novo, a property known as constructive interference or positive transfer. We here demonstrate the surprising power of a simple approach: Continual Learning with Experience And Replay (CLEAR). We show that training a network on a mixture of novel experience on-policy and replay experience off-policy allows for both maintenance of performance on earlier tasks and fast adaptation to new tasks. A significant further boost in performance and reduction in catastrophic forgetting is obtained by enforcing behavioral cloning between the current policy and its past self. While memory is rarely severely limited in modern RL, we show that small replay buffers filled with uniform samples from past experiences can be almost as effective as buffers of unbounded size. When comparing CLEAR against state-of-the-art approaches for reducing catastrophic forgetting, we obtain better or comparable results, despite the relative simplicity of our approach; yet, crucially, CLEAR requires no information about the identity of tasks or boundaries between them. 2 RELATED WORK The problem of catastrophic forgetting in neural networks has long been recognized (Grossberg, 1982), and it is known that rehearsing past data can be a satisfactory antidote for some purposes (McClelland, 1998; French, 1999). Consequently, in the supervised setting that is the most common paradigm in machine learning, catastrophic forgetting has been accorded less attention than in cognitive science or neuroscience, since a fixed dataset can be reordered and replayed as necessary to ensure high performance on all samples. In recent years, however, there has been renewed interest in overcoming catastrophic forgetting in RL contexts and in supervised learning from streaming data (Parisi et al., 2018). Current strategies for mitigating catastrophic forgetting have primarily focused on schemes for protecting the parameters inferred in one task while training on another. For example, in Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), weights important for past tasks are constrained to change more slowly while learning new tasks. The Progressive Networks approach (Rusu et al., 2016) freezes subnetworks trained on individual tasks, and Progress & Compress (Schwarz et al., 2018) uses EWC to consolidate the network after each task has been learned. Kaplanis et al. (2018) treat individual synaptic weights as dynamical systems with latent dimensions / states that protect information. Outside of RL, Zenke et al. 
(2017) develop a method similar to EWC that maintains estimates of the importance of weights for past tasks, Li & Hoiem (2017) leverage a mixture of task-specific and shared parameters, and Milan et al. (2016) develop a rigorous Bayesian approach for estimating unknown task boundaries. Notably, all these methods assume that task identities or boundaries are known, with the exception of Milan et al. (2016), for which the approach is likely not scalable to highly complex tasks. Rehearsing old data via experience replay buffers is a common technique in RL. However, their introduction has primarily been driven by the goal of data-efficient learning on single tasks (Lin, 1992; Mnih et al., 2015; Gu et al., 2017). Research in this vein has included prioritized replay for maximizing the impact of rare experiences (Schaul et al., 2016), learning from human demonstration data seeded into a buffer (Hester et al., 2017), and methods for approximating replay buffers with generative models (Shin et al., 2017). A noteworthy use of experience replay buffers to protect against catastrophic forgetting was demonstrated in Isele & Cosgun (2018) on toy tasks, with a focus on how buffers can be made smaller. Previous works (Gu et al., 2017; O’Donoghue et al., 2016; Wang et al., 2016) have explored mixing on- and off-policy updates in RL, though these were focused on speed and stability in individual tasks and did not examine continual learning. Here, in CLEAR, we demonstrate that a mixture of replay data and fresh experience protects against catastrophic forgetting while also permitting fast learning, and performs better than either pure on-policy learning or pure off-policy learning from replay. We provide a thorough investigation and robust algorithm in CLEAR, conducting a wide variety of tests on both limited-size and unbounded buffers with complex RL tasks using state-of-the-art methods, and improve the stability of simple replay with the addition of behavioral cloning. 3 THE CLEAR METHOD CLEAR uses actor-critic training on a mixture of new and replayed experiences. In the case of replay experiences, two additional loss terms are added to induce behavioral cloning between the network and its past self. The motivation for behavioral cloning is to prevent network output on replayed tasks from drifting while learning new tasks. We penalize (1) the KL divergence between the historical policy distribution and the present policy distribution, and (2) the L2 norm of the difference between the historical and present value functions. Formally, this corresponds to adding the following loss functions, defined with respect to network parameters θ: L_policy-cloning := Σ_a µ(a|h_s) log [µ(a|h_s) / π_θ(a|h_s)], L_value-cloning := ||V_θ(h_s) − V_replay(h_s)||_2^2, where π_θ denotes the (current) policy of the network over actions a, µ the policy generating the observed experience, and h_s the hidden state of the network at time s. Note that computing KL[µ||π_θ] instead of KL[π_θ||µ] ensures that π_θ(a|h_s) is nonzero wherever the historical policy is as well. We apply CLEAR in a distributed training context based on the Importance Weighted Actor-Learner Architecture (Espeholt et al., 2018). A single learning network is fed experiences (both novel and replay) by a number of acting networks, for which the weights are asynchronously updated to match those of the learner. The network architecture and hyperparameters are chosen as in Espeholt et al. (2018). Training proceeds according to V-Trace.
Namely, define the V-Trace target v_s by: v_s := V(h_s) + Σ_{t=s}^{s+n−1} γ^{t−s} (Π_{i=s}^{t−1} c_i) δ_t V, where δ_t V := ρ_t (r_t + γ V(h_{t+1}) − V(h_t)), c_i := min(c̄, π_θ(a_i|h_i)/µ(a_i|h_i)), and ρ_t := min(ρ̄, π_θ(a_t|h_t)/µ(a_t|h_t)), for constants c̄ and ρ̄. Then, the value function update is given by the L2 loss: L_value := (V_θ(h_s) − v_s)^2. The policy gradient loss is: L_policy-gradient := −ρ_s log π_θ(a_s|h_s) (r_s + γ v_{s+1} − V_θ(h_s)). We also use an entropy loss: L_entropy := Σ_a π_θ(a|h_s) log π_θ(a|h_s). The loss functions L_value, L_policy-gradient, and L_entropy are applied both for new and replay experiences. In addition, we add L_policy-cloning and L_value-cloning for replay experiences only. In general, our experiments use a 50-50 mixture of novel and replay experiences, though performance does not appear to be very sensitive to this ratio. Further implementation details are given in Appendix A. 4 RESULTS 4.1 CATASTROPHIC FORGETTING VS. INTERFERENCE Our first experiment (Figure 1) was designed to distinguish between two distinct concepts that are sometimes conflated, interference and catastrophic forgetting, and to emphasize the outsized role of the latter as compared to the former. Interference occurs when two or more tasks are incompatible (destructive interference) or mutually helpful (constructive interference) within the same model. Catastrophic forgetting occurs when a task’s performance goes down not as a result of incompatibility with another task but as a result of the second task overwriting it within the model. As we aim to illustrate, the two are independent phenomena, and while interference may happen, forgetting is ubiquitous. We considered a set of three distinct tasks within the DMLab set of environments (Beattie et al., 2016), and compared three training paradigms on which a network may be trained to perform these three tasks: (1) Training networks on the individual tasks separately, (2) training a single network on examples from all tasks simultaneously (which permits interference among tasks), and (3) training a single network sequentially on examples from one task, then the next task, and so on cyclically. Across all training protocols, the total amount of experience for each task was held constant. Thus, for separate networks training on separate tasks, the x-axis in our plots shows the total number of environment frames summed across all tasks. For example, at three million frames, one million were on task 1, one million on task 2, and one million on task 3. This allows a direct comparison to simultaneous training, in which the same network was trained on all three tasks. We observe that in DMLab, there is very little difference between separate and simultaneous training. This indicates minimal interference between tasks. If anything, there is a small amount of constructive interference, with simultaneous training performing slightly better than separate training. We assume this is a result of (i) commonalities in image processing required across different tasks, and (ii) certain basic exploratory behaviors, e.g., moving around, that are advantageous across tasks. (By contrast, destructive interference might result from incompatible behaviors or from insufficient model capacity.) By contrast, there is a large difference between either of the above modes of training and sequential training, where performance on a task decays immediately when training switches to another task – that is, catastrophic forgetting.
Note that the performance of the sequential training appears at some points to be greater than that of separate training. This is purely because in sequential training, training proceeds exclusively on a single task, then exclusively on another task. For example, the first task quickly increases in performance since the network is effectively seeing three times as much data on that task as the networks training on separate or simultaneous tasks. 4.2 CLEAR We here demonstrate the efficacy of CLEAR for diminishing catastrophic forgetting (Figure 2). We apply CLEAR to the cyclically repeating sequence of DMLab tasks used in the preceding experiment. Our method effectively eliminates forgetting on all three tasks, while preserving overall training performance (see “Sequential” training in Figure 1 for reference). When the task switches, there is little, if any, dropoff in performance when using CLEAR, and the network picks up immediately where it left off once a task returns later in training. Without behavioral cloning, the mixture of new experience and replay still reduces catastrophic forgetting, though the effect is reduced. 4.3 BALANCE OF ON- AND OFF-POLICY LEARNING In this experiment (Figure 3), we consider the ratio of new examples to replay examples during training. Using 100% new examples is simply standard training, which as we have seen is subject to dramatic catastrophic forgetting. At 75-25 new-replay, there is already significant resistance to forgetting. At the opposite extreme, 100% replay examples is extremely resistant to catastrophic forgetting, but at the expense of a (slight) decrease in performance attained. We believe that 50-50 new-replay represents a good tradeoff, combining significantly reduced catastrophic forgetting with no appreciable decrease in performance attained. Unless otherwise stated, our experiments on CLEAR will use a 50-50 split of new and replay data in training. It is notable that it is possible to train purely on replay examples, since the network has essentially no on-policy learning. In fact, the figure shows that with 100% replay, performance on each task increases throughout, even when on-policy learning is being applied to a different task. Just as Figure 2 shows the importance of behavioral cloning for maintaining past performance on a task, so this experiment shows that off-policy learning can actually increase performance from replay alone. Both ingredients are necessary for the success of CLEAR. 4.4 LIMITED-SIZE BUFFERS In some cases, it may be impractical to store all past experiences in the replay buffer. We therefore test the efficacy of buffers that have capacity for only a relatively small number of experiences (Figure 4). Once the buffer is full, we use reservoir sampling to decide when to replace elements of the buffer with new experiences (Isele & Cosgun, 2018) (see details in Appendix A). Thus, at each point in time, the buffer contains a (fixed size) sample uniformly at random of all past experiences. We consider a sequence of tasks with 900 million environmental frames, comparing a large buffer of capacity 450 million to two small buffers of capacity 5 and 50 million. We find that all buffers perform well and conclude that it is possible to learn and reduce catastrophic forgetting even with a replay buffer that is significantly smaller than the total number of experiences. Decreasing the buffer size to 5 million results in a slight decrease in robustness to catastrophic forgetting.
This may be due to over-fitting to the limited examples present in the buffer, on which the learner trains disproportionately often. 4.5 LEARNING A NEW TASK QUICKLY It is a reasonable worry that relying on a replay buffer could cause new tasks to be learned more slowly as the new task data will make up a smaller and smaller portion of the replay buffer as the buffer gets larger. In this experiment (Figure 5), we find that this is not a problem for CLEAR, relying as it does on a mixture of off- and on-policy learning. Specifically, we find the performance attained on a task is largely independent of the amount of data stored in the buffer and of the identities of the preceding tasks. We consider a cyclically repeating sequence of three DMLab tasks. At different points in the sequence, we insert a fourth DMLab task as a “probe”. We find that the performance attained on the probe task is independent of the point at which it is introduced within the training sequence. This is true both for normal training and for CLEAR. Notably, CLEAR succeeds in greatly reducing catastrophic forgetting for all tasks, and the effect on the probe task does not diminish as the probe task is introduced later on in the training sequence. See also Appendix B for an experiment demonstrating that pure off-policy learning performs quite differently in this setting. 4.6 COMPARISON TO P&C AND EWC Finally, we compare our method to Progress & Compress (P&C) (Schwarz et al., 2018) and Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), state-of-the-art methods for reducing catastrophic forgetting that, unlike replay, assume that the boundaries between different tasks are known (Figure 6). We use exactly the same sequence of Atari tasks as the authors of P&C (Schwarz et al., 2018), with the same time spent on each task. Likewise, the network and hyperparameters we use are designed to match exactly those used in Schwarz et al. (2018). This is simplified by the authors of P&C also using a training paradigm based on that in Espeholt et al. (2018). In this case, we use CLEAR with a 75-25 balance of new-replay experience. We find that we obtain comparable performance to P&C and better performance than EWC, despite CLEAR being significantly simpler and agnostic to the boundaries between tasks. On tasks krull, hero, and ms pacman, we obtain significantly higher performance than P&C (as well as EWC), while on beam rider and star gunner, P&C obtains higher performance. It is worth noting that though the on-policy model (baseline) experiences significant catastrophic forgetting, it also rapidly re-acquires its previous performance after re-exposure to the task; this allows baseline to be cumulatively better than EWC on some tasks (as is noted in the original paper Kirkpatrick et al. (2017)). An alternative plot of this experiment, showing cumulative performance on each task, is presented in Appendix C. 5 DISCUSSION Some version of replay is believed to be present in biological brains. We do not believe that our implementation is reflective of neurobiology, though there are potential connections; hippocampal replay has been proposed as a systems-level mechanism to reduce catastrophic forgetting and improve generalization as in the theory of complementary learning systems (McClelland, 1998). This contrasts to some degree with synapse-level consolidation, which is also believed to be present in biology (Benna & Fusi, 2016), but is more like continual learning methods that protect parameters.
Indeed, algorithms for continual learning may live on a Pareto frontier: different methods may have different regimes of applicability. In cases for which storing a large memory buffer is truly prohibitive, methods that protect inferred parameters, such as Progress & Compress, may be more suitable than replay methods. When task identities are available or boundaries between tasks are very clear, leveraging this information may reduce memory or computational demands or be useful to alert the agent to engage in rapid learning. Further, there exist training scenarios that are adversarial either to our method or to any method that prevents forgetting. For example, if the action space of a task were changed during training, fitting to the old policy’s action distribution, whether through behavioral cloning, off-policy learning, weight protection, or any of a number of other strategies for preventing catastrophic forgetting, could have a deleterious effect on future performance. For such cases, we may need to develop algorithms that selectively protect skills as well as forget them. We have explored CLEAR in a range of continual learning scenarios; we hope that some of the experimental protocols, such as probing with a novel task at varied positions in a training sequence, may inspire other research. Moving forward, we anticipate many algorithmic innovations that build on the ideas set forward here. For example, weight-consolidation techniques such as Progress & Compress are quite orthogonal to our approach and could be married with it for further performance gains. Moreover, while the V-Trace algorithm we use is effective at off-policy correction for small shifts between the present and past policy distributions, it is possible that off-policy approaches leveraging Q-functions, such as Retrace (Munos et al., 2016), may prove more powerful still. We have described a simple but powerful approach for preventing catastrophic forgetting in continual learning settings. CLEAR uses on-policy learning on fresh experiences to adapt rapidly to new tasks, while using off-policy learning with behavioral cloning on replay experience to maintain and modestly enhance performance on past tasks. Behavioral cloning on replay data further enhances the agent’s stability. Our method is simple, scalable, and practical; it takes advantage of the general abundance of memory and storage in modern computers and computing facilities. We believe that the broad applicability and simplicity of the approach make CLEAR a candidate “first line of defense” against catastrophic forgetting in many RL contexts. A.1 DISTRIBUTED SETUP Our training setup was based on that of Espeholt et al. (2018), with multiple actors and a single learner. The actors (which run on CPU) generate training examples, which are then sent to the learner. Weight updates made by the learner are propagated asynchronously to the actors. The workflows for each actor and for the learner are described below in more detail. Actor. A training episode (unroll) is generated and inserted into the actor’s buffer. Reservoir sampling is used (see further details below) if the buffer has reached its maximum capacity. The actor then samples another unroll from the buffer. The new unroll and replay unroll are both fed into a queue of examples that are read by the learner. The actor waits until its last example in the queue has been read before creating another. Learner. Each element of a batch is a pair (new unroll, replay unroll) from the queue provided by actors. 
Thus, the number of new unrolls and the number of replay unrolls both equal the entire batch size. Depending on the buffer utilization hyperparameter (see Figure 3), the learner uses a balance of new and replay examples, taking either the new unroll or the replay unroll from each pair. Thus, no actor contributes more than a single example to the batch (reducing the variance of batches). A.2 NETWORK For our DMLab experiments, we used the same network as in the DMLab experiments of Espeholt et al. (2018). We selected the shallower of the models considered there (a network based on Mnih et al. (2015)), omitting the additional LSTM module used for processing textual input since none of the tasks we considered included such input. For Atari, we used the same network in Progress & Compress (Schwarz et al., 2018) (which is also based on Espeholt et al. (2018)), also copying all hyperparameters. A.3 BUFFERS Our replay buffer stores all information necessary for the V-Trace algorithm, namely the input presented by the environment, the output logits of the network, the value function output by the network, the action taken, and the reward obtained. Leveraging the distributed setup, the buffer is split among all actors equally, so that, for example, if the total buffer size were one million across a hundred actors, then each actor would have buffer capacity of ten thousand. All buffer sizes are measured in environment frames (not in numbers of unrolls), in keeping with the x-axis of our training plots. For baseline experiments, no buffer was used, while all other parameters and the network remained constant. Unless otherwise specified, the replay buffer was capped at half the number of environment frames on which the network is trained. This is by design – to show that even past the buffer capacity, replay continues to prevent catastrophic forgetting. When the buffer fills up, then new unrolls are added by reservoir sampling, so that the buffer at any given point contains a uniformly random sample of all unrolls up until the present time. Reservoir sampling is implemented as in Isele & Cosgun (2018) by having each unroll associated with a random number between 0 and 1. A threshold is initialized to 0 and rises with time so that the number of unrolls above the threshold is fixed at the capacity of the buffer. Each unroll is either stored or abandoned in its entirety; no unroll is partially stored, as this would preclude training. A.4 TRAINING Training was conducted using V-Trace, with hyperparameters on DMLab/Atari tasks set as in Espeholt et al. (2018). Behavioral cloning loss functions Lpolicy-cloning and Lvalue-cloning were added in some experiments with weights of 0.01 and 0.005, respectively. The established loss functions Lpolicy-gradient, Lvalue, and Lentropy were applied with weights of 1, 0.5, and ≈0.005, in keeping with Espeholt et al. (2018). No significant effort was made to optimize fully the hyperparameters for CLEAR. A.5 EVALUATION We evaluate each network during training on all tasks, not simply that task on which it is currently being trained. Evaluation is performed by pools of testing actors, with a separate pool for each task in question. Each pool of testing actors asynchronously updates its weights to match those of the learner, similarly to the standard (training) actors used in our distributed learning setup. 
The key differences are that each testing actor (i) has no replay buffer, (ii) does not feed examples to the learner for training, (iii) runs on its designated task regardless of whether this task is the one currently in use by training actors. A.6 EXPERIMENTS In many of our experiments, we consider tasks that change after a specified number of learning episodes. The total number of episodes is monitored by the learner, and all actors switch between tasks simultaneously at the designated point, henceforward feeding examples to the learner based on experiences on the new task (as well as replay examples). Each experiment was run independently three times; figures plot the mean performance across runs, with error bars showing the standard deviation. B PROBE TASK WITH 100% REPLAY Our goal in this experiment was to investigate more thoroughly than Figure 3 to what extent the mixture of on- and off-policy learning is necessary, instead of pure off-policy learning, in learning new tasks swiftly. We rerun our “probe task” experiments (Section 4.5), where the DMLab task natlab varying map randomized is presented at different positions in a cyclically repeating sequence of other DMLab tasks. In this case, however, we use CLEAR with 100% (off-policy) replay experience. We observe that, unlike in the original experiment (Figure 5), the performance obtained on the probe task natlab varying map randomized deteriorates markedly as it appears later in the sequence of tasks. For later positions in the sequence, the probe task comprises a smaller percentage of replay experience, thereby impeding purely off-policy learning. This result underlines why CLEAR uses new experience, as well as replay, to allow rapid learning of new tasks. C FIGURES REPLOTTED ACCORDING TO CUMULATIVE SUM In this section, we replot the results of our main experiments, so that the y-axis shows the mean cumulative reward obtained on each task during training; that is, the reward shown for time t is the average (1/t) ∑ s<t rs. This makes it easier to compare performance between models, though it smoothes out the individual periods of catastrophic forgetting. We also include tables comparing the values of the final cumulative rewards at the end of training.
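The cumulative-mean replotting described in Appendix C is just a running average of the per-step rewards. The following is a minimal sketch of that computation, with an illustrative function name and assuming a 1-D array of rewards; it is not code from the paper.

import numpy as np

def cumulative_mean_reward(rewards):
    # For each time t (1-indexed), return (1/t) * sum_{s<t} r_s, as plotted in Appendix C.
    rewards = np.asarray(rewards, dtype=float)
    return np.cumsum(rewards) / np.arange(1, len(rewards) + 1)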
1. What is the main contribution of the paper regarding continual learning? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the novelty and significance of the paper's findings? 4. Are there any concerns or suggestions regarding the experimental analysis and comparisons with other methods? 5. How does the reviewer evaluate the paper's potential impact on the field of lifelong learning and continual improvement?
Review
Review This paper proposes a particular variant of experience replay with behavior cloning as a method for continual learning. The approach achieves good performance while not requiring a task label. This paper makes a point that I definitely agree with: all of the approaches being considered should compare to experience replay, and in reality many of them rarely do better. However, I am not totally convinced when it comes to the value of the actual novel aspects of this paper. Much of the empirical analysis of experience replay (i.e. the buffer size, the ratio of past and novel experiences, etc.) was not surprising or particularly novel in my eyes. The idea of using behavior cloning is motivated fully through the lens of catastrophic forgetting and promoting stability, and does not at all address achieving plasticity. This was interesting to me as the authors do mention the stability-plasticity dilemma, but a more theoretical analysis of why behavior cloning is somehow the right method among various choices to promote stability while not sacrificing or improving plasticity was definitely missing for me. Other options can certainly be considered as well if the aim is just to add stability to experience replay, such as a notion of weight importance for the past as in EWC (Kirkpatrick et al., 2017) and many other papers, or using knowledge distillation like LwF (Li and Hoiem, 2016). LwF in particular seems quite related. I wonder how LwF + experience replay compares to the approach proposed here. In general, the discourse could become a lot stronger in my eyes if it really considered various alternatives and explained why behavior cloning provides theoretical value. Overall, behavior cloning seems to help a little bit based on the experiments provided, but this finding is very likely indicative of the particular problem setting and seemingly not really a game changer. In the paper, they explore settings with fairly prolonged periods of training in each RL domain one at a time. If the problem were to become more non-stationary with more frequent switching (i.e. more in line with the motivation of lifelong learning), I would imagine that increasing stability is not necessarily a good thing and may slow down future learning.
ICLR
Title Experience replay for continual learning Abstract Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of onand off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one. 1 INTRODUCTION Modern day reinforcement learning (RL) has benefited substantially from a massive influx of computational resources. In some instances, the number of data points to feed into RL algorithms has kept in step with computational feasibility. For example, in simulation environments or in self-play RL, it is possible to generate fresh data on the fly. In such settings, the continual learning problem (Ring, 1997) is often ignored because new experiences can be collected on demand, and the start states of the simulation can be controlled. When training on multiple tasks, it is possible to train on all environments simultaneously within the same data batch. As RL is increasingly applied to problems in industry or other real-world settings, however, it is necessary to consider cases, such as robotics, where gathering new experience is expensive or difficult. In such examples, simultaneous training may be infeasible. Instead, an agent must be able to learn from only one task at a time. The time spent on different tasks and the sequence in which those tasks occur are not under the control of the agent. The boundaries between tasks, in fact, will often be unknown – or tasks will deform continuously and not have definite boundaries at all. Such a paradigm for training eliminates the possibility of simultaneously acting upon and learning from several tasks, and leads to the danger of catastrophic forgetting, wherein an agent forgets what it has learned previously when it encounters a new situation. Here, we consider the setting of reinforcement learning where compute and memory resources are large, but the environment is not stationary: this may arise because an RL agent is encountering a task curriculum or sequence of unrelated tasks, engaged in a budgeted physical interaction within a robot, or learning from unstructured interaction with humans. 
In this setting, the problem of continual learning rears its head: the distribution over experiences is not controlled to facilitate the agent’s maintenance of previously acquired ability. An ideal continual learning system should meet three requirements. First, it should retain previously learned capacities. When a previously encountered task or situation is encountered, performance should immediately be good – ideally as good as it was historically. Second, maintenance of old skills or knowledge should not inhibit further rapid acquisition of a new skill or knowledge. These two simultaneous constraints – maintaining the old while still adapting to the new – represent the challenge known as the stability-plasticity dilemma Grossberg (1982). Third, where possible, a continual learning system should learn new skills that are related to old ones faster than it would have de novo, a property known as constructive interference or positive transfer. We here demonstrate the surprising power of a simple approach: Continual Learning with Experience And Replay (CLEAR). We show that training a network on a mixture of novel experience on-policy and replay experience off-policy allows for both maintenance of performance on earlier tasks and fast adaptation to new tasks. A significant further boost in performance and reduction in catastrophic forgetting is obtained by enforcing behavioral cloning between the current policy and its past self. While memory is rarely severely limited in modern RL, we show that small replay buffers filled with uniform samples from past experiences can be almost as effective as buffers of unbounded size. When comparing CLEAR against state-of-the-art approaches for reducing catastrophic forgetting, we obtain better or comparable results, despite the relative simplicity of our approach; yet, crucially, CLEAR requires no information about the identity of tasks or boundaries between them. 2 RELATED WORK The problem of catastrophic forgetting in neural networks has long been recognized (Grossberg, 1982), and it is known that rehearsing past data can be a satisfactory antidote for some purposes (McClelland, 1998; French, 1999). Consequently, in the supervised setting that is the most common paradigm in machine learning, catastrophic forgetting has been accorded less attention than in cognitive science or neuroscience, since a fixed dataset can be reordered and replayed as necessary to ensure high performance on all samples. In recent years, however, there has been renewed interest in overcoming catastrophic forgetting in RL contexts and in supervised learning from streaming data (Parisi et al., 2018). Current strategies for mitigating catastrophic forgetting have primarily focused on schemes for protecting the parameters inferred in one task while training on another. For example, in Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), weights important for past tasks are constrained to change more slowly while learning new tasks. The Progressive Networks approach (Rusu et al., 2016) freezes subnetworks trained on individual tasks, and Progress & Compress (Schwarz et al., 2018) uses EWC to consolidate the network after each task has been learned. Kaplanis et al. (2018) treat individual synaptic weights as dynamical systems with latent dimensions / states that protect information. Outside of RL, Zenke et al. 
(2017) develop a method similar to EWC that maintains estimates of the importance of weights for past tasks, Li & Hoiem (2017) leverage a mixture of task-specific and shared parameters, and Milan et al. (2016) develop a rigorous Bayesian approach for estimating unknown task boundaries. Notably, all these methods assume that task identities or boundaries are known, with the exception of Milan et al. (2016), for which the approach is likely not scalable to highly complex tasks. Rehearsing old data via experience replay buffers is a common technique in RL. However, their introduction has primarily been driven by the goal of data-efficient learning on single tasks (Lin, 1992; Mnih et al., 2015; Gu et al., 2017). Research in this vein has included prioritized replay for maximizing the impact of rare experiences (Schaul et al., 2016), learning from human demonstration data seeded into a buffer (Hester et al., 2017), and methods for approximating replay buffers with generative models (Shin et al., 2017). A noteworthy use of experience replay buffers to protect against catastrophic forgetting was demonstrated in Isele & Cosgun (2018) on toy tasks, with a focus on how buffers can be made smaller. Previous works (Gu et al., 2017; O’Donoghue et al., 2016; Wang et al., 2016) have explored mixing on- and off-policy updates in RL, though these were focused on speed and stability in individual tasks and did not examine continual learning. Here, in CLEAR, we demonstrate that a mixture of replay data and fresh experience protects against catastrophic forgetting while also permitting fast learning, and performs better than either pure on-policy learning or pure off-policy learning from replay. We provide a thorough investigation and robust algorithm in CLEAR, conducting a wide variety of tests on both limited-size and unbounded buffers with complex RL tasks using state-of-the-art methods, and improve the stability of simple replay with the addition of behavioral cloning. 3 THE CLEAR METHOD CLEAR uses actor-critic training on a mixture of new and replayed experiences. In the case of replay experiences, two additional loss terms are added to induce behavioral cloning between the network and its past self. The motivation for behavioral cloning is to prevent network output on replayed tasks from drifting while learning new tasks. We penalize (1) the KL divergence between the historical policy distribution and the present policy distribution, and (2) the L2 norm of the difference between the historical and present value functions. Formally, this corresponds to adding the following loss functions, defined with respect to network parameters θ: L_{\text{policy-cloning}} := \sum_a \mu(a|h_s) \log \frac{\mu(a|h_s)}{\pi_\theta(a|h_s)}, \qquad L_{\text{value-cloning}} := \lVert V_\theta(h_s) - V_{\text{replay}}(h_s) \rVert_2^2, where π_θ denotes the (current) policy of the network over actions a, µ the policy generating the observed experience, and h_s the hidden state of the network at time s. Note that computing KL[µ||π_θ] instead of KL[π_θ||µ] ensures that π_θ(a|h_s) is nonzero wherever the historical policy is as well. We apply CLEAR in a distributed training context based on the Importance Weighted Actor-Learner Architecture (Espeholt et al., 2018). A single learning network is fed experiences (both novel and replay) by a number of acting networks, for which the weights are asynchronously updated to match those of the learner. The network architecture and hyperparameters are chosen as in Espeholt et al. (2018). Training proceeds according to V-Trace. Namely, define the V-Trace target v_s by: v_s := V(h_s) + \sum_{t=s}^{s+n-1} \gamma^{t-s} \left( \prod_{i=s}^{t-1} c_i \right) \delta_t V, where \delta_t V := \rho_t \left( r_t + \gamma V(h_{t+1}) - V(h_t) \right), c_i := \min\left( \bar{c}, \frac{\pi_\theta(a_i|h_i)}{\mu(a_i|h_i)} \right), and \rho_t := \min\left( \bar{\rho}, \frac{\pi_\theta(a_t|h_t)}{\mu(a_t|h_t)} \right), for constants \bar{c} and \bar{\rho}. Then, the value function update is given by the L2 loss: L_{\text{value}} := \left( V_\theta(h_s) - v_s \right)^2. The policy gradient loss is: L_{\text{policy-gradient}} := -\rho_s \log \pi_\theta(a_s|h_s) \left( r_s + \gamma v_{s+1} - V_\theta(h_s) \right). We also use an entropy loss: L_{\text{entropy}} := \sum_a \pi_\theta(a|h_s) \log \pi_\theta(a|h_s). The loss functions L_{\text{value}}, L_{\text{policy-gradient}}, and L_{\text{entropy}} are applied both for new and replay experiences. In addition, we add L_{\text{policy-cloning}} and L_{\text{value-cloning}} for replay experiences only. In general, our experiments use a 50-50 mixture of novel and replay experiences, though performance does not appear to be very sensitive to this ratio. Further implementation details are given in Appendix A.
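To make these definitions concrete, the sketch below restates the V-Trace target and the two behavioral-cloning penalties in NumPy for a single unroll. It is only an illustrative re-statement of the equations above, not the authors' distributed implementation; the array shapes, the softmax helper, and the function names are assumptions made for this example.

import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def vtrace_targets(values, bootstrap_value, rewards, log_rhos,
                   gamma=0.99, rho_bar=1.0, c_bar=1.0):
    # values, rewards, log_rhos: length-T float arrays for one unroll;
    # log_rhos[t] = log pi_theta(a_t | h_t) - log mu(a_t | h_t).
    rhos = np.minimum(rho_bar, np.exp(log_rhos))               # clipped importance weights rho_t
    cs = np.minimum(c_bar, np.exp(log_rhos))                   # clipped trace coefficients c_i
    next_values = np.append(values[1:], bootstrap_value)
    deltas = rhos * (rewards + gamma * next_values - values)   # delta_t V
    vs_minus_v = np.zeros_like(values)
    acc = 0.0
    for t in reversed(range(len(values))):                     # backward recursion over the unroll
        acc = deltas[t] + gamma * cs[t] * acc
        vs_minus_v[t] = acc
    return values + vs_minus_v                                 # the targets v_s

def behavioral_cloning_losses(policy_logits, replay_logits, values, replay_values):
    # policy_logits: current pi_theta(.|h_s); replay_logits, replay_values: stored in the buffer.
    pi = softmax(policy_logits)
    mu = softmax(replay_logits)
    policy_cloning = np.sum(mu * (np.log(mu) - np.log(pi)), axis=-1)   # KL[mu || pi_theta] per step
    value_cloning = (values - replay_values) ** 2                      # squared L2 per step
    return policy_cloning.sum(), value_cloning.sum()

During training, L_value, L_policy-gradient, and L_entropy would be computed from these targets for both new and replayed unrolls, with the two cloning terms added (with the small weights listed in Appendix A.4) for replayed unrolls only.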
4 RESULTS 4.1 CATASTROPHIC FORGETTING VS. INTERFERENCE Our first experiment (Figure 1) was designed to distinguish between two distinct concepts that are sometimes conflated, interference and catastrophic forgetting, and to emphasize the outsized role of the latter as compared to the former. Interference occurs when two or more tasks are incompatible (destructive interference) or mutually helpful (constructive interference) within the same model. Catastrophic forgetting occurs when a task’s performance goes down not as a result of incompatibility with another task but as a result of the second task overwriting it within the model. As we aim to illustrate, the two are independent phenomena, and while interference may happen, forgetting is ubiquitous. We considered a set of three distinct tasks within the DMLab set of environments (Beattie et al., 2016), and compared three training paradigms on which a network may be trained to perform these three tasks: (1) training networks on the individual tasks separately, (2) training a single network on examples from all tasks simultaneously (which permits interference among tasks), and (3) training a single network sequentially on examples from one task, then the next task, and so on cyclically. Across all training protocols, the total amount of experience for each task was held constant. Thus, for separate networks training on separate tasks, the x-axis in our plots shows the total number of environment frames summed across all tasks. For example, at three million frames, one million were on task 1, one million on task 2, and one million on task 3. This allows a direct comparison to simultaneous training, in which the same network was trained on all three tasks. We observe that in DMLab, there is very little difference between separate and simultaneous training. This indicates minimal interference between tasks. If anything, there is a small amount of constructive interference, with simultaneous training performing slightly better than separate training. We assume this is a result of (i) commonalities in image processing required across different tasks, and (ii) certain basic exploratory behaviors, e.g., moving around, that are advantageous across tasks. (By contrast, destructive interference might result from incompatible behaviors or from insufficient model capacity.) By contrast, there is a large difference between either of the above modes of training and sequential training, where performance on a task decays immediately when training switches to another task – that is, catastrophic forgetting.
Note that the performance of the sequential training appears at some points to be greater than that of separate training. This is purely because in sequential training, training proceeds exclusively on a single task, then exclusively on another task. For example, the first task quickly increases in performance since the network is effectively seeing three times as much data on that task as the networks training on separate or simultaneous tasks. 4.2 CLEAR We here demonstrate the efficacy of CLEAR for diminishing catastrophic forgetting (Figure 2). We apply CLEAR to the cyclically repeating sequence of DMLab tasks used in the preceding experiment. Our method effectively eliminates forgetting on all three tasks, while preserving overall training performance (see “Sequential” training in Figure 1 for reference). When the task switches, there is little, if any, dropoff in performance when using CLEAR, and the network picks up immediately where it left off once a task returns later in training. Without behavioral cloning, the mixture of new experience and replay still reduces catastrophic forgetting, though the effect is reduced. 4.3 BALANCE OF ON- AND OFF-POLICY LEARNING In this experiment (Figure 3), we consider the ratio of new examples to replay examples during training. Using 100% new examples is simply standard training, which as we have seen is subject to dramatic catastrophic forgetting. At 75-25 new-replay, there is already significant resistance to forgetting. At the opposite extreme, 100% replay examples is extremely resistant to catastrophic forgetting, but at the expense of a (slight) decrease in performance attained. We believe that 50- 50 new-replay represents a good tradeoff, combining significantly reduced catastrophic forgetting with no appreciable decrease in performance attained. Unless otherwise stated, our experiments on CLEAR will use a 50-50 split of new and replay data in training. It is notable that it is possible to train purely on replay examples, since the network has essentially no on-policy learning. In fact, the figure shows that with 100% replay, performance on each task increases throughout, even when on-policy learning is being applied to a different task. Just as Figure 2 shows the importance of behavioral cloning for maintaining past performance on a task, so this experiment shows that off-policy learning can actually increase performance from replay alone. Both ingredients are necessary for the success of CLEAR. 4.4 LIMITED-SIZE BUFFERS In some cases, it may be impractical to store all past experiences in the replay buffer. We therefore test the efficacy of buffers that have capacity for only a relatively small number of experiences (Figure 4). Once the buffer is full, we use reservoir sampling to decide when to replace elements of the buffer with new experiences (Isele & Cosgun, 2018) (see details in Appendix A). Thus, at each point in time, the buffer contains a (fixed size) sample uniformly at random of all past experiences. We consider a sequence of tasks with 900 million environmental frames, comparing a large buffer of capacity 450 million to two small buffers of capacity 5 and 50 million. We find that all buffers perform well and conclude that it is possible to learn and reduce catastrophic forgetting even with a replay buffer that is significantly smaller than the total number of experiences. Decreasing the buffer size to 5 million results in a slight decrease in robustness to catastrophic forgetting. 
This may be due to over-fitting to the limited examples present in the buffer, on which the learner trains disproportionately often. 4.5 LEARNING A NEW TASK QUICKLY It is a reasonable worry that relying on a replay buffer could cause new tasks to be learned more slowly as the new task data will make up a smaller and smaller portion of the replay buffer as the buffer gets larger. In this experiment (Figure 5), we find that this is not a problem for CLEAR, relying as it does on a mixture of off- and on-policy learning. Specifically, we find the performance attained on a task is largely independent of the amount of data stored in the buffer and on the identities of the preceding tasks. We consider a cyclically repeating sequence of three DMLab tasks. At different points in the sequence, we insert a fourth DMLab task as a “probe”. We find that the performance attained on the probe task is independent of the point at which it is introduced within the training sequence. This is true both for normal training and for CLEAR. Notably, CLEAR succeeds in greatly reducing catastrophic forgetting for all tasks, and the effect on the probe task does not diminish as the probe task is introduced later on in the training sequence. See also Appendix B for an experiment demonstrating that pure off-policy learning performs quite differently in this setting. 4.6 COMPARISON TO P&C AND EWC Finally, we compare our method to Progress & Compress (P&C) (Schwarz et al., 2018) and Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), state-of-the-art methods for reducing catastrophic forgetting that, unlike replay, assume that the boundaries between different tasks are known (Figure 6). We use exactly the same sequence of Atari tasks as the authors of P&C (Schwarz et al., 2018), with the same time spent on each task. Likewise, the network and hyperparameters we use are designed to match exactly those used in Schwarz et al. (2018). This is simplified by the authors of P&C also using a training paradigm based on that in Espeholt et al. (2018). In this case, we use CLEAR with a 75-25 balance of new-replay experience. We find that we obtain comparable performance to P&C and better performance than EWC, despite CLEAR being significantly simpler and agnostic to the boundaries between tasks. On tasks krull, hero, and ms pacman, we obtain significantly higher performance than P&C (as well as EWC), while on beam rider and star gunner, P&C obtains higher performance. It is worth noting that though the on-policy model (baseline) experiences significant catastrophic forgetting, it also rapidly re-acquires its previous performance after re-exposure to the task; this allows baseline to be cumulatively better than EWC on some tasks (as is noted in the original paper Kirkpatrick et al. (2017)). An alternative plot of this experiment, showing cumulative performance on each task, is presented in Appendix C. 5 DISCUSSION Some version of replay is believed to be present in biological brains. We do not believe that our implementation is reflective of neurobiology, though there are potential connections; hippocampal replay has been proposed as a systems-level mechanism to reduce catastrophic forgetting and improve generalization as in the theory of complementary learning systems (McClelland, 1998). This contrasts to some degree with synapse-level consolidation, which is also believed to be present in biology (Benna & Fusi, 2016), but is more like continual learning methods that protect parameters. 
Indeed, algorithms for continual learning may live on a Pareto frontier: different methods may have different regimes of applicability. In cases for which storing a large memory buffer is truly prohibitive, methods that protect inferred parameters, such as Progress & Compress, may be more suitable than replay methods. When task identities are available or boundaries between tasks are very clear, leveraging this information may reduce memory or computational demands or be useful to alert the agent to engage in rapid learning. Further, there exist training scenarios that are adversarial either to our method or to any method that prevents forgetting. For example, if the action space of a task were changed during training, fitting to the old policy’s action distribution, whether through behavioral cloning, off-policy learning, weight protection, or any of a number of other strategies for preventing catastrophic forgetting, could have a deleterious effect on future performance. For such cases, we may need to develop algorithms that selectively protect skills as well as forget them. We have explored CLEAR in a range of continual learning scenarios; we hope that some of the experimental protocols, such as probing with a novel task at varied positions in a training sequence, may inspire other research. Moving forward, we anticipate many algorithmic innovations that build on the ideas set forward here. For example, weight-consolidation techniques such as Progress & Compress are quite orthogonal to our approach and could be married with it for further performance gains. Moreover, while the V-Trace algorithm we use is effective at off-policy correction for small shifts between the present and past policy distributions, it is possible that off-policy approaches leveraging Q-functions, such as Retrace (Munos et al., 2016), may prove more powerful still. We have described a simple but powerful approach for preventing catastrophic forgetting in continual learning settings. CLEAR uses on-policy learning on fresh experiences to adapt rapidly to new tasks, while using off-policy learning with behavioral cloning on replay experience to maintain and modestly enhance performance on past tasks. Behavioral cloning on replay data further enhances the agent’s stability. Our method is simple, scalable, and practical; it takes advantage of the general abundance of memory and storage in modern computers and computing facilities. We believe that the broad applicability and simplicity of the approach make CLEAR a candidate “first line of defense” against catastrophic forgetting in many RL contexts. A.1 DISTRIBUTED SETUP Our training setup was based on that of Espeholt et al. (2018), with multiple actors and a single learner. The actors (which run on CPU) generate training examples, which are then sent to the learner. Weight updates made by the learner are propagated asynchronously to the actors. The workflows for each actor and for the learner are described below in more detail. Actor. A training episode (unroll) is generated and inserted into the actor’s buffer. Reservoir sampling is used (see further details below) if the buffer has reached its maximum capacity. The actor then samples another unroll from the buffer. The new unroll and replay unroll are both fed into a queue of examples that are read by the learner. The actor waits until its last example in the queue has been read before creating another. Learner. Each element of a batch is a pair (new unroll, replay unroll) from the queue provided by actors. 
Thus, the number of new unrolls and the number of replay unrolls both equal the entire batch size. Depending on the buffer utilization hyperparameter (see Figure 3), the learner uses a balance of new and replay examples, taking either the new unroll or the replay unroll from each pair. Thus, no actor contributes more than a single example to the batch (reducing the variance of batches). A.2 NETWORK For our DMLab experiments, we used the same network as in the DMLab experiments of Espeholt et al. (2018). We selected the shallower of the models considered there (a network based on Mnih et al. (2015)), omitting the additional LSTM module used for processing textual input since none of the tasks we considered included such input. For Atari, we used the same network in Progress & Compress (Schwarz et al., 2018) (which is also based on Espeholt et al. (2018)), also copying all hyperparameters. A.3 BUFFERS Our replay buffer stores all information necessary for the V-Trace algorithm, namely the input presented by the environment, the output logits of the network, the value function output by the network, the action taken, and the reward obtained. Leveraging the distributed setup, the buffer is split among all actors equally, so that, for example, if the total buffer size were one million across a hundred actors, then each actor would have buffer capacity of ten thousand. All buffer sizes are measured in environment frames (not in numbers of unrolls), in keeping with the x-axis of our training plots. For baseline experiments, no buffer was used, while all other parameters and the network remained constant. Unless otherwise specified, the replay buffer was capped at half the number of environment frames on which the network is trained. This is by design – to show that even past the buffer capacity, replay continues to prevent catastrophic forgetting. When the buffer fills up, then new unrolls are added by reservoir sampling, so that the buffer at any given point contains a uniformly random sample of all unrolls up until the present time. Reservoir sampling is implemented as in Isele & Cosgun (2018) by having each unroll associated with a random number between 0 and 1. A threshold is initialized to 0 and rises with time so that the number of unrolls above the threshold is fixed at the capacity of the buffer. Each unroll is either stored or abandoned in its entirety; no unroll is partially stored, as this would preclude training. A.4 TRAINING Training was conducted using V-Trace, with hyperparameters on DMLab/Atari tasks set as in Espeholt et al. (2018). Behavioral cloning loss functions Lpolicy-cloning and Lvalue-cloning were added in some experiments with weights of 0.01 and 0.005, respectively. The established loss functions Lpolicy-gradient, Lvalue, and Lentropy were applied with weights of 1, 0.5, and ≈0.005, in keeping with Espeholt et al. (2018). No significant effort was made to optimize fully the hyperparameters for CLEAR. A.5 EVALUATION We evaluate each network during training on all tasks, not simply that task on which it is currently being trained. Evaluation is performed by pools of testing actors, with a separate pool for each task in question. Each pool of testing actors asynchronously updates its weights to match those of the learner, similarly to the standard (training) actors used in our distributed learning setup. 
The key differences are that each testing actor (i) has no replay buffer, (ii) does not feed examples to the learner for training, (iii) runs on its designated task regardless of whether this task is the one currently in use by training actors. A.6 EXPERIMENTS In many of our experiments, we consider tasks that change after a specified number of learning episodes. The total number of episodes is monitored by the learner, and all actors switch between tasks simultaneously at the designated point, henceforward feeding examples to the learner based on experiences on the new task (as well as replay examples). Each experiment was run independently three times; figures plot the mean performance across runs, with error bars showing the standard deviation. B PROBE TASK WITH 100% REPLAY Our goal in this experiment was to investigate more thoroughly than Figure 3 to what extent the mixture of on- and off-policy learning is necessary, instead of pure off-policy learning, in learning new tasks swiftly. We rerun our “probe task” experiments (Section 4.5), where the DMLab task natlab varying map randomized is presented at different positions in a cyclically repeating sequence of other DMLab tasks. In this case, however, we use CLEAR with 100% (off-policy) replay experience. We observe that, unlike in the original experiment (Figure 5), the performance obtained on the probe task natlab varying map randomized deteriorates markedly as it appears later in the sequence of tasks. For later positions in the sequence, the probe task comprises a smaller percentage of replay experience, thereby impeding purely off-policy learning. This result underlines why CLEAR uses new experience, as well as replay, to allow rapid learning of new tasks. C FIGURES REPLOTTED ACCORDING TO CUMULATIVE SUM In this section, we replot the results of our main experiments, so that the y-axis shows the mean cumulative reward obtained on each task during training; that is, the reward shown for time t is the average (1/t) ∑ s<t rs. This makes it easier to compare performance between models, though it smoothes out the individual periods of catastrophic forgetting. We also include tables comparing the values of the final cumulative rewards at the end of training.
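The rising-threshold reservoir sampling described in Appendix A.3 admits a compact single-process implementation. The sketch below uses illustrative class and method names, counts capacity in unrolls rather than environment frames, and ignores the sharding of the buffer across actors; it is an assumption-laden sketch rather than the paper's code.

import heapq
import random

class ReservoirUnrollBuffer:
    # Each unroll is tagged with a random key in [0, 1]; only the `capacity` unrolls with the
    # largest keys are retained, so the buffer holds a uniform random sample of all unrolls
    # seen so far. The smallest retained key (the heap root) plays the role of the rising threshold.
    def __init__(self, capacity):
        self.capacity = capacity
        self._count = 0   # tie-breaker so heap comparisons never touch the unrolls themselves
        self._heap = []   # min-heap of (key, count, unroll)

    def add(self, unroll):
        self._count += 1
        item = (random.random(), self._count, unroll)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif item[0] > self._heap[0][0]:          # key above the current threshold
            heapq.heapreplace(self._heap, item)   # evict the minimum-key unroll
        # otherwise the new unroll is discarded in its entirety (never partially stored)

    def sample(self):
        return random.choice(self._heap)[2]       # uniform over the stored unrolls

An actor would call add(unroll) after generating each new unroll and sample() to obtain the replay unroll it enqueues for the learner alongside the new one.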
1. What is the focus of the paper regarding continual learning, and how does it alleviate catastrophic forgetting? 2. What are the strengths of the proposed method, particularly in its application in reinforcement learning? 3. Do you have any concerns or questions about the experimental design, such as training tasks cyclically and not learning each task just once? 4. How does the proposed approach utilize experience replay buffers for past events, and what is the significance of this concept? 5. Can you provide more details or explanations regarding the natlab_varying_map_randomize (probe task) performance in Figure 5? 6. How does the reviewer assess the memory requirement and runtime of the proposed method, and are there any comparisons provided? 7. Are there any additional experiments or verifications that could support the independence of the probe task across other tasks? 8. Would including more quantitative results enhance the clarity of the paper's findings?
Review
Review The paper proposes a novel approach to alleviating catastrophic forgetting in continual learning, which is a kind of mixture of on- and off-policy learning. The core concept of the method is to use an experience replay buffer over all past events together with new experience. They mainly study their method in the setting of reinforcement learning. In the experiments, they show that the model successfully mitigates catastrophic forgetting with this behavioral cloning, and that its performance is comparable to recent continual learning approaches. The paper is easy to follow, and the methodology is quite intuitive and straightforward. In this paper, I have several questions. Q1. I wonder why all tasks are trained cyclically in sequence. Was there any attempt to learn each task just once and observe catastrophic forgetting when the model must retain the learned knowledge for a long time without training on that task again, as is done in most visual-domain experiments in other continual learning research? Q2. In Figure 5, I wonder why natlab_varying_map_randomized (the probe task) can perform well even before it has been learned. The score of the brown line reaches nearly 60–70% of its final (post-training) score while the first task is being trained. Is this because the tasks are deeply correlated, or is it just a common property of the probe task? Q3. Using a reservoir (buffer) to prevent catastrophic forgetting is natural and reasonable. Is there any quantitative comparison in terms of memory requirements and runtime? I feel that 5 or 50 million experiences per task are quite a lot to store and manage. Additionally, in the experiment of Figure 5, I think it would be much clearer with a verification that the probe task is semantically independent of (has no interference with) all the other tasks. Also, it is quite hard to compare the performance of the models from plots alone. I expect it would be much better to show some quantitative results (as numbers).
ICLR
Title Experience replay for continual learning Abstract Continual learning is the problem of learning new tasks or knowledge while protecting old knowledge and ideally generalizing from old experience to learn new tasks faster. Neural networks trained by stochastic gradient descent often degrade on old tasks when trained successively on new tasks with different data distributions. This phenomenon, referred to as catastrophic forgetting, is considered a major hurdle to learning with non-stationary data or sequences of new tasks, and prevents networks from continually accumulating knowledge and skills. We examine this issue in the context of reinforcement learning, in a setting where an agent is exposed to tasks in a sequence. Unlike most other work, we do not provide an explicit indication to the model of task boundaries, which is the most general circumstance for a learning agent exposed to continuous experience. While various methods to counteract catastrophic forgetting have recently been proposed, we explore a straightforward, general, and seemingly overlooked solution – that of using experience replay buffers for all past events – with a mixture of onand off-policy learning, leveraging behavioral cloning. We show that this strategy can still learn new tasks quickly yet can substantially reduce catastrophic forgetting in both Atari and DMLab domains, even matching the performance of methods that require task identities. When buffer storage is constrained, we confirm that a simple mechanism for randomly discarding data allows a limited size buffer to perform almost as well as an unbounded one. 1 INTRODUCTION Modern day reinforcement learning (RL) has benefited substantially from a massive influx of computational resources. In some instances, the number of data points to feed into RL algorithms has kept in step with computational feasibility. For example, in simulation environments or in self-play RL, it is possible to generate fresh data on the fly. In such settings, the continual learning problem (Ring, 1997) is often ignored because new experiences can be collected on demand, and the start states of the simulation can be controlled. When training on multiple tasks, it is possible to train on all environments simultaneously within the same data batch. As RL is increasingly applied to problems in industry or other real-world settings, however, it is necessary to consider cases, such as robotics, where gathering new experience is expensive or difficult. In such examples, simultaneous training may be infeasible. Instead, an agent must be able to learn from only one task at a time. The time spent on different tasks and the sequence in which those tasks occur are not under the control of the agent. The boundaries between tasks, in fact, will often be unknown – or tasks will deform continuously and not have definite boundaries at all. Such a paradigm for training eliminates the possibility of simultaneously acting upon and learning from several tasks, and leads to the danger of catastrophic forgetting, wherein an agent forgets what it has learned previously when it encounters a new situation. Here, we consider the setting of reinforcement learning where compute and memory resources are large, but the environment is not stationary: this may arise because an RL agent is encountering a task curriculum or sequence of unrelated tasks, engaged in a budgeted physical interaction within a robot, or learning from unstructured interaction with humans. 
In this setting, the problem of continual learning rears its head: the distribution over experiences is not controlled to facilitate the agent’s maintenance of previously acquired ability. An ideal continual learning system should meet three requirements. First, it should retain previously learned capacities. When a previously encountered task or situation is encountered, performance should immediately be good – ideally as good as it was historically. Second, maintenance of old skills or knowledge should not inhibit further rapid acquisition of a new skill or knowledge. These two simultaneous constraints – maintaining the old while still adapting to the new – represent the challenge known as the stability-plasticity dilemma Grossberg (1982). Third, where possible, a continual learning system should learn new skills that are related to old ones faster than it would have de novo, a property known as constructive interference or positive transfer. We here demonstrate the surprising power of a simple approach: Continual Learning with Experience And Replay (CLEAR). We show that training a network on a mixture of novel experience on-policy and replay experience off-policy allows for both maintenance of performance on earlier tasks and fast adaptation to new tasks. A significant further boost in performance and reduction in catastrophic forgetting is obtained by enforcing behavioral cloning between the current policy and its past self. While memory is rarely severely limited in modern RL, we show that small replay buffers filled with uniform samples from past experiences can be almost as effective as buffers of unbounded size. When comparing CLEAR against state-of-the-art approaches for reducing catastrophic forgetting, we obtain better or comparable results, despite the relative simplicity of our approach; yet, crucially, CLEAR requires no information about the identity of tasks or boundaries between them. 2 RELATED WORK The problem of catastrophic forgetting in neural networks has long been recognized (Grossberg, 1982), and it is known that rehearsing past data can be a satisfactory antidote for some purposes (McClelland, 1998; French, 1999). Consequently, in the supervised setting that is the most common paradigm in machine learning, catastrophic forgetting has been accorded less attention than in cognitive science or neuroscience, since a fixed dataset can be reordered and replayed as necessary to ensure high performance on all samples. In recent years, however, there has been renewed interest in overcoming catastrophic forgetting in RL contexts and in supervised learning from streaming data (Parisi et al., 2018). Current strategies for mitigating catastrophic forgetting have primarily focused on schemes for protecting the parameters inferred in one task while training on another. For example, in Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), weights important for past tasks are constrained to change more slowly while learning new tasks. The Progressive Networks approach (Rusu et al., 2016) freezes subnetworks trained on individual tasks, and Progress & Compress (Schwarz et al., 2018) uses EWC to consolidate the network after each task has been learned. Kaplanis et al. (2018) treat individual synaptic weights as dynamical systems with latent dimensions / states that protect information. Outside of RL, Zenke et al. 
(2017) develop a method similar to EWC that maintains estimates of the importance of weights for past tasks, Li & Hoiem (2017) leverage a mixture of task-specific and shared parameters, and Milan et al. (2016) develop a rigorous Bayesian approach for estimating unknown task boundaries. Notably, all these methods assume that task identities or boundaries are known, with the exception of Milan et al. (2016), for which the approach is likely not scalable to highly complex tasks. Rehearsing old data via experience replay buffers is a common technique in RL. However, their introduction has primarily been driven by the goal of data-efficient learning on single tasks (Lin, 1992; Mnih et al., 2015; Gu et al., 2017). Research in this vein has included prioritized replay for maximizing the impact of rare experiences (Schaul et al., 2016), learning from human demonstration data seeded into a buffer (Hester et al., 2017), and methods for approximating replay buffers with generative models (Shin et al., 2017). A noteworthy use of experience replay buffers to protect against catastrophic forgetting was demonstrated in Isele & Cosgun (2018) on toy tasks, with a focus on how buffers can be made smaller. Previous works (Gu et al., 2017; O’Donoghue et al., 2016; Wang et al., 2016) have explored mixing on- and off-policy updates in RL, though these were focused on speed and stability in individual tasks and did not examine continual learning. Here, in CLEAR, we demonstrate that a mixture of replay data and fresh experience protects against catastrophic forgetting while also permitting fast learning, and performs better than either pure on-policy learning or pure off-policy learning from replay. We provide a thorough investigation and robust algorithm in CLEAR, conducting a wide variety of tests on both limited-size and unbounded buffers with complex RL tasks using state-of-the-art methods, and improve the stability of simple replay with the addition of behavioral cloning. 3 THE CLEAR METHOD CLEAR uses actor-critic training on a mixture of new and replayed experiences. In the case of replay experiences, two additional loss terms are added to induce behavioral cloning between the network and its past self. The motivation for behavioral cloning is to prevent network output on replayed tasks from drifting while learning new tasks. We penalize (1) the KL divergence between the historical policy distribution and the present policy distribution, and (2) the L2 norm of the difference between the historical and present value functions. Formally, this corresponds to adding the following loss functions, defined with respect to network parameters θ: L_{\text{policy-cloning}} := \sum_a \mu(a|h_s) \log \frac{\mu(a|h_s)}{\pi_\theta(a|h_s)}, \qquad L_{\text{value-cloning}} := \lVert V_\theta(h_s) - V_{\text{replay}}(h_s) \rVert_2^2, where π_θ denotes the (current) policy of the network over actions a, µ the policy generating the observed experience, and h_s the hidden state of the network at time s. Note that computing KL[µ||π_θ] instead of KL[π_θ||µ] ensures that π_θ(a|h_s) is nonzero wherever the historical policy is as well. We apply CLEAR in a distributed training context based on the Importance Weighted Actor-Learner Architecture (Espeholt et al., 2018). A single learning network is fed experiences (both novel and replay) by a number of acting networks, for which the weights are asynchronously updated to match those of the learner. The network architecture and hyperparameters are chosen as in Espeholt et al. (2018). Training proceeds according to V-Trace.
Namely, define the V-Trace target v_s by: v_s := V(h_s) + \sum_{t=s}^{s+n-1} \gamma^{t-s} \left( \prod_{i=s}^{t-1} c_i \right) \delta_t V, where \delta_t V := \rho_t \left( r_t + \gamma V(h_{t+1}) - V(h_t) \right), c_i := \min\left( \bar{c}, \frac{\pi_\theta(a_i|h_i)}{\mu(a_i|h_i)} \right), and \rho_t := \min\left( \bar{\rho}, \frac{\pi_\theta(a_t|h_t)}{\mu(a_t|h_t)} \right), for constants \bar{c} and \bar{\rho}. Then, the value function update is given by the L2 loss: L_{\text{value}} := \left( V_\theta(h_s) - v_s \right)^2. The policy gradient loss is: L_{\text{policy-gradient}} := -\rho_s \log \pi_\theta(a_s|h_s) \left( r_s + \gamma v_{s+1} - V_\theta(h_s) \right). We also use an entropy loss: L_{\text{entropy}} := \sum_a \pi_\theta(a|h_s) \log \pi_\theta(a|h_s). The loss functions L_{\text{value}}, L_{\text{policy-gradient}}, and L_{\text{entropy}} are applied both for new and replay experiences. In addition, we add L_{\text{policy-cloning}} and L_{\text{value-cloning}} for replay experiences only. In general, our experiments use a 50-50 mixture of novel and replay experiences, though performance does not appear to be very sensitive to this ratio. Further implementation details are given in Appendix A. 4 RESULTS 4.1 CATASTROPHIC FORGETTING VS. INTERFERENCE Our first experiment (Figure 1) was designed to distinguish between two distinct concepts that are sometimes conflated, interference and catastrophic forgetting, and to emphasize the outsized role of the latter as compared to the former. Interference occurs when two or more tasks are incompatible (destructive interference) or mutually helpful (constructive interference) within the same model. Catastrophic forgetting occurs when a task’s performance goes down not as a result of incompatibility with another task but as a result of the second task overwriting it within the model. As we aim to illustrate, the two are independent phenomena, and while interference may happen, forgetting is ubiquitous. We considered a set of three distinct tasks within the DMLab set of environments (Beattie et al., 2016), and compared three training paradigms on which a network may be trained to perform these three tasks: (1) training networks on the individual tasks separately, (2) training a single network on examples from all tasks simultaneously (which permits interference among tasks), and (3) training a single network sequentially on examples from one task, then the next task, and so on cyclically. Across all training protocols, the total amount of experience for each task was held constant. Thus, for separate networks training on separate tasks, the x-axis in our plots shows the total number of environment frames summed across all tasks. For example, at three million frames, one million were on task 1, one million on task 2, and one million on task 3. This allows a direct comparison to simultaneous training, in which the same network was trained on all three tasks. We observe that in DMLab, there is very little difference between separate and simultaneous training. This indicates minimal interference between tasks. If anything, there is a small amount of constructive interference, with simultaneous training performing slightly better than separate training. We assume this is a result of (i) commonalities in image processing required across different tasks, and (ii) certain basic exploratory behaviors, e.g., moving around, that are advantageous across tasks. (By contrast, destructive interference might result from incompatible behaviors or from insufficient model capacity.) By contrast, there is a large difference between either of the above modes of training and sequential training, where performance on a task decays immediately when training switches to another task – that is, catastrophic forgetting.
Note that the performance of the sequential training appears at some points to be greater than that of separate training. This is purely because in sequential training, training proceeds exclusively on a single task, then exclusively on another task. For example, the first task quickly increases in performance since the network is effectively seeing three times as much data on that task as the networks training on separate or simultaneous tasks. 4.2 CLEAR We here demonstrate the efficacy of CLEAR for diminishing catastrophic forgetting (Figure 2). We apply CLEAR to the cyclically repeating sequence of DMLab tasks used in the preceding experiment. Our method effectively eliminates forgetting on all three tasks, while preserving overall training performance (see “Sequential” training in Figure 1 for reference). When the task switches, there is little, if any, dropoff in performance when using CLEAR, and the network picks up immediately where it left off once a task returns later in training. Without behavioral cloning, the mixture of new experience and replay still reduces catastrophic forgetting, though the effect is reduced. 4.3 BALANCE OF ON- AND OFF-POLICY LEARNING In this experiment (Figure 3), we consider the ratio of new examples to replay examples during training. Using 100% new examples is simply standard training, which as we have seen is subject to dramatic catastrophic forgetting. At 75-25 new-replay, there is already significant resistance to forgetting. At the opposite extreme, 100% replay examples is extremely resistant to catastrophic forgetting, but at the expense of a (slight) decrease in performance attained. We believe that 50- 50 new-replay represents a good tradeoff, combining significantly reduced catastrophic forgetting with no appreciable decrease in performance attained. Unless otherwise stated, our experiments on CLEAR will use a 50-50 split of new and replay data in training. It is notable that it is possible to train purely on replay examples, since the network has essentially no on-policy learning. In fact, the figure shows that with 100% replay, performance on each task increases throughout, even when on-policy learning is being applied to a different task. Just as Figure 2 shows the importance of behavioral cloning for maintaining past performance on a task, so this experiment shows that off-policy learning can actually increase performance from replay alone. Both ingredients are necessary for the success of CLEAR. 4.4 LIMITED-SIZE BUFFERS In some cases, it may be impractical to store all past experiences in the replay buffer. We therefore test the efficacy of buffers that have capacity for only a relatively small number of experiences (Figure 4). Once the buffer is full, we use reservoir sampling to decide when to replace elements of the buffer with new experiences (Isele & Cosgun, 2018) (see details in Appendix A). Thus, at each point in time, the buffer contains a (fixed size) sample uniformly at random of all past experiences. We consider a sequence of tasks with 900 million environmental frames, comparing a large buffer of capacity 450 million to two small buffers of capacity 5 and 50 million. We find that all buffers perform well and conclude that it is possible to learn and reduce catastrophic forgetting even with a replay buffer that is significantly smaller than the total number of experiences. Decreasing the buffer size to 5 million results in a slight decrease in robustness to catastrophic forgetting. 
This may be due to over-fitting to the limited examples present in the buffer, on which the learner trains disproportionately often. 4.5 LEARNING A NEW TASK QUICKLY It is a reasonable worry that relying on a replay buffer could cause new tasks to be learned more slowly as the new task data will make up a smaller and smaller portion of the replay buffer as the buffer gets larger. In this experiment (Figure 5), we find that this is not a problem for CLEAR, relying as it does on a mixture of off- and on-policy learning. Specifically, we find the performance attained on a task is largely independent of the amount of data stored in the buffer and on the identities of the preceding tasks. We consider a cyclically repeating sequence of three DMLab tasks. At different points in the sequence, we insert a fourth DMLab task as a “probe”. We find that the performance attained on the probe task is independent of the point at which it is introduced within the training sequence. This is true both for normal training and for CLEAR. Notably, CLEAR succeeds in greatly reducing catastrophic forgetting for all tasks, and the effect on the probe task does not diminish as the probe task is introduced later on in the training sequence. See also Appendix B for an experiment demonstrating that pure off-policy learning performs quite differently in this setting. 4.6 COMPARISON TO P&C AND EWC Finally, we compare our method to Progress & Compress (P&C) (Schwarz et al., 2018) and Elastic Weight Consolidation (EWC) (Kirkpatrick et al., 2017), state-of-the-art methods for reducing catastrophic forgetting that, unlike replay, assume that the boundaries between different tasks are known (Figure 6). We use exactly the same sequence of Atari tasks as the authors of P&C (Schwarz et al., 2018), with the same time spent on each task. Likewise, the network and hyperparameters we use are designed to match exactly those used in Schwarz et al. (2018). This is simplified by the authors of P&C also using a training paradigm based on that in Espeholt et al. (2018). In this case, we use CLEAR with a 75-25 balance of new-replay experience. We find that we obtain comparable performance to P&C and better performance than EWC, despite CLEAR being significantly simpler and agnostic to the boundaries between tasks. On tasks krull, hero, and ms pacman, we obtain significantly higher performance than P&C (as well as EWC), while on beam rider and star gunner, P&C obtains higher performance. It is worth noting that though the on-policy model (baseline) experiences significant catastrophic forgetting, it also rapidly re-acquires its previous performance after re-exposure to the task; this allows baseline to be cumulatively better than EWC on some tasks (as is noted in the original paper Kirkpatrick et al. (2017)). An alternative plot of this experiment, showing cumulative performance on each task, is presented in Appendix C. 5 DISCUSSION Some version of replay is believed to be present in biological brains. We do not believe that our implementation is reflective of neurobiology, though there are potential connections; hippocampal replay has been proposed as a systems-level mechanism to reduce catastrophic forgetting and improve generalization as in the theory of complementary learning systems (McClelland, 1998). This contrasts to some degree with synapse-level consolidation, which is also believed to be present in biology (Benna & Fusi, 2016), but is more like continual learning methods that protect parameters. 
Indeed, algorithms for continual learning may live on a Pareto frontier: different methods may have different regimes of applicability. In cases for which storing a large memory buffer is truly prohibitive, methods that protect inferred parameters, such as Progress & Compress, may be more suitable than replay methods. When task identities are available or boundaries between tasks are very clear, leveraging this information may reduce memory or computational demands or be useful to alert the agent to engage in rapid learning. Further, there exist training scenarios that are adversarial either to our method or to any method that prevents forgetting. For example, if the action space of a task were changed during training, fitting to the old policy’s action distribution, whether through behavioral cloning, off-policy learning, weight protection, or any of a number of other strategies for preventing catastrophic forgetting, could have a deleterious effect on future performance. For such cases, we may need to develop algorithms that selectively protect skills as well as forget them. We have explored CLEAR in a range of continual learning scenarios; we hope that some of the experimental protocols, such as probing with a novel task at varied positions in a training sequence, may inspire other research. Moving forward, we anticipate many algorithmic innovations that build on the ideas set forward here. For example, weight-consolidation techniques such as Progress & Compress are quite orthogonal to our approach and could be married with it for further performance gains. Moreover, while the V-Trace algorithm we use is effective at off-policy correction for small shifts between the present and past policy distributions, it is possible that off-policy approaches leveraging Q-functions, such as Retrace (Munos et al., 2016), may prove more powerful still. We have described a simple but powerful approach for preventing catastrophic forgetting in continual learning settings. CLEAR uses on-policy learning on fresh experiences to adapt rapidly to new tasks, while using off-policy learning with behavioral cloning on replay experience to maintain and modestly enhance performance on past tasks. Behavioral cloning on replay data further enhances the agent’s stability. Our method is simple, scalable, and practical; it takes advantage of the general abundance of memory and storage in modern computers and computing facilities. We believe that the broad applicability and simplicity of the approach make CLEAR a candidate “first line of defense” against catastrophic forgetting in many RL contexts. A.1 DISTRIBUTED SETUP Our training setup was based on that of Espeholt et al. (2018), with multiple actors and a single learner. The actors (which run on CPU) generate training examples, which are then sent to the learner. Weight updates made by the learner are propagated asynchronously to the actors. The workflows for each actor and for the learner are described below in more detail. Actor. A training episode (unroll) is generated and inserted into the actor’s buffer. Reservoir sampling is used (see further details below) if the buffer has reached its maximum capacity. The actor then samples another unroll from the buffer. The new unroll and replay unroll are both fed into a queue of examples that are read by the learner. The actor waits until its last example in the queue has been read before creating another. Learner. Each element of a batch is a pair (new unroll, replay unroll) from the queue provided by actors. 
Thus, the number of new unrolls and the number of replay unrolls both equal the entire batch size. Depending on the buffer utilization hyperparameter (see Figure 3), the learner uses a balance of new and replay examples, taking either the new unroll or the replay unroll from each pair. Thus, no actor contributes more than a single example to the batch (reducing the variance of batches). A.2 NETWORK For our DMLab experiments, we used the same network as in the DMLab experiments of Espeholt et al. (2018). We selected the shallower of the models considered there (a network based on Mnih et al. (2015)), omitting the additional LSTM module used for processing textual input since none of the tasks we considered included such input. For Atari, we used the same network in Progress & Compress (Schwarz et al., 2018) (which is also based on Espeholt et al. (2018)), also copying all hyperparameters. A.3 BUFFERS Our replay buffer stores all information necessary for the V-Trace algorithm, namely the input presented by the environment, the output logits of the network, the value function output by the network, the action taken, and the reward obtained. Leveraging the distributed setup, the buffer is split among all actors equally, so that, for example, if the total buffer size were one million across a hundred actors, then each actor would have buffer capacity of ten thousand. All buffer sizes are measured in environment frames (not in numbers of unrolls), in keeping with the x-axis of our training plots. For baseline experiments, no buffer was used, while all other parameters and the network remained constant. Unless otherwise specified, the replay buffer was capped at half the number of environment frames on which the network is trained. This is by design – to show that even past the buffer capacity, replay continues to prevent catastrophic forgetting. When the buffer fills up, then new unrolls are added by reservoir sampling, so that the buffer at any given point contains a uniformly random sample of all unrolls up until the present time. Reservoir sampling is implemented as in Isele & Cosgun (2018) by having each unroll associated with a random number between 0 and 1. A threshold is initialized to 0 and rises with time so that the number of unrolls above the threshold is fixed at the capacity of the buffer. Each unroll is either stored or abandoned in its entirety; no unroll is partially stored, as this would preclude training. A.4 TRAINING Training was conducted using V-Trace, with hyperparameters on DMLab/Atari tasks set as in Espeholt et al. (2018). Behavioral cloning loss functions Lpolicy-cloning and Lvalue-cloning were added in some experiments with weights of 0.01 and 0.005, respectively. The established loss functions Lpolicy-gradient, Lvalue, and Lentropy were applied with weights of 1, 0.5, and ≈0.005, in keeping with Espeholt et al. (2018). No significant effort was made to optimize fully the hyperparameters for CLEAR. A.5 EVALUATION We evaluate each network during training on all tasks, not simply that task on which it is currently being trained. Evaluation is performed by pools of testing actors, with a separate pool for each task in question. Each pool of testing actors asynchronously updates its weights to match those of the learner, similarly to the standard (training) actors used in our distributed learning setup. 
The key differences are that each testing actor (i) has no replay buffer, (ii) does not feed examples to the learner for training, (iii) runs on its designated task regardless of whether this task is the one currently in use by training actors. A.6 EXPERIMENTS In many of our experiments, we consider tasks that change after a specified number of learning episodes. The total number of episodes is monitored by the learner, and all actors switch between tasks simultaneously at the designated point, henceforward feeding examples to the learner based on experiences on the new task (as well as replay examples). Each experiment was run independently three times; figures plot the mean performance across runs, with error bars showing the standard deviation. B PROBE TASK WITH 100% REPLAY Our goal in this experiment was to investigate more thoroughly than Figure 3 to what extent the mixture of on- and off-policy learning is necessary, instead of pure off-policy learning, in learning new tasks swiftly. We rerun our “probe task” experiments (Section 4.5), where the DMLab task natlab varying map randomized is presented at different positions in a cyclically repeating sequence of other DMLab tasks. In this case, however, we use CLEAR with 100% (off-policy) replay experience. We observe that, unlike in the original experiment (Figure 5), the performance obtained on the probe task natlab varying map randomized deteriorates markedly as it appears later in the sequence of tasks. For later positions in the sequence, the probe task comprises a smaller percentage of replay experience, thereby impeding purely off-policy learning. This result underlines why CLEAR uses new experience, as well as replay, to allow rapid learning of new tasks. C FIGURES REPLOTTED ACCORDING TO CUMULATIVE SUM In this section, we replot the results of our main experiments, so that the y-axis shows the mean cumulative reward obtained on each task during training; that is, the reward shown for time t is the average (1/t) ∑ s<t rs. This makes it easier to compare performance between models, though it smoothes out the individual periods of catastrophic forgetting. We also include tables comparing the values of the final cumulative rewards at the end of training.
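To make the buffer mechanics of Appendix A.3 and the new/replay batch composition of Appendix A.1 concrete, the following is a minimal sketch and not the authors' released code: the names `ReservoirBuffer`, `Unroll`, and `mixed_batch` are illustrative, and keeping the unrolls with the largest random keys in a min-heap is one way to realize the rising-threshold reservoir scheme of Isele & Cosgun (2018) described above.

```python
import heapq
import random
from typing import Any, List, NamedTuple, Tuple


class Unroll(NamedTuple):
    """Everything V-Trace needs, as listed in A.3 (hypothetical container)."""
    observations: Any
    logits: Any
    values: Any
    actions: Any
    rewards: Any


class ReservoirBuffer:
    """Fixed-capacity buffer holding a uniform random sample of all unrolls seen.

    Each unroll is tagged with a random key in [0, 1]; keeping the unrolls with
    the largest keys is equivalent to the rising-threshold rule described in A.3.
    """

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._heap: List[Tuple[float, int, Unroll]] = []  # min-heap of (key, counter, unroll)
        self._counter = 0  # tie-breaker so unrolls are never compared directly

    def __len__(self) -> int:
        return len(self._heap)

    def add(self, unroll: Unroll) -> None:
        key = random.random()
        item = (key, self._counter, unroll)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif key > self._heap[0][0]:  # key exceeds the current threshold
            heapq.heapreplace(self._heap, item)
        # otherwise the new unroll is discarded in its entirety

    def sample(self) -> Unroll:
        return random.choice(self._heap)[2]


def mixed_batch(new_unrolls: List[Unroll], buffer: ReservoirBuffer,
                replay_fraction: float = 0.5) -> List[Unroll]:
    """Compose a training batch with the given new/replay split (e.g. 50-50)."""
    batch = []
    for unroll in new_unrolls:
        use_replay = random.random() < replay_fraction and len(buffer) > 0
        batch.append(buffer.sample() if use_replay else unroll)
    return batch
```

In the distributed setup of A.1, each actor would hold its own buffer shard and pair every fresh unroll with one sampled from that shard; the learner then keeps either member of each pair according to the desired new/replay ratio.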
1. What is the main contribution of the paper in the field of reinforcement learning? 2. What are the strengths of the proposed approach, particularly in addressing catastrophic forgetting? 3. What are the weaknesses of the paper, especially regarding its experimental setup and applicability to real-world scenarios? 4. How does the reviewer assess the novelty and significance of the proposed method? 5. Are there any concerns or questions regarding the theoretical analysis or implementation of the approach?
Review
Review The authors propose an approach to augment experience replay buffers with properties that can alleviate issues with catastrophic forgetting. The buffers are augmented by storing both new and historical experiences, along with the desired historical policy & value distribution. The AC learning now couples two additional losses that ensure the new policy does not drift away from the old actor distribution (via KL) and the new value does not drift away from the old critic distribution (via L2 loss). The authors provided clear experimental evidence that shows how an RL agent that does not use CLEAR will exhibit catastrophic forgetting when trained sequentially on different tasks (and that this is not due to destructive interference, as shown by the simultaneous and separate training/evaluation experiments). The authors also showed how different replay make-ups can change the result of CLEAR (and it's a matter of empirical tuning). The formulation of CLEAR is also simple while delivering interesting results. It would have been nice to see how this is used in a practical setting, as all of these are synthetic environments/tasks. The discussion of the relationship with biological mechanisms also seems unnecessary, as it is unclear whether the proposed mechanism is actually what occurs in the CLS.
ICLR
Title Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds Abstract In the present work we study classifiers’ decision boundaries via Brownian motion processes in ambient data space and associated probabilistic techniques. Intuitively, our ideas correspond to placing a heat source at the decision boundary and observing how effectively the sample points warm up. We are largely motivated by the search for a soft measure that sheds further light on the decision boundary’s geometry. En route, we bridge aspects of potential theory and geometric analysis (Maz’ya (2011); Grigor’Yan & Saloff-Coste (2002)) with active fields of ML research such as adversarial examples and generalization bounds. First, we focus on the geometric behavior of decision boundaries in the light of adversarial attack/defense mechanisms. Experimentally, we observe a certain capacitory trend over different adversarial defense strategies: decision boundaries locally become flatter as measured by isoperimetric inequalities (Ford et al. (2019)); however, our more sensitive heat-diffusion metrics extend this analysis and further reveal that some non-trivial geometry invisible to plain distance-based methods is still preserved. Intuitively, we provide evidence that the decision boundaries nevertheless retain many persistent "wiggly and fuzzy" regions on a finer scale. Second, we show how Brownian hitting probabilities translate to soft generalization bounds which are in turn connected to compression and noise stability (Arora et al. (2018)), and these bounds are significantly stronger if the decision boundary has controlled geometric features. 1 INTRODUCTION AND BACKGROUND The endeavor to understand certain geometric aspects of decision problems has lead to intense research in statistical learning. These range from the study of data manifolds, through landscapes of loss functions to the delicate analysis of a classifier’s decision boundary. In the present work we focus on the latter. So far, a wealth of studies has analyzed the geometry of decision boundaries of deep neural networks (DNN), reaching profound implications in the fields of adversarial machine learning (adversarial examples), robustness, margin analysis and generalization. Inspired by recent isoperimetric results and curvature estimates (Ford et al. (2019); Moosavi-Dezfooli et al. (2019); Fawzi et al. (2016)), we attempt to provide some new aspects of decision boundary analysis by introducing and studying a corresponding diffusion-inspired approach. In this note the guiding idea is to place a heat source at the classifier’s decision boundary and estimate its size/shape in terms of the amount of heat the boundary is able to emit within a given time (Fig. 1). The goal is to extract geometric information from the behavior of heat transmission. This technique of heat content seems well-known within capacity/potential theory and has led to a variety of results in spectral analysis relating heat diffusion and geometry, Jorgenson & Lang (2001); Grigor’Yan & Saloff-Coste (2002); Maz’ya (2011). However, working with such heat diffusion directly in terms of the corresponding differential equations is impractical. To this end, we note that, due to Feynman-Kac duality, the heat estimates are convertible to Brownian motion hitting probabilities. 
Thus we circumvent the need for solving intractable differential equations and instead are able to employ a straightforward Monte-Carlo sampling scheme in the ambient data space (Section 3). Background on defense training We apply the above analysis in the context of adversarial machine learning (Section 4) where one studies the interaction between an adversary and a ML system. One of the goals of the subject is to design attack/defense training strategies improving the robustness of a given ML model - in the present work we are interested in how adversarial/noise defense training are reflected geometrically. Many different metrics to estimate robustness have been proposed: on one hand, there is adversarial robustness (the probability that error samples lie very near a given data point x); on the other hand, there is corruption robustness (the probability of getting an error sample after perturbing a given data point x with some specified noise). In our context, heat diffusion naturally suggests a capacitory robustness metric: this metric is built upon the probability that Brownian motion started at a given data point x will hit error samples within a given time window. One can perceive this metric as a combination of adversarial and noise robustness (Brownian motion has continuous paths and specified stopping time determined by boundary impact). In this perspective, our work is aligned with studies of other robustness metrics and curvature results (cf. Fawzi et al. (2016) for a "semi-random" projection robustness and relations to curvature). We study the capacitory metric on the well-known CIFAR10 and MNIST datasets and observe that defense training techniques may either yield a certain (although not substantial) decrease (noise training) or fail to have a significant effect on continuous Brownian attacks overall. Surprisingly, in both cases the studied capacitory metric does not converge to the corresponding value as in the case of a flat decision boundary. Due to our comparison statements and curvature considerations, this means that locally around clean data points the geometry is in general flattened out but may still retain complexity and substantial areas of (small) non-vanishing curvature. In other words, from the point of view of our heat diffusion metrics, decision boundaries locally exhibit non-flat behaviour. Background on generalization estimates Finally, we observe that the collected heat/hittingprobability metrics can further be used to obtain generalization bounds where, in a nutshell, one evaluates the performance of a model on unseen data in terms of the performance over a given sampled data, the model’s expressiveness, dimension, etc. In this regard, we view decision boundary heat diffusion traits as an indicator of how noise-stable a given model is - this relates Brownian hitting bounds with recent compression-based generalization techniques in the spirit of Arora et al. (2018); Suzuki et al. (2018; 2020). More precisely, we proceed in two steps: first, we construct a "smaller" compressed model that is almost equivalent to the initial one in an appropriate heat-theoretic way; second, we obtain generalization estimates for the smaller model in terms of the decision boundary hitting probabilities (computed on the empirical dataset). Furthermore, the bounds are significantly improved under additional geometric assumptions on the decision boundary of the initial model. 
Additional related work The interplay between heat diffusion and geometry lies at the heart of many topics in geometric analysis and spectral theory (cf. Jorgenson & Lang (2001); Grigor’Yan (2001) for a far reaching overview). Some direct applications of heat diffusion techniques to zero sets of eigenfunctions are seen, for example, in Steinerberger (2014); Georgiev & Mukherjee (2018a;b). The literature on adversarial ML is vast: to name a few central works in the field, we refer to Dalvi et al. (2004); Biggio & Roli (2018); Szegedy et al. (2014). Much effort has been invested in designing and understanding strategies that will render a model robust to various attacks (e.g. Madry et al. (2018); Carlini & Wagner (2017)). In particular, the geometry of decision boundaries has been the focus of many works in the subject leading to breakthroughs in curvature estimates, boundary flatness and robustness, schemes for detecting boundary complexity, proposing adversarial attacks/defenses and diffusion based techniques towards constructing decision boundary from partially pre-labelled data (e.g. Ford et al. (2019); Fawzi et al. (2016; 2017; 2018); Dezfooli et al. (2018); Moosavi-Dezfooli et al. (2019); Karimi et al. (2019); Karimi & Tang (2020); He et al. (2018); Szlam et al. (2008)). The theory of generalization bounds has formed a classical main line of ML and statistical inference research (Vapnik (1999)). In this direction central questions address the generalization properties of heavily over-parametrized deep neural network models. According to some classical VC-dimension results such models should overfit the data and generalize poorly. Extensive research effort has been invested in developing appropriate sharper techniques to explain generalization of DNN models: on one hand there are the methods based on norm estimation whose bounds are not explicitly using the number of the network’s parameters (see Golowich et al. (2019); Neyshabur et al. (2015; 2018); Wei & Ma (2019); Bartlett et al. (2017), etc). On the other hand, recent results based on compression and VC-dimension can lead to sharper bounds (Arora et al. (2018); Suzuki et al. (2018; 2020)). 2 CONTRIBUTIONS, CONTEXT AND PAPER OUTLINE An outline of our essential contributions is given as follows: 1. We analyze decision boundary geometries in terms of novel heat diffusion and Brownian motion techniques with thorough theoretical estimates on curvature and flattening. 2. We show, both theoretically and empirically (in terms of adversarial scenarios on stateof-art DNN models), that the proposed heat diffusion metrics detect the curvature of the boundary; they complement, and in some respects are more sensitive in comparison to previous methods of boundary analysis - intuitively, our heat driven metrics are sharper on a finer scale and can detect small-scale "wiggles and pockets". As an application, we are thus able to provide evidence that adversarial defenses lead to overall flatter boundaries but, surprisingly, the heat traits do not converge to the corresponding flat-case, and hence, finer-scale non-linear characteristics (e.g. "wiggles and pockets") are persistent. 3. Moreover, the preservation of "wiggles and pockets" means that susceptibility to naive Brownian motion attacks is not significantly decreased via adversarial defense mechanisms. 4. Finally, we introduce a novel notion of compression based on heat diffusion and prove that stability of heat signature translates to compression properties and generalization capabilities. 
In terms of context, the present note is well-aligned with works such as Ford et al. (2019); Dezfooli et al. (2018); Fawzi et al. (2016; 2018). Among other aspects, these works provide substantial analysis of the interplay between geometry/curvature and adversarial robustness/defenses - in particular, we use some of the these tools (e.g. isoperimetric saturation) as benchmarks and sanity checks. However, in contrast, in our work we provide a non-equivalent technique to address decision boundary geometry for which we provide an extensive theoretical and empirical evaluation with insights on the preservation of finer-scale traits. Intuitively, previous distance-based geometric methods could be considered as a "coarser lens", whereas the present heat-diffusion tools appear to be much more sensitive. As a large-scale example, Brownian particles emanating from a point are able to distinguish between a decision boundary which is a hyperplane at distance d and a decision boundary which is a cylinder of radius d wrapping around the point. Our notion of compression is inspired by Arora et al. (2018), and establishes a connection between the Johnson-Lindenstrauss dimension reduction algorithm with diffusion techniques. Furthermore, we bridge the proposed heat-theoretic techniques with generalization bounds in the spirit of Arora et al. (2018); Suzuki et al. (2020). In particular, this shows that overall lower heat quantities at sample points imply better generalization traits. A step-wise road map of the present work is given below: • (Subsection 3.1) We start by discussing what heat diffusion is and how it is to be evaluated - here we discuss that, via Feynman-Kac duality, one can essentially work with Brownian motion hitting probabilities. • (Subsections 3.2 and 3.3) We introduce the isocapacitory saturation τ - a heat-theoretic metric that will be used to estimate boundary flatness. Moreover, here we emphasize the properties of τ such as relations to curvature (Proposition 3.1) and the novel information obtained from heat theoretic methods in comparison to previous distance-based ones. • (Subsection 3.4) We compute τ for certain geometric model cases such as hyperplanes, cones, wedges and "spiky" sets (Lemmas 3.2 and 3.3). This allows us later to evaluate how much a given geometry resembles these model cases. • (Section 4) Next, we are in a position to evaluate and compare τ for decision boundaries of DNNs. We experimentally illustrate the effect of adversarial defense mechanisms and noise robustness on τ (PGD/FGSM on MNIST and CIFAR-10). • (Section 5) We prove that heat transmission relates to generalization bounds (Propositions 5.1 and 5.2) - in particular, lower levels of heat at sample points yield sharper generalization bounds. Finally, we complete the discussion by informally stating our compression scheme. • (Appendix) Our methods leverage several tool sets extensively. For this reason our goal in the main text is to only collect and showcase the techniques and results. However, the thorough in-depth analysis is provided in the Appendix where the reader can find all relevant proofs and further background and references. 3 MOTIVATION AND MAIN IDEAS 3.1 GEOMETRY SEEN THROUGH BROWNIAN MOTION AND DIFFUSION Notation Let us consider a dataset X := {(xi, yi)}mi=1 consisting of feature points xi ∈ Rn and their corresponding labels y ∈ {1, . . . , k}. Let us suppose that a k-label classifier f : Rn → Rk labels a point x ∈ X as arg maxi∈[1,k] f(x)[i]. 
The decision boundary of f is given by N := {x ∈ Rn | f(x) has two or more equal coordinates} (cf. Fig. 2). Assuming f is sufficiently regular, one thinks of N as a collection of hypersurfaces in Rn. Further, for a given target label y we define the target (error) set E(y) as the set of points on which the classifier’s decision is different from y, i.e. E(y) := {x ∈ Rn | arg maxi∈[1,k] f(x)[i] ≠ y} (here we remark that if arg max is set-valued at x with several coordinates obtaining the maximum value, then by convention x is contained in E(y)). Clearly, if a given data sample (x0, y0) ∈ X is correctly classified by f, then x0 is outside of the error set E(y0). Finally, we note that the boundary of E(y) coincides with E(y) ∩ N and moreover, N is the union of the boundaries of E(y) for all labels y. Feynman-Kac duality and hitting probabilities As mentioned in Section 1 we wish to study a heat diffusion process where we place a heat source at the decision boundary N : formally, this is given by a heat equation with appropriate initial and boundary conditions (Appendix, Subsection A.2). Avoiding the impracticality of working with the differential equations directly, we bring forward the theorem of Feynman-Kac that relates the solution of the diffusion process to hitting probabilities of Brownian motion (Appendix, Subsection A.3). By way of notation, for an open set U ⊆ Rn, let ψU(x, t) denote the probability that a Brownian particle starting at the point x will enter U within time t. In other words, ψU(x, t) := Pω∼W [∃ t0 ∈ [0, t] | ω(t0) ∈ U], x ∈ X, (1) where ω denotes a Brownian motion defined over the interval [0, t] that follows the standard Euclidean Wiener distribution. The amount of heat that a point x receives from N within time t is comparable to the hitting probability that a Brownian particle starting at x will impact the boundary within time t (cf. Fig. 2). Provided that x is correctly classified this is equivalent to the probability of impacting the decision boundary. In general, we evaluate ψE(y)(x, t) (which we often denote by ψ(x, t) by minor abuse of notation) through direct sampling; however, in some model cases, e.g. E(y) being a half-space, a spherical shell or a conical set, ψ(x, t) has a concise closed form (Subsection 3.4 below) that can be evaluated analytically. This allows us to easily measure deviations and compare the heat imprint of N to particular model cases. Local analysis and set-up As mentioned above our analysis is local. For each clean data point x we consider a ball B(x, r) centered at x with radius r and perform all our computations there. In particular, a free Brownian motion starting at x and defined over a maximal time interval [0, t] will on average travel a distance of √(nt) (Appendix, Subsection A.1). This suggests coupling r and the maximal Brownian running time t via r = √(nt) (cf. Fig. 2), so that, if not stopped by boundary impact, Brownian motion will, on average, reach the sphere ∂B(x, r) by its maximal stopping time. 3.2 AN ISOPERIMETRIC AND ISOCAPACITORY PERSPECTIVE Isoperimetric results Isoperimetric estimates will be the starting baseline (Ford et al. (2019)) to detect low levels of curvature and boundary flatness. For background on isoperimetric results we refer to the Appendix, Subsection A.4. Let us start by defining the relative error volume µ(x, r) := Vol(E(y) ∩ B(x, r)) / Vol(B(x, r)). (2) We recall the so-called Gaussian isoperimetric inequality Borell (1975); Ford et al.
(2019): d̃ ≤ −r Φ^{−1}(µ) / √n, µ ≤ 1/2, (3) where Φ^{−1} denotes the inverse standard normal c.d.f. and where d̃ = d(x̃, Nf) denotes the median distance with x̃ varying normally and concentrated in the ball B(x, r), and d̃ = 0 if µ ≥ 1/2. Here the isoperimetric result is rigid in the sense that equality in (3) occurs only if E(y) is a half-space. In Ford et al. (2019) the authors demonstrate that defense training mechanisms lead to decision boundaries that saturate this isoperimetric inequality, i.e. in this isoperimetric sense, the decision boundary N becomes locally closer to being a flat hyperplane. We define the ratio between the LHS and RHS in eq. (3) as the isoperimetric saturation. Isocapacitory results In our context of hitting probabilities (eq. (1)), results in potential theory allow us to prove isocapacitory bounds which are similar in spirit to isoperimetric bounds. More precisely one has: µ(x, r) ≤ cn ψ(x, t)^{n/(n−2)}, (4) where cn is an appropriate constant depending on the dimension n, and r = √(nt). The proof relies on potential theory tools (capacity) and can be found in Appendix, Proposition A.3. Motivated by the above isoperimetric saturation results, one of our main goals is to study how µ compares to ψ(x, t). To this end we define the isocapacitory saturation τ as τ(x, r) := ψ(x, t)^{n/(n−2)} / µ(x, r). (5) The basic guiding heuristic is that high values of τ indicate that E(y) has a very low volume in comparison to its boundary size and respective heat emission. This is the case whenever E(y) is a very thin region with a well-spread boundary of large surface area - e.g. a set that resembles thin spikes entering the ball B(x, r). In contrast, lower values of τ should indicate a saturation of the isocapacitory inequality (4) and imply that E(y) has a volume that is more comparable to its heat emission - e.g. thicker sets with tamer boundary. To quantify this intuition, we explicitly evaluate τ for some model scenarios (Subsection 3.4). 3.3 THE NOVEL INFORMATION GIVEN BY HEAT DIFFUSION Distances vs. hitting probabilities As discussed above, several works investigate decision boundaries in terms of distance-based analysis (Ford et al. (2019); Fawzi et al. (2016); Karimi & Tang (2020); Karimi et al. (2019)). We remark that our analysis based on hitting probabilities augments and extends the mentioned distance-based approaches. Although related, the two concepts are not equivalent. A guiding example is given by E(y) being a dense collection of "thin needles" (Appendix, Subsections A.4, A.5); in such a scenario the average distance to N is very small, as well as the chance a Brownian particle will hit N. On the other hand, if N is a dense collection of hyperplanes, the average distance to N is again small, but Brownian motions almost surely will hit N. In this sense, evaluating hitting probabilities yields a different perspective than is available from distance-based analysis and sheds further light on the size and shape of the decision boundary, particularly with regard to its capacity and curvature features. Isoperimetric vs. isocapacitory saturation Another demonstration of the additional information obtained through τ is given by almost flat shapes in higher dimensions that saturate isoperimetric bounds (Appendix, Subsection A.4). In these scenarios small geometric deformations can have a significant impact on τ, and at the same time almost preserve isoperimetric bounds. In other words τ provides an additional level of geometric sensitivity. We discuss this further in Section 4.
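To fix the reading of eqs. (3)-(5), here is a minimal sketch (not the paper's code) of how the two saturations could be computed from already-estimated quantities; the estimates of ψ, µ and the median distance d̃ are assumed to come from the sampling procedures described in Section 3 and Appendix C.3, and the function names are illustrative.

```python
import numpy as np
from scipy.stats import norm


def isoperimetric_saturation(d_med: float, mu: float, r: float, n: int) -> float:
    """Ratio of the two sides of the Gaussian isoperimetric inequality (3).

    d_med : estimated median of d(x_tilde, N) over Gaussian perturbations x_tilde of x,
    mu    : relative error volume mu(x, r) from eq. (2),
    r, n  : ball radius and ambient dimension.
    Values close to 1 indicate a locally flat (half-space-like) boundary.
    """
    if mu >= 0.5:
        return float("nan")  # eq. (3) sets the median distance to 0 in this regime
    rhs = -r * norm.ppf(mu) / np.sqrt(n)
    return d_med / rhs


def isocapacitory_saturation(psi: float, mu: float, n: int) -> float:
    """tau(x, r) = psi(x, t)^{n/(n-2)} / mu(x, r), eq. (5), for n >= 3."""
    return psi ** (n / (n - 2)) / mu


# Toy usage with made-up estimates (illustrative numbers only):
print(isoperimetric_saturation(d_med=0.4, mu=0.01, r=1.0, n=32))
print(isocapacitory_saturation(psi=0.05, mu=0.01, n=32))
```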
The effect of curvature The interplay between curvature of the decision boundary and robustness has been well studied recently, e.g. Fawzi et al. (2016); Moosavi-Dezfooli et al. (2019), where various forms of robustness (adversarial, semi-random and their ratio) have been estimated in terms of the decision boundary’s curvature. Intuitively, the differential geometric notion of curvature measures how a certain shape is bent. The precise definition of curvature involves taking second-order derivatives, which is in most cases impractical. However, in our context we show that the isocapacitory saturation τ implies certain curvature bounds. These statements exploit relations between curvature and volume and lead to pointwise and integral curvature bounds. As an illustration, we have: Proposition 3.1 (Informal). Let (x, y) ∈ X be a data sample. Then, provided that the distance d(x, N) is kept fixed, larger values of τ locally imply larger pointwise/integral curvature values. A deeper analysis with formal statements and additional details is provided in Appendix, Subsection A.6. The advantages that curvature yields for some types of compression schemes and generalization bounds are also investigated in detail in Appendix, Section B. 3.4 MODEL DECISION BOUNDARIES: HYPERPLANES, WEDGES, CONES AND “SPIKY” SETS Given a certain geometric shape, one is often faced with questions as to how flat or spherical the given geometry is. To this end, a central technique in geometric analysis is comparing to certain model cases - e.g. a sphere, plane, saddle, etc. After having introduced τ and its basic traits we now evaluate it for several model cases (flat hyperplanes, wedges, cones, balls and "spiky" sets). Each of these model cases illustrates a distinguished τ-behaviour: from "tame" behaviour (hyperplanes, balls) to explosion (thin cylinders, "needles and spiky" sets). Hence, having comparisons to these model cases and given a decision boundary, one can quantify how far the given surface is from being one of the models. We start by discussing the flat linear case: Lemma 3.2. Let (x, y) be a data sample and suppose that E(y) forms a half-space at a distance d from the given data point x ∈ Rn. Then τ(x, r) = 2 Φ(−d/√t) · Vol(B(x, r)) / Vn(d, r), (6) where Φ(s) is the c.d.f. for the standard normal distribution, and Vn(d, r) is the volume of the smaller n-dimensional solid spherical cap cut off at distance d from the center of a ball of radius r. The computation uses standard reflection principle techniques. Figure 3 provides an experimental illustration of Lemma 3.2. Another illuminating model is given by a "spiky" set - e.g. a thin cylinder, which is in some sense the other extreme. We have Lemma 3.3 (Appendix, Subsection A.5). Suppose that E(y) is a cylinder of height h and radius ρ that enters the ball B(x, r). Then τ ↗ ∞ as ρ ↘ 0. Further comparison results for additional model cases are given in Appendix, Subsection A.5. 4 ADVERSARIAL ATTACKS AND DEFENSES Background and set-up We now analyze how strategies for improving adversarial and noise shift robustness affect the decision boundary’s heat diffusion properties. In particular, we keep track of Brownian hitting probabilities ψ and the isocapacitory saturation τ. On one hand, we can view ψ as a capacitory robustness metric against continuous interpolation attacks given by Brownian noise (see also Section 1).
On the other hand, Subsection 3.4 indicates how the behaviour of τ reveals deviation from the case of a flat or "spiky" and curvy decision boundary. Our empirical analysis uses the well-known CIFAR10 and MNIST datasets (details, preprocessing and enhancements are given in Appendix, Subsection C.5). For CIFAR10, we used the Wide-ResNet-28-10 (Zagoruyko & Komodakis (2016); Ford et al. (2019)) and ResNets with 32, 44 and 56 layers (He et al. (2016)). For MNIST, we selected a LeNet-5 and additional CNN architectures. Motivated by previous work (e.g. Ford et al. (2019)), we perform 3 types of training: ordinary stochastic gradient descent (ADAM optimization), training with Gaussian noise data augmentation and training with adversarial defense strategies (FGSM and PGD methods, see also Appendix, Section C.4 for details and remarks on robustness). Detailed outline of the numerics behind Brownian motion sampling, isoperimetric/isocapacitory saturation and relative volume sampling are given in Appendix, Subsection C.3. Analysis of results Recent results (Ford et al. (2019); Schmidt et al. (2017)) have shown qualitative differences between the adversarially robust boundaries of MNIST and CIFAR-10, which also impact the experimental findings in this work. In short, a robust decision boundary is in the MNIST case less spiky in comparison to CIFAR. For more details we refer to Appendix, Subsection C.2. In Fig. 4 we collect the statistics of the WRN and LeNet models on CIFAR10 and MNIST, respectively. On one hand, we confirm previous results (Ford et al. (2019); Fawzi et al. (2016)) implying the "flattening-of-boundary" phenomenon: noisy and adversarial training appear to improve and saturate isoperimetric bounds. Furthermore, the ball B(x, r) realizing relative error volume µ of 1% is on average scaled up for adversarial and, especially, noisy training. On the other hand, an intriguing behaviour is observed for the decision boundary’s heat diffusion traits. The isocapacitory saturation τ does not appear to concentrate around the value corresponding to a flat hyperplane: defense training strategies, both FGSM and PGD-based, may not have a significant impact on the behaviour of τ by forcing it to converge to the case of a flat decision boundary (shown as horizontal red punctured line). Put differently, the chance that a continuous Brownian perturbation will find an adversarial example (scaled to the appropriate ball B(x, r)) will not be significantly altered on average (see Appendix, Subsection C.7 for a visual reference). However, it appears that noisy training consistently delivers lower values of τ - intuitively, this is expected as the decision boundary is adjusted in terms of adding Gaussian "blobs", thus naturally being rounder. Geometrically, the sensitivity of τ to small perturbations in almost flat surfaces (Subsection 3.2) indicates that locally around clean (unperturbed) data points an amount of curvature and more complex geometry are still retained. Of course, this amount is not as large as to violate saturation of isoperimetric bounds and robustness comparability results in the sense of Fawzi et al. (2016). For example, in the case of CIFAR10 a simple geometric model surface that has a similar τ -behaviour (as for the adversarial and noisy training) is given in (Appendix, Subsections A.4, A.5): considering a data point x, an almost flat decision boundary that is concavely bent w.r.t. x with approximate curvature of ≈ 1/(12.3r). 
These observations reveal finer properties concerning decision boundary flattening due to defense training: in particular, noisy training appears to flatten decision boundaries and slightly bend them concavely w.r.t. to the clean data points. Further results for ResNet models and CNN are provided in (Appendix, Subsection C.7). Spiky sets and control on τ In Fig. 4 large outlying values of τ are filtered out. However, values of τ larger than 10 can occupy up to 1.3% for ordinary training and 2.1%, 2.6% for adversarial, noisy training, respectively. It follows, that the geometry of high-dimensional decision boundaries does not admit too many high-curvature (see also Proposition 3.1) spiky regions of low volume and high heat emission (high surface area) in the sense of Subsections 3.2, 3.4. However, it appears that defense training can increase the number of such spiky regions: one might explain such behaviour by seeing defense training as a bundle of additional geometric conditions that sometimes are not able to agree and thus lead to a more degenerate (singular) geometry. Further, with respect to the initial analysis of Fig. 4, a natural question is whether one can control τ along with the isoperimetric saturation - ultimately, one hopes to design better decision boundaries (flatter, or appropriately curved Moosavi-Dezfooli et al. (2019)) eventually leading to more robustness. However, getting a tight control on τ could be a difficult task. It is, indeed, possible to obtain some basic grip on τ : we trained a LeNet-5 architecture on MNIST that exhibited significantly increased τ values and preserved isoperimetric saturation (statistics are shown as the rightmost boxplot in Fig. 4). Similar to many adversarial defenses, the training consisted in augmenting the dataset with attacks given in this case by Brownian paths. However, it seems difficult to force τ to concentrate around the flat-case value, as well as to obtain competitive robustness of the model. On one hand, this is explained via the need to control heat diffusion through Brownian motion - the mentioned naive method is not able to capture the hitting properties sufficiently well; on the other hand, as discussed above heat diffusion properties can be far more sensitive than isoperimetric saturation w.r.t. minor geometric perturbations. 5 GENERALIZATION BOUNDS IN TERMS OF HITTING PROBABILITIES Compression, noise stability and generalization Recent advances (Arora et al. (2018); Suzuki et al. (2018; 2020)) indicate that generalization can be related to compression and noise stability. The guiding strategy is: (1) a large DNN f that is stable against (layer-wise) noise injections admits an effective compression to a simpler model f̃ which is almost equivalent to f . Intuitively, the noise stability absorbs the defects introduced by compression; (2) concentration results imply generalization bounds for f̃ . Admittedly, the generalization estimate is obtained initially for the smaller model; however, it is also possible to "transfer" the bound to f (see the discussion at the end of this Section). In this context a simple observation is that Brownian motion and its hitting probabilities can be related, respectively, to noise injection and margins of classification: small hitting probability of the decision boundary should indicate "margin-safety" and allow to compress parameters of the model more aggressively. 
However, in contrast to injecting normal noise, Brownian motion, with stopping time given by boundary impacts, is more delicate and requires further analysis of the decision boundary. In the following we propose a theoretical framework that, we hope, will augment and produce further insights into the interplay between noise stability and generalization bounds. The statements are inspired by the results in Arora et al. (2018); Suzuki et al. (2020) and we follow the notation therein. First, we propose several options for goodness of approximation (compression) in the sense of heat diffusion (Appendix, Subsection B.1). We give the following definition: Definition 1. Given a positive real number η, a classifier g is said to be an η−compression of f if∣∣ψEg(y)(x, γ2)− ψEf (y)(x, γ2)∣∣ < η (7) for all points x in the training sample, labels y and real numbers γ. Now, as mentioned above we have the following generalization bounds for the compressed model: Proposition 5.1. Let us suppose that f is approximable by g in the sense of Definition 1. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. For a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then for t1 ≤ t2 ≤ T we have L0(g) ≤ P(x,y)∼D (Cgα(x, y, t1)) ≤ P(x,y)∼X (Cf (x, y, t2)) + η +O (√ q log r m ) (8) with probability at least 1−e−q log r and L0 denoting the expected loss over the true data distribution. Taking t2 → 0 in (8), one recovers the empirical loss L̂0(f) on the RHS. In other words, the generalization of the smaller model g is controlled by hitting probabilities of the initial model f and corrections related to family capacity. The next natural question is the construction of g. Inspired by Johnson-Lindenstrauss techniques (cf. also Arora et al. (2018)) we are able to recover the following statement (thorough details are given in Appendix, Subsections B.5, B.6): Proposition 5.2 (Informal). Considering a fully connected feed-forward neural network f where some flatness conditions on the layer decision boundaries are fulfilled, there exists an η-compression g in the sense of Def. 1 whose number of parameters is logarithmically smaller than f . Finally, having the generalization estimates on the smaller model g it is natural to attempt transferring those to the initial model f - in Suzuki et al. (2020) this is achieved via certain local Rademacher complexity and "peeling" techniques. However, we choose not to pursue these bounds in the present work and assume the perspective in Arora et al. (2018) that g, being almost equivalent to f , provides a reasonable indicator of generalization capabilities. ACKNOWLEDGMENTS We would like to thank our anonymous reviewers whose advice helped improve the quality of the presentation. We are indebted to Prof. Christian Bauckhage for his constant encouragement, support and fruitful discussions. We also sincerely thank Benjamin Wulff for maintaining the outstanding computation environment at Fraunhofer IAIS - his support and coffee conversations played an essential role for our empirical analysis. In part, this work was supported by the Competence Center for Machine Learning Rhine-Ruhr (ML2R) which is funded by the Federal Ministry of Education and Research of Germany (grant no. 01IS18038B). We gratefully acknowledge this support. 
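As a small empirical reading of Definition 1, the sketch below (a minimal illustration, not part of the paper's pipeline) checks the η-compression condition on a finite grid of time scales γ², given hitting-probability estimates for the original model f and a candidate compression g at the training points; the arrays stand in for estimates produced, e.g., by the Monte-Carlo sampler of Appendix A.1, and all names are illustrative.

```python
import numpy as np


def is_eta_compression(psi_f: np.ndarray, psi_g: np.ndarray, eta: float) -> bool:
    """Empirical check of Definition 1 on a grid of points and time scales.

    psi_f[i, j] and psi_g[i, j] hold estimates of psi_{E_f(y_i)}(x_i, gamma_j^2)
    and psi_{E_g(y_i)}(x_i, gamma_j^2) for every training point x_i and every
    time scale gamma_j^2 on a chosen grid. Definition 1 asks that the two heat
    signatures differ by less than eta uniformly over points, labels and times.
    """
    return bool(np.max(np.abs(psi_f - psi_g)) < eta)


# Toy usage with fabricated estimates for 4 sample points and 3 time scales:
rng = np.random.default_rng(0)
psi_f = rng.uniform(0.0, 0.2, size=(4, 3))
psi_g = psi_f + rng.normal(scale=0.01, size=(4, 3))
print(is_eta_compression(psi_f, psi_g, eta=0.05))
```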
A APPENDIX A: HITTING ESTIMATES, SATURATION AND CURVATURE A.1 BROWNIAN MOTION AND BESSEL PROCESSES In this Subsection we introduce some basic background on Brownian motion. Definition 2 (Brownian motion). A real-valued stochastic process {ω(t) : t ≥ 0} is called a one-dimensional Brownian motion started at x ∈ R if the following hold: • ω(0) = x, • the process has independent increments, that is, for 0 ≤ t1 ≤ · · · tm the increments ω(tj)− ω(tj−1) for j = 2, · · · ,m are independent random variables, • for t ≥ 0, h > 0, the increments ω(t+ h)− ω(t) are normally distributed with expectation zero and variance h, • almost surely, the function t 7→ ω(t) is continuous. The process {ω(t) : t ≥ 0} is called a standard Brownian motion if x = 0. Finally, if ω1, · · · , ωn are independent one-dimensional Brownian motions started at x1, · · · , xn then the stochastic process ω(t) = (ω1(t), · · · , ωn(t)) is called an n-dimensional Brownian motion started at x = (x1, · · · , xn). Remark A.1. The distribution of the standard 1-dimensional Brownian motion ω(t) is normal with mean 0 and variance t. It follows that the RMSD (root mean squared displacement) of the standard n-dimensional Brownian motion is √ nt. Sampling Brownian motion simulation is prescribed directly by Definition 2. Given a step size s, number of steps k we sample a Brownian path as ω̂(k) := k∑ i=0 sXi, Xi ∼ N(0, 1). (9) By Definition 2, Var[ω(t)] = t, hence the sampling ω̂ corresponds to running a Brownian motion for time t = ks2. (10) In particular, the mean displacement of ω̂ is s √ nk. In accordance with the main text, Subsection 3.1 and Fig. 2, whenever we need to sample Brownian motion contained within the ball B(x, r) for its lifespan [0, t], we will fix the number of steps k (usually, we set k = 400) and adjust the step size s accordingly, so that r = s √ nk. Estimating hitting probabilities A straightforward empirical way to estimate Brownian hitting probability Pω [∃t0 ∈ [0, t]|ω(t0) ∈ S] of a target set S is to evaluate the steps ω̂(i), i = 0, . . . , k and check whether ω̂(i0) ∈ S for some S. Of course, the precision of this computation depends on the number of sampled Brownian paths ω̂, as well as the step size s and number of steps k. Formal statements on convergence and numerical stability could be obtained, e.g. by means of concentration/Monte-Carlo results (e.g. Proposition B.12 below); however, in practice, in our experiments we mostly worked with the regime k ≈ 104 which seemed an acceptable choice in terms of numeric stability and performance. Explicit closed-form computation of hitting probabilities is a non-trivial task, though it is possible for some model cases (main text, Lemma 3.2). Dimension 1 is special, where we have the so-called "reflection principle", which says that P ( sup 0≤s≤t ω(s) ≥ d ) = 2P (ω(t) ≥ d) . (11) For a proof of this basic statement we refer to Mörters & Peres (2010). However, in higher dimensions, there is no straightforward analog of the reflection principle, and calculating hitting probabilities of spheres leads one to the deep theory of Bessel processes. Let us consider a Brownian particle ω(t) starting at the origin in Rn and look at the real-valued random variable ‖ω(t)‖ (in the literature, these are known as Bessel processes). We are interested in the probability of the particle hitting a sphere {x ∈ Rn : ‖x‖ = r} of radius r within time t. Curiously, it seems that there is no known closed formula for such a hitting probability. 
The only formula we know of is in the form of a convergent series involving zeros of the Bessel function of the first kind, and appears in Kent (1980). For the reader interested in Kent’s formula, we also refer to associated asymptotics of zeros of the Bessel function in Watson (1944). The following heuristic is implicit in many of our calculations and motivates several of our definitions: the probability P ( sup 0≤s≤t ‖ω(s)‖ ≥ r ) (12) of a Brownian particle hitting a sphere of radius r within time t is dependent only the ratio r2/t. As a consequence, given a small η > 0 and a constant c, one can choose the constant cn in t = cnr2 small enough (depending on η) such that P ( sup 0≤s≤cnr2 ‖ω(s)‖ ≥ cr ) < η. (13) Roughly what this means is the following: for a Brownian particle, the probability of hitting even a large and nearby object may be made arbitrarily small if the motion is not allowed to run sufficiently long. A.2 HEAT DIFFUSION AND BROWNIAN MOTION DUALITY Macroscopic vs microscopic There are roughly two broad viewpoints towards the understanding of diffusion: the “macroscopic” and the “microscopic”. Macroscopically, the mechanism of diffusion can be thought of as creating a flux in the direction from greater to lesser concentration. If u(x, t) measures the intensity of the quantity undergoing diffusion, and J the flux across the boundary of a region Ω, then in the simplest model one assumes that (up to a constant) J = −∇u. Further, we have the identity ∂t ∫ Ω u(x, t) dx = − ∫ ∂Ω ν.−∇u dS, (14) where ν is the outward pointing unit normal vector to ∂Ω. By applying the divergence theorem to (14), one immediately gets the heat equation ∂tu = ∆u. Here ∆ denotes the Laplace operator given by the sum of second derivatives: ∆ = ∑n i=1 ∂ 2 ii. Now, many real-life diffusion processes are the result of microscopic particles jittering around seemingly in a random manner. This motivates the microscopic viewpoint, i.e., the modelling of heat diffusion via Brownian motion of particles. We posit that a particle located at x ∈ Rn at time t0 will have the probability ψU (x, t) of being in an open set U ⊂ Rn at time t0 + t, where ψU (x, t) = ∫ U p(t, x, y) dy, (15) and p(t, x, y) is the fundamental solution of the heat equation, or more famously, the “heat kernel”. In other words, p(t, x, y) solves the heat equation{ (∂t −∆)u(x, t) = 0, u(x, 0) = δ(x− y), (16) with the Dirac delta distribution as the initial condition. Via Fourier transform, it is easy to establish that p(t, x, y) is given by p(t, x, y) = 1 (4πt)n/2 e− |x−y|2 4t . (17) This builds the bridge to pass between analytic statements on the side of the heat equation and probabilistic statements on the side of Brownian motion (see Grigor’Yan (2001), Taylor (2011)). The precise formulation of this duality is given by the celebrated Feynman-Kac theorem discussed in Subsection A.3 below. Heating up the decision boundary In our context we introduce the following heat diffusion process along the classifier’s decision boundary N : (∂t −∆)ψ(x, t) = 0, ψ(x, 0) = 0, ∀x ∈ Rn, ψ(x, t)|x∈N = 1, ∀t > 0. (18) In other words ψ(x, t) gives the heat quantity at the point x at time t given that at the initial moment t = 0 all points have a heat quantity 0 and afterwards a constant heat source of intensity 1 is applied only at the decision boundary N . As remarked above this is the macroscopic picture: the mentioned Feynman-Kac duality implies that ψ(x, t) is also the hitting probability Pω [∃t0 ∈ [0, t]|ω(t0) ∈ N ]. 
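Putting the sampling recipe of Subsection A.1 together with the duality just described, the following is a minimal Monte-Carlo sketch for estimating ψ_E(x, t); it assumes the r = √(nt) coupling of Section 3.1 and an error set given as a membership function, and the function names and the toy half-space are illustrative rather than the paper's released code. The estimate is sanity-checked against the reflection-principle value 2Φ(−d/√t) of eq. (11) and Lemma 3.2.

```python
import numpy as np
from scipy.stats import norm


def hitting_probability(in_error_set, x, t, n_paths=1000, k=200, rng=None):
    """Monte-Carlo estimate of psi_E(x, t), the probability that a Brownian
    path started at x enters the set E within time t.

    in_error_set : vectorized membership function for E on a batch of points
    x            : starting point, shape (n,)
    t            : maximal running time; the step size s satisfies t = k * s**2
                   as in eq. (10) of Subsection A.1 (the paper uses k ~ 400)
    Note: monitoring only at the k discrete steps slightly underestimates the
    continuous-time hitting probability.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    s = np.sqrt(t / k)                               # step size from eq. (10)
    steps = s * rng.standard_normal((n_paths, k, n))
    paths = x + np.cumsum(steps, axis=1)             # discretized Brownian paths
    hits = in_error_set(paths.reshape(-1, n)).reshape(n_paths, k)
    return hits.any(axis=1).mean()


# Sanity check on a half-space E = {z : z[0] >= d}, where the reflection
# principle (eq. (11)) / Lemma 3.2 give the closed form psi = 2 * Phi(-d / sqrt(t)).
n, r = 32, 1.0
t = r ** 2 / n                                       # r = sqrt(n * t) coupling
d = np.sqrt(t)                                       # boundary placed at distance sqrt(t)
x = np.zeros(n)
half_space = lambda pts: pts[:, 0] >= d
estimate = hitting_probability(half_space, x, t, rng=np.random.default_rng(0))
closed_form = 2 * norm.cdf(-d / np.sqrt(t))          # = 2 * Phi(-1), about 0.317
print(f"Monte-Carlo: {estimate:.3f}   closed form: {closed_form:.3f}")
```

Wrapping a trained classifier so that `in_error_set` returns whether each point receives a label different from the clean one turns the same routine into the capacitory robustness metric of Section 4, and combining the estimate with a Monte-Carlo estimate of µ(x, r) yields τ via eq. (5).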
A.3 THE FEYNMAN-KAC THEOREM It is well-known that given a reasonable initial condition u(x, 0) = f(x), one can find an analytic solution to the heat equation via convolution with heat kernel, et∆f(x) := p(t, x, .) ∗ f(.). This just follows from (16) by convolving directly. Now, via the duality of diffusion explained above, one expects a parallel statement on the Brownian motion side, one which computes the contribution of all the heat transferred over all Brownian paths reaching a point at time t. It stands to reason that to accomplish this, one needs an integration theory defined over path spaces, which leads us to the theory of Wiener measures. We describe the main idea behind Wiener measure briefly: consider a particle undergoing a random motion in Rn (given by a continuous path ω : [0,∞) → Rn) in the following manner: given t2 > t1 and ω(t1) = x1, the probability density for the location of ω(t2) is p(t, x, x1) = 1 (4π(t2 − t1))n/2 e − |x−x1| 2 4(t2−t1) . We posit that the motion of a random path for t1 ≤ t ≤ t2 is supposed to be independent of its past history. Thus, given 0 < t1 < · · · < tk, and Borel sets Ej ⊆ Rn, the probability that a path starting at x = 0 at t = 0, lies in Ej at time tj is∫ E1 · · · ∫ Ek p(tk − tk−1, xk, xk−1) · · · p(t1, x1, 0) dxk · · · dx1. The aim is to construct a countably-additive measure on the space of continuous paths that will capture the above property. The above heuristic was first put on a rigorous footing by Norbert Wiener. Using the concept of Wiener measure, one gets the probabilistic (microscopic) description of heat diffusion, which is the content of the celebrated Feynman-Kac theorem: Proposition A.2. Let Ω ⊆ Rn be a domain, with or without boundary (it can be the full space Rn). In case of a boundary, we will work with the Laplacian with Dirichlet boundary conditions. Now, let f ∈ L2(Ω). Then for all x ∈ Ω, t > 0, we have that et∆f(x) = Ex (f (ω(t))φΩ(ω, t)) , (19) where ω(t) denotes an element of the probability space of Brownian paths starting at x, Ex is the expectation with regards to the Wiener measure on that probability space, and φΩ(ω, t) = { 1, if ω([0, t]) ⊂ Ω 0, otherwise. For a more detailed discussion, see Georgiev & Mukherjee (2018a). A.4 ISOPERIMETRIC AND ISOCAPACITORY RESULTS Isoperimetric bounds Isoperimetric inequalities relating the volume of a set to the surface area of its boundary have given rise to a wealth of results Burago & Zalgaller (1988). Given a set M with boundary ∂M , the basic pattern of isoperimetric inequalities is: Vol(M) ≤ c1 Area(∂M) n n−1 , (20) where c1 is an appropriate positive constant depending on the dimension n. In many cases, equality (or saturation in the sense of almost equality) in (20) is characterized by rather special geometry. For example, classical isoperimetric results answer the question, which planar set with a given circumference possesses the largest area, with the answer being the disk. As discussed in the main text, isoperimetric considerations have recently lead to significant insights about decision boundaries of classifiers subject to adversarial defense training mechanisms Ford et al. (2019) by revealing flattening phenomena and relations to robustness. Isocapacitory bounds As mentioned in the main text, one can prove types of isocapacitory bounds that resemble the isoperimetric ones: roughly speaking, these replace the area term with suitable Brownian hitting probabilities. We have the following result (cf. also Georgiev & Mukherjee (2018a)): Proposition A.3. 
Let B(x, r) ⊂ Rn, n ≥ 3, and let E ⊂ B(x, r) denote an “obstacle”, and consider a Brownian particle started from x. Then the relative volume of the obstacle is controlled by the hitting probability of the obstacle: Vol(E) Vol(B(x, r)) ≤ cn (ψE(x, t)) n n−2 . (21) Here, cn is a positive constant whose value is dependent only on n provided the ratio between r2 and t is suitably bounded. In particular, in the regime r2 = nt, we have that cn = ( Γ ( n 2 − 1 ) /Γ ( n 2 − 1, n 4 )) n n−2 . Here, Γ(s, x) represents the upper incomplete Gamma function Γ(s, x) := ∫ ∞ x e−tts−1 dt. Proof. Recall that the capacity (or more formally, the 2-capacity) of a set K ⊂ Rn defined as Cap(K) = inf η|K≡1,η∈C∞c (Rn) ∫ Rn |∇η|2. (22) From Section 2.2.3, Maz’ya (2011), we have the following “isocapacitory inequality”: Cap(E) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n , (23) where ωn = 2π n/2 Γ(n2 ) is the (n− 1)-dimensional surface area of Sn−1. Now, we bring in the following estimate given by Theorem 3.7 of Grigor’Yan & Saloff-Coste (2002): ψE(x, t) ≥ Cap(E) ∫ t 0 inf y∈∂E p(s, x, y) ds. (24) Now, we have ψE(x, t) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 inf y∈∂E e− |x−y|2 4s ds ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 e− r2 4s ds = ω2/nn n n−2 n (n− 2)|E| n−2 n 1 4rn−2πn/2 ∫ ∞ r2 4t e−zzn/2−2 dz. After rearrangement the proposed claim follows. Intuitively, it makes sense that if the volume of a set is fixed, one can increase its hitting probability by “hammering” the set into a large thin sheet. However, it seems unlikely that after lumping the set together (as in a ball), one can reduce capacity/hitting probability any further. Moreover, isocapacitory bounds are saturated by the n-ball. It is also illustrative to compare the seemingly allied concepts of capacity and surface area. A main difference of capacity with surface area is the interaction of capacity with hitting probabilities. As an illustrative example, think of a book which is open at an angle of 180◦, 90◦, 45◦ respectively. Clearly, all three have the same surface area, but the probability of a Brownian particle striking them goes from the highest to the lowest in the three cases respectively. It is rather difficult to make the heuristic precise in terms of capacity (at least from the definition). Capacity can be thought of as a soft measure of how "spread out" or "opened-up" a surface is, and is highly dependent on how the surface is embedded in the ambient space. Isocapacitory vs isoperimetric saturation A main line of analysis in the present work addresses the interplay between isocapacitory and isoperimetric saturation. In our particular context of defense training mechanisms we observe saturation of isoperimetric bounds for the classifier’s decision boundaries - this implies that decision boundaries are not far from being flat. However, as mentioned before, it turns out that isocapacitory saturation does not concentrate around the values corresponding to hyperplanes (overall, it seems to stay well below that value). In this sense, isocapacitory saturation acts as a finer sensitive measure of deviation from flatness. A simple model geometric scenario that provides similar behaviour is illustrated in Fig. 5 and Fig. 6. A.5 MODEL CASES We first begin with the proof of Lemma 3.2. Proof. Let us select an orthonormal basis {e1, . . . , en} so that e1 coincides with the given hyperplane’s normal vector. 
A standard fact about n-dimensional Brownian motion is that the projections on the coordinate axes are again one-dimensional Brownian motions Mörters & Peres (2010). Thus, projecting the n-dimensional Brownian motion onto e1 the hitting probability of the hyperplane is the same as the probability that one-dimensional Brownian motion ω(t) will pass a certain threshold d by time t. To compute this probability we use the reflection principle (11) in conjunction with Remark A.1. Consequently, the RHS is equal to 2Φ(−d/ √ t). The computation of µ(x, r) follows by definition. Here we note that the dimension n enters only in terms of the spherical cap volume. An impression how τ behaves for different choices of n in terms of the distance d is given in Fig. 7. In particular, one observes the well-known concentration of measure phenomenon and Levy’s lemma: the volume of the spherical cap exhibits a very rapid decay as n becomes large. Moreover, experiments reveal a curious phenomenon: there is a threshold distance d0 until which τ ≈ 2 and afterwards τ explodes. In Fig. 8 we plot further interesting model cases where the error set forms a wedge (the region between two intersecting hyperplanes) or a cone. Spiky sets As discussed in the main text, one observes a high isocapacitory saturation τ for the so-called "spiky" sets - these are sets of relatively small volume and relatively large/dense boundary. Theoretically, a guiding model case in this direction is given by Lemma 3.3 in the main text, whose proof we now record. Proof. Let Tρ denote the ρ- tubular neighborhood of a line segment of length h inside Rn. Clearly, Tρ ∼= B(0, ρ)× [0, h], where B(0, r) is a ρ-ball inside Rn−1. By the well-known process of Steiner symmetrization in Rn, it is clear that the expression for capacity in (22) will be minimized by a function that is “radially symmetric” around the central axis of the tube Tρ, that is f(x, y) = f(|x|), where x ∈ B(0, ρ), y ∈ [0, h]. Then, as we scale ρ→ λρ, where λ↘ 0, Cap (Tλρ) ∼ λn−3 Cap (Tρ) (which is seen directly from the definition (22)), whereas the volume scales as |Tλρ| = λn−1 |Tρ|. Now assume that the cylinder Tρ is inside the closed ball B(x, r) ⊂ Rn, the central axis of Tρ is pointing towards x, and Tρ is touching the boundary of B(x, r). To pass from capacity to hitting probability of the set Tρ, we use that Grigor’Yan & Saloff-Coste (2002): Cap(Tρ)r 2 Vol(B(x, r)) e−C r2 t ≤ ψTρ(x, t). (25) Finally, using the definition of τ and putting the above estimates together, one sees that in the time regime of O(r2), τ scales like λ−2/(n−2), and hence, τ ↗∞ as λ↘ 0. See also Figure 8 for a visual discussion of the isocapacitory saturation for the model cases of wedges and cones. A.6 CURVATURE ESTIMATES IN TERMS OF ISOCAPACITORY SATURATION The geometric concept of curvature has a rich history and plays a central role in differential geometry and geometric analysis. There are several notions of curvature in the literature, ranging from intrinsic notions like sectional, Ricci or scalar curvatures to extrinsic (that is, dependent on the embedding) notions like principal curvatures and mean curvature, which are encoded in the second fundamental form. In this note we use a somewhat “soft” definition of curvature, following previous work Fawzi et al. (2016); Dezfooli et al. (2018). 
Suppose the decision boundary Nf is sufficiently regular (C2 is enough for our purpose) and it separates Rn into two components R1 := {f > 0} and R2 := {f < 0}, corresponding to a binary classification (the construction in the multi-label case is analogous). For a given p ∈ Nf , let rj(p) denote the radius of the largest sphere that is tangent to Nf at p, and fully contained inRj . Then, one defines the curvature κ at p as κ(p) = 1/min (r1(p), r2(p)) . (26) See Fig. 10 for a geometric illustration. However, it turns out that most notions of curvature are quite subtle (see Fawzi et al. (2016)) and at this point, seemingly more cumbersome and intractable to handle experimentally. We will take an indirect approach, and attempt to read off the effect of and on curvature via the isocapacitory saturation τ . Again, we begin with the model cases: we first study the behaviour of curvature κ if τ achieves its least possible value. We start by fixing some notation. As before let us consider a ballB(x, r) with an error set E ⊂ B(x, r) and boundary N = ∂E (clearly our main case of interest is E = E(y) ∩B(x, r)). Let us denote the the distance d = d(x,N ) and suppose the point y ∈ N realizes this distance, i.e. d(x, y) = d. To rule out some degenerate cases and ease the analysis we introduce the following assumption: Assumption: The hypersurface N and the point x are on different sides of the tangent hyperplane H∗ := TyN (cf. Fig. 11). This assumption is also technically important, as otherwise low values of τ will be produced by annuli surrounding x. With that in place, we have the following rigidity result: Proposition A.4. Let us fix the distance d = d(x,N ) and suppose the assumption above holds. Then the least possible value of τ is attained only if the curvature κ of the hypersurface N is 0. Proof. As above letH∗ be the tangent hyperplane at distance d from x, and let C denote the (smaller) spherical cap formed by H∗ ∩B(x, r). The proof relies on the following variational argument. If N is not the same as H∗, then N ⊆ C, with y ∈ N ∩H∗. We wish to argue then one can perturb N infinitesimally to decrease the value of τ , so the only minimizer of the above expression has to be H∗. The basic idea is to cut out a small piece pv around v and paste it in the region of around ṽ (Fig. 11). We say that N has positive curvature at some point z if the ball defining the curvature at z and the point x lie on different sides of N . The construction is as follows. Let S(x, s) be the (n− 1)-sphere centered at x with radius s. We consider two cases: Case I: Let us suppose that there exist s1 < s2 ≤ r and points v, ṽ ∈ N such that the curvature of N at v ∈ N ∩ S(x, s1) is greater than the curvature at ṽ ∈ N ∩ S(x, s2). Let us, moreover, choose the infimum among such s1 and the supremum among such s2. To define the mentioned piece pv , we consider two small balls B(v, ε), B(ṽ, ε) (where ε s2 − s1), and cut out a set pv = E ∩ B(v, ε) such that ∂ (E \B(v, ε)) is congruent to N ∩ B(ṽ, ε) (this is possible due to the curvature assumptions at v, ṽ). Then, we define the new error set E′ = E∪pṽ \pv and the boundaryN ′ = ∂E′, where pṽ represents the image of pv under the rigid motion and attached inside B(ṽ, ε) (see Fig. 11). It is now clear that |E| = |E′|, but ψE′(x, T ) < ψE(x, T ) for all T > 0. 
The last inequality follows from the evaluation of the explicit heat kernel that defines hitting probability ψ as stated by Feynman-Kac duality: ψE(x, T ) = ∫ T 0 ∫ E 1 (4πt)n/2 e− (x−y)2 4t dy dt > ∫ T 0 ∫ E′ 1 (4πt)n/2 e− (x−y)2 4t dy dt = ψE′(x, T ). It follows from the definition of τ that τE ≥ τE′ . Case II: If Case I is not satisfied, then, similarly, we choose two points v, ṽ, but instead of defining the piece pv by intersection with a small ball around v we select pv as a “concavo-convex lens shape” domain, where the curvature on the concave “inner side” of pv of the lens is greater than that on the convex outer side. As before, we attach a rigid motion image of pv inside B(ṽ, ε). The rest of the argument is similar to Case I. With reference to our previous discussion of spikes, it heuristically makes sense that a spike must have reasonably high curvature (it can have high curvature on the average, or if it is flat at most places, then have a sharp needle like end where the curvature is very high). In the same setting as Proposition A.4 let us, moreover, for simplicity assume that N is the graph of a function over the tangent hyperplane H∗ (Fig. 11). Proposition A.5. In the above setting let us fix the value of d. Then, if the maximum curvature κmax of N is sufficiently high (greater than some universal constant), then it satisfies κmax ≥ τ 1 n r ( Φ ( − d√ t ))− 1n−2 , (27) where Φ denotes the c.d.f. of the standard normal distribution. If a point attaining this maximum curvature is within the half concentric ball B(x, r/2), then κmax satisfies the stronger estimate κmax ≥ τ 1 n (r − d) r n n−1 ( Φ ( − d√ t ))− n (n−1)(n−2) . (28) Proof. Recalling the definition of the isocapacitory saturation τ , we will bound the numerator (resp. denominator) of τ from above (resp. below). First, for the numerator ψE(x, t) we will use a basic monotonicity property of hitting probabilities stating that for two sets A ⊆ B one has ψA(x, t) ≤ ψB(x, t) - this follows directly from the definition of ψ. Now, since E ⊆ C where C is the smaller spherical cap of B(x, r) ∩H∗, we have ψE(x, t) ≤ ψC(x, t). However, recalling the explicit form of ψC from Lemma 3.2 of the main text, we have ψE(x, t) ≤ Φ ( − d√ t ) . Second, to bound the denominator of τ (i.e. Vol(E)), we observe that if κmax is large enough, by definition E contains a ball of radius 1κmax , and Vol(E) ≥ ωn κnmax where ωn denotes the volume of unit n-dimensional ball. That finally implies, τ ≤ ( Φ ( − d√ t )) n n−2 Vol(B(x, r)) Vol(E) ≤ ( Φ ( − d√ t )) n n−2 rnκnmax, which proves (27). If a point of maximum curvature is inside a concentric ball of radius r/2, thenE contains≈ κmax(r−d)2 balls of radius 1κmax , which implies that Vol(E) ≥ κmax(r − d) ( ωn κnmax ) . The rest of the proof is similar. Now, we give a curvature estimate which works in any regime, without any restrictions. The tradeoff is a global average bound of the Lp-type rather than pointwise estimates. Proposition A.6. In the setting as above, let us fix the distance d = d(x,N ). At each point of N , let us denote by κ the maximal sectional curvature of N at that point. The following estimate holds: ‖K‖L1 ≥ Vn(d, r)− 2ωnr nΦ ( − d√ t ) τH , (29) where Vn(d, r) denotes the volume of the smaller spherical cap at distance d, the constant ωn denotes the volume of unit ball in Rn, and the function K is an integral function of the curvature κ over lines (defined in (31) below). Proof. Again, we suitably bound the numerator and denominator of τ . 
Starting with the numerator, as explained in Proposition A.5, we have by monotonicity ψE(x, t) ≤ 2Φ ( − d√ t ) . (30) To bound the denominator of τ we proceed as follows. Let N be the graph of the function g̃(x1, · · · , xn−1), where the variables xj are taken from the hyperplane H∗ (Fig. 11) at distance d from x; the point at which N touches this hyperplane is taken as the origin. Let ϕ be a smooth cut-off function defined on the hyperplane such that ϕ ≡ 1 on the set S of all (x1, · · · , xn−1) such that g̃(x1, · · · , xn−1) ∈ B(x, r), and ϕ ≡ 0 outside the -tubular neighborhood of S. Finally, let g := ϕ g̃. Now we see that, letting a = (r2 − d2)1/2, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 g (ρ, θ) ρ n−2 dρ dθ. Now, if η denotes the unit vector in the direction of a fixed (ρ, θ), observing that g (0) = 0, we have by the fundamental theorem of calculus g (ρ, θ) = ∫ 1 0 ∂tg (tρη, θ) dt. In turn, applying the fundamental theorem a second time and observing that ∇g (0) = 0, we have that g (ρ, θ) = ∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt. Putting everything together we get, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 (∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt ) ρn−2 dρ dθ. Now, we define the following integral quantity: K (ρ, θ) = ∫ 1 0 ∫ 1 0 |κ (stρη, θ)| ds dt. (31) Noting that the maximum sectional curvature bounds the second derivatives, finally we have that Vn(d, r)−Vol(E) ≤ ‖K ‖L1 . (32) To obtain (29) we now put all the above estimates together and let ↘ 0. B APPENDIX B: GENERALIZATION BOUNDS AND COMPRESSION SCHEMES Background A main line of ML and statistical inference research addresses questions of generalization. To set the stage we start with some notation. Let us suppose that the dataset X is sampled from a probability distribution D, i.e. (x, y) ∼ D. Following conventions from the literature Arora et al. (2018) we define the expected margin loss of a classifier f by Lγ(f) := P(x,y)∼D [ f(x)[y] ≤ γ + max j=1,...,k;j 6=y f(x)[j] ] . (33) We use the notation L̂γ to denote the expected empirical margin loss over the given data set X . Finally, the generalization error is defined as Lγ − L̂γ . Quite roughly speaking, standard generalization results attempt to estimate the performance of the classifier on unseen samples (i.e. the full data distribution), thus yielding bounds of the form: Lγ1(f) ≤ L̂γ2(f) + F (γ1, γ2, f,X ), (34) where F is an additional term that usually depends, e.g. on the size of X , the expressiveness of f and further margin information (γ1, γ2). B.1 COMPRESSION IN A HEAT DIFFUSION SENSE IMPLIES GENERALIZATION BOUNDS We first state a well-known concentration inequality due to Hoeffding which will find repeated use in the ensuing sections: Proposition B.1 (Hoeffding’s inequality). Let X1, . . . , Xn be independent random variables taking values in the interval [0, 1], and let X = 1n (X1 + · · ·+Xn) be the empirical mean of these random variables. Then we have: P ( X − E ( X ) ≥ t ) ≤ e−2nt 2 . (35) We now provide the proof of Proposition 5.1 of the main text. Proof. The strategy of proof follows well-known "weak-law-of-large-numbers" concentration techniques in a spirit similar to Arora et al. (2018). Step 1. First, we show that for a given g as |X | → ∞, P(x,y)∼X (Cg(x, y, t1))→ P(x,y)∼D (Cg(x, y, t1)) , (36) where Cg(x, y, γ2) is the event that a Brownian path starting at x hits Eg(y) within time γ2. The rate of convergence is determined through Chernoff concentration bounds. Choose α ∈ A, and let gα be the corresponding classifier. 
Attached to each sample point xj , there is a Bernoulli random variable Xj which takes the value 1 if Cgα(xj , y, γ 2) happens, and 0 otherwise. Then, the average X = 1m ∑m j=1Xj is given by the average of m i.i.d. Bernoulli random variables each of whose expectations is given by P(x,y)∼D Cgα(x, y, γ2). Furthermore, we note that if a data sample is misclassified, then the Brownian particle almost surely will hit the error set. Combining this observation with the concentration estimate (35) above, we obtain L0(gα) ≤ P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + ξ, (37) with probability at least 1− e−2ξ2m. If each classifier gα has q parameters, each of which can take r discrete values, we take ξ = √ q log r m . Step 2. The estimate from the previous step should hold for every classifier gα in the family A with large probability. This is guaranteed by a union bound and tuning the Chernoff bounds from the convergence rate. More precisely, there are rq different choices α ∈ A, and hence by taking the union of the estimate in (37), one can say that P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + √ q log r m (38) with probability at least 1− e−q log r over all α ∈ A. Step 3. Finally one uses the fact that f is approximable by at least one g = gα0 for some α0 in A. Via Definition 1 of the main text, one sees that P(x,y)∼X ( Cgα0 (x, y, γ 2) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η, which finally gives that with probability at least 1− e−q log r, we have L0(g) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η +O (√ q log r m ) . (39) Remark B.2. As noted, a classifier f classifies a point x wrongly if and only if ψE(y)(x, t) = 1 for all time scales t. With this observation, and since (39) works for all real numbers γ, letting γ → 0, we have that with probability at least 1− e−q log r, L0(g) ≤ L̂0(f) + η +O (√ q log r m ) . This recovers a loss estimate which is similar to the estimate in Theorem 2.1 of [1]. Indeed, one can consider P(x,y)∼X ( Cf (x, y, γ 2 ) as a “soft” or probabilistic measure of classification with margin ≈ γ. When defining the notion of a compression, instead of taking a pointwise difference as in Definition 1 of Arora et al. (2018), we would like to capture the idea that the decision boundary of a good compression should be “close enough” to the decision boundary of the original classifier. In our context, this implies that their “heat signatures” at the sample points should be close enough at all time scales. As noted in the main text, Definition 1 is definitely one natural option to define goodness of compression in a heat-diffusion sense. Another natural way is to consider the Brownian motion’s running time and define a good approximation as follows: Definition 3. Given a positive real number η, a classifier g is said to be an η−compression w.r.t. hitting time of f if ψEg(y)(x, γ 2 − η) ≤ ψEf (y)(x, γ 2) ≤ ψEg(y)(x, γ 2 + η) (40) for all points x in the training sample, labels y and real numbers γ2 ≥ η. Analogously, we have the following Proposition B.3. Let us suppose that f is approximable by g in the sense of Definition 3. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. As before, for a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then we have L0(g) ≤ P(x,y)∼D ( Cgα(x, y, γ 2 − η) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) +O (√ q log r m ) (41) with probability at least 1− e−q log r. The proof proceeds similarly as above. 
Letting γ2 → η gives us L0(g) ≤ P(x,y)∼X (Cf (x, y, η)) +O (√ q log r m ) . (42) Again, the first term on the RHS can be interpreted as the geometric margin of classification. In particular, if the classifier f separates points by a distance of≈ √nη, then since the Brownian motion travels ≈ √nη hitting the error set will happen only if a misclassification occurred, i.e. we have P(x,y)∼X (Cf (x, y, η)) ≈ L0(f). (43) B.2 A SHARP VARIANT OF THE JOHNSON-LINDENSTRAUSS ALGORITHM Several state-of-art compression schemes utilize a dimensionality reduction in the spirit of JohnsonLindenstrauss (JL), Arora et al. (2018). In this Subsection we discuss a JL compression scheme that will later be coupled with and tuned by some heat-diffusion estimates. We begin by discussing a variant of JL (Alg. 1). Data: Original matrix A of dimension h1 × h2, β ∈ (0, 1). Result: Stochastic compressed matrix  with O ( log(h1h2)/βα 2 ) non-zero entries such that P [ ‖Âx−Ax‖ ≥ α‖A‖F ‖x‖ ] ≤ β. Start with matrix A, real number α; while i ≤ h1, j ≤ h2 do Let zij = 1 with probability pij = 2a2ij βα2‖A‖2F , 0 otherwise; Let âij = zijaij pij . end Return  = (âij). Algorithm 1: Compressing a matrix A ∈ Rh1×h2 Proposition B.4. Let A be a matrix of dimension h1 × h2. Then, one can find a compressed matrix  such that ‖Ax− Âx‖ ≤ α‖A‖F ‖x‖, with probability at least 1− β, where the number of parameters of  is O ( log(h1h2)/βα 2 ) . A proof of Proposition B.4 in the spirit of classical JL can be provided - however, here we introduce a Bernoulli scheme which is a minor modification of Algorithm 2 of Arora et al. (2018). Proof. Define the random variables zij which take the value 1 with probability pij = 2a2ij βα2‖A‖2F , and the value 0 otherwise. Define âij = zijaij pij . One can now calculate that E (âij) = aij , and Var (âij) ≤ βα2‖A‖2F . Using the above, one can further calculate that E(Âx) = Ax, and Var(Âx) ≤ ‖x‖2‖A‖2Fβα2. By Chebyshev’s inequality, this gives us that P [ ‖Âx−Ax‖ ≥ α‖A‖F ‖x‖ ] ≤ β. Now, the expected number of non-zero entries in  is ∑ i,j pij = 2 βα2 . An application of Chernoff bounds now gives that with high probability the number of non-zero entries is O ( log(h1h2)/βα 2 ) . B.3 HITTING PROBABILITY, CAPACITY SENSITIVITY AND COMPRESSION As discussed in the main text, here we use hitting probabilities associated to the decision boundary to define a concept “capacity sensitivity” of a neural net layer. The heuristic is, the less the capacity sensitivity of a layer, the greater the facility in compressing the layer to one with fewer parameters. This goes in the spirit of current state-of-art results on compression and generalization bounds (Arora et al. (2018), Suzuki et al. (2018), Suzuki et al. (2020)). In particular, in Arora et al. (2018) the authors provide the notions of noise sensitivity and noise cushions motivated by Gaussian noise injections. Our first proposed definition for "heat-diffusion noise cushions" and capacity sensitivity goes as follows: Definition 4. Let η ∼ N be distributed along a noise distribution N concentrated in ball ‖η‖ ≤ η0. We define the capacity sensitivity S(x,Ai; t) of a layer Ai at the point x as S(x,Ai; t) := Eη∼N ∣∣ψEf (φ(Ai(x+ ‖x‖η)), t)− ψEf (φ(Aix), t)∣∣∣∣ψEf (φ(Aix), t)∣∣ . (44) We denote the maximum and expected sensitivity respectively as Sm(Ai; t
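For concreteness, the Bernoulli sampling scheme of Algorithm 1 (Subsection B.2) can be sketched in a few lines of NumPy. This is only an illustrative implementation under our own choices: the function name and test sizes are hypothetical, and we clip the keep-probabilities p_ij at 1 (which preserves unbiasedness, since clipped entries are kept verbatim).

```python
import numpy as np

def bernoulli_compress(A, alpha, beta, seed=None):
    # Keep entry a_ij with probability p_ij = 2 a_ij^2 / (beta * alpha^2 * ||A||_F^2),
    # rescaled by 1/p_ij, so that E[A_hat] = A while the expected number of
    # surviving entries is O(1 / (beta * alpha^2)).
    rng = np.random.default_rng(seed)
    fro2 = np.sum(A ** 2)
    p = np.minimum(1.0, 2.0 * A ** 2 / (beta * alpha ** 2 * fro2))
    keep = rng.random(A.shape) < p
    A_hat = np.zeros_like(A, dtype=float)
    A_hat[keep] = A[keep] / p[keep]
    return A_hat

# Chebyshev guarantee of Proposition B.4: ||A_hat x - A x|| <= alpha ||A||_F ||x||
# with probability at least 1 - beta, while A_hat is very sparse.
rng = np.random.default_rng(1)
A = rng.standard_normal((256, 128))
x = rng.standard_normal(128)
A_hat = bernoulli_compress(A, alpha=0.3, beta=0.1, seed=2)
err = np.linalg.norm(A_hat @ x - A @ x)
bound = 0.3 * np.linalg.norm(A) * np.linalg.norm(x)
print(err <= bound, float((A_hat != 0).mean()))
```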
1. What are the main contributions and strengths of the paper regarding decision boundary geometry and adversarial robustness? 2. How does the paper connect decision boundary geometry, adversarial defenses, and generalization in deep neural networks? 3. What are some potential weaknesses or areas for improvement in the paper, particularly regarding clarity and conciseness? 4. How might the results of the paper relate to and complement those in the literature/references, specifically in terms of Brownian motion and heat diffusion classifiers? 5. What is the significance and interpretation of Lemmas 2.2 and 2.3, and how do they support the arguments later? 6. How does the paper handle ties in taking the argmax, and by extension, when defining E(y)? 7. What is the link between the generalization of f and g in Section 4, and how does Def 1 control L(f) in terms of hitting times? 8. Why did the authors suggest Def 1 over other definitions, and what does it mean intuitively? 9. How might the paper make a connection to heat diffusion classifiers, such as Szlam's work, to provide global insight into their decision boundaries? 10. Are there any additional suggestions or ideas that could enhance the impact of the paper?
Review
Review The contributions of the paper center on i) the introduction of diffusion-related tools for studying classifier decision boundaries; and ii) using those tools to connect decision boundary geometry, adversarial robustness, and generalization. The paper provides an analysis that provides insight into how curvature and local decision boundary geometry is influenced by adversarial defences, suggests a method for checking adversarial robustness, and then makes statements about model generalization based on geometric properties revealed by monte-carlo simulation of diffusions. Strengths: The paper brings together beautiful areas of mathematics, probability, and random processes in an effort to characterize decision boundaries of DNNs and the behavior of DNNs on unseen examples, adversarial or otherwise. Curvature, heat diffusions, diffusion geometry, compression play a role. Feynman-Kac duality is leveraged to pass from the intractable analytical methods in the literature to stochastic simulations that can be undertaken by practitioners with data. The paper appears to make some novel links between generalization and decision boundary diffusion geometry, offers an apparently novel analysis of the impact of common adversarial defenses to Brownian adversaries, and at a minimum offers some new insights into how we might think about, and interpret, complex decision boundaries learned by neural nets or other nonlinear classifiers. The paper also follows up with experiments in the context of real applications and complex models. A series of Appendices provide technical details and experimental protocols. Weaknesses: The breadth of topics, the range of tools, steps, and quantities seems to have left the authors without enough space to get it all across. While the Appendices provide valuable details, additional definitions and discussion, the main body of the paper lacks clarity, context, and sufficient organization to allow an average reader to follow, or even appreciate, some of the arguments and key contributions. This is potentially the main weakness of the paper. If this were a longer journal submission, it might even make sense to separate the work into two papers: one exploring adversarial learning, and another exploring generalization. The introduction and motivation sections of the paper could be improved by explicitly stating at a high level what the paper is contributing, and what insights will come out of the analyses. For example, the abstract states: “This leads to new insights concerning the "flattening-of-boundary" phenomenon.” What insights are in store, specifically? Imagine the paper were to be selected for a popular pod-cast, or a 5-minute lightning oral presentation at a conference: how would you distill it down to the essential contributions, components, and logical steps required to arrive at the key results? Another concern is that while adversarial defense training by studying hitting times is valuable, it feels slightly misplaced/incomplete because an adversary might (often?) follow geodesics/geometry to very quickly obtain an error sample in cases where diffusion distances are high (e.g. a dumbbell). So while it’s helpful to be able to say that a random walk has a low probability of becoming an error sample from a point x, it doesn’t necessarily give us a guarantee about a “determined” adversary, who isn’t constrained at all to follow random walks (“Brownian attacks”). Recommendation: Overall I recommend a borderline reject. 
The paper has some interesting ideas, and probably novel contributions, but needs revising for clarity and conciseness to be digestible or impactful. The paper needs to also be more specific about how the results relate to, leverage, and complement those in the literature/references. For example, the paper says in a few places that conclusions “agree with” reference [X]. Does this mean they have just rederived the same result, offering no additional insight? What is the significance of the agreement, and what does it provide to the community as a takeaway? It would be helpful to call out very concretely what is being contributed, and how it relates/differs relative to the literature. Other suggestions: Perhaps clarify early on in the paper where the Brownian motion is happening. A reader might wonder: Is it in the ambient space? Is it on a graph? Is it on a manifold parameterized by some kind of intrinsic (or local) coordinates extracted from the model? Discuss the significance and interpretation of Lemmas 2.2 and 2.3. Why are they introduced? What are they saying intuitively, and how are they going to support your arguments later? There’s very little text around them (maybe due to a space problem and some ruthless trimming to make it all fit!) For the sake of completeness, define explicitly how you intend to handle ties in taking the argmax, and by extension, when defining E(y) (i.e. so that N is a subset of E(y)). Section 4 seems to have lost sight of the overall desired result, by omitting how L(g) controls L(f). Apologies if I’ve missed something obvious, but it seems like the strategy to control generalization of f in terms of hitting times is to pass to the compressed version, and control that. So where’s the link between generalization of f and generalization of g? Does Def 1 say it? Eq (7) -- \phi_{E} should be introduced to make the definition self contained. Discuss why Def 1 is reasonable and what it means intuitively. The paper says it’s an “initial suggestion”. Why did you suggest this particular defn, over others? pg. 4 “isocapacitory” results: the analogy abruptly jumps from heat diffusion to “charge” accumulation. Try to link the two and provide a transition (without the reader having to refer to the lengthy appendices). It could be interesting to make a connection to heat diffusion classifiers (e.g. Szlam, Regularization on Graphs with Function-adapted Diffusion Processes, JMLR 2008), in which heat is instead diffused outward from the labels to classify new points, giving a potentially interesting characterization of the decision boundary (by construction). Maybe such methods can locally approximate more complex learning algorithms thereby providing global insight into their decision boundaries? UPDATE TO REVIEW FOLLOWING AUTHOR REVISIONS AND COMMENTS I thank (and commend) the authors for their detailed, point-by-point responses. The authors have made a good effort in their revisions to improve and clarify the exposition, and rectify the other comments made by the reviewers. The paper could still benefit from a deeper rewrite -- there's just so much that can be packed into a conference paper with limited real estate, and the authors are seeking to make several contributions under the umbrella of one submission (as the title suggests). So clarity suffers, and impact will suffer as a result. But, in my mind that shouldn't necessarily be a show stopper at this stage, in light of the revisions. I am therefore upgrading my recommendation.
ICLR
Title Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds Abstract In the present work we study classifiers’ decision boundaries via Brownian motion processes in ambient data space and associated probabilistic techniques. Intuitively, our ideas correspond to placing a heat source at the decision boundary and observing how effectively the sample points warm up. We are largely motivated by the search for a soft measure that sheds further light on the decision boundary’s geometry. En route, we bridge aspects of potential theory and geometric analysis (Maz’ya (2011); Grigor’Yan & Saloff-Coste (2002)) with active fields of ML research such as adversarial examples and generalization bounds. First, we focus on the geometric behavior of decision boundaries in the light of adversarial attack/defense mechanisms. Experimentally, we observe a certain capacitory trend over different adversarial defense strategies: decision boundaries locally become flatter as measured by isoperimetric inequalities (Ford et al. (2019)); however, our more sensitive heat-diffusion metrics extend this analysis and further reveal that some non-trivial geometry invisible to plain distance-based methods is still preserved. Intuitively, we provide evidence that the decision boundaries nevertheless retain many persistent "wiggly and fuzzy" regions on a finer scale. Second, we show how Brownian hitting probabilities translate to soft generalization bounds which are in turn connected to compression and noise stability (Arora et al. (2018)), and these bounds are significantly stronger if the decision boundary has controlled geometric features. 1 INTRODUCTION AND BACKGROUND The endeavor to understand certain geometric aspects of decision problems has lead to intense research in statistical learning. These range from the study of data manifolds, through landscapes of loss functions to the delicate analysis of a classifier’s decision boundary. In the present work we focus on the latter. So far, a wealth of studies has analyzed the geometry of decision boundaries of deep neural networks (DNN), reaching profound implications in the fields of adversarial machine learning (adversarial examples), robustness, margin analysis and generalization. Inspired by recent isoperimetric results and curvature estimates (Ford et al. (2019); Moosavi-Dezfooli et al. (2019); Fawzi et al. (2016)), we attempt to provide some new aspects of decision boundary analysis by introducing and studying a corresponding diffusion-inspired approach. In this note the guiding idea is to place a heat source at the classifier’s decision boundary and estimate its size/shape in terms of the amount of heat the boundary is able to emit within a given time (Fig. 1). The goal is to extract geometric information from the behavior of heat transmission. This technique of heat content seems well-known within capacity/potential theory and has led to a variety of results in spectral analysis relating heat diffusion and geometry, Jorgenson & Lang (2001); Grigor’Yan & Saloff-Coste (2002); Maz’ya (2011). However, working with such heat diffusion directly in terms of the corresponding differential equations is impractical. To this end, we note that, due to Feynman-Kac duality, the heat estimates are convertible to Brownian motion hitting probabilities. 
Thus we circumvent the need for solving intractable differential equations and instead are able to employ a straightforward Monte-Carlo sampling scheme in the ambient data space (Section 3). Background on defense training We apply the above analysis in the context of adversarial machine learning (Section 4) where one studies the interaction between an adversary and a ML system. One of the goals of the subject is to design attack/defense training strategies improving the robustness of a given ML model - in the present work we are interested in how adversarial/noise defense training are reflected geometrically. Many different metrics to estimate robustness have been proposed: on one hand, there is adversarial robustness (the probability that error samples lie very near a given data point x); on the other hand, there is corruption robustness (the probability of getting an error sample after perturbing a given data point x with some specified noise). In our context, heat diffusion naturally suggests a capacitory robustness metric: this metric is built upon the probability that Brownian motion started at a given data point x will hit error samples within a given time window. One can perceive this metric as a combination of adversarial and noise robustness (Brownian motion has continuous paths and specified stopping time determined by boundary impact). In this perspective, our work is aligned with studies of other robustness metrics and curvature results (cf. Fawzi et al. (2016) for a "semi-random" projection robustness and relations to curvature). We study the capacitory metric on the well-known CIFAR10 and MNIST datasets and observe that defense training techniques may either yield a certain (although not substantial) decrease (noise training) or fail to have a significant effect on continuous Brownian attacks overall. Surprisingly, in both cases the studied capacitory metric does not converge to the corresponding value as in the case of a flat decision boundary. Due to our comparison statements and curvature considerations, this means that locally around clean data points the geometry is in general flattened out but may still retain complexity and substantial areas of (small) non-vanishing curvature. In other words, from the point of view of our heat diffusion metrics, decision boundaries locally exhibit non-flat behaviour. Background on generalization estimates Finally, we observe that the collected heat/hittingprobability metrics can further be used to obtain generalization bounds where, in a nutshell, one evaluates the performance of a model on unseen data in terms of the performance over a given sampled data, the model’s expressiveness, dimension, etc. In this regard, we view decision boundary heat diffusion traits as an indicator of how noise-stable a given model is - this relates Brownian hitting bounds with recent compression-based generalization techniques in the spirit of Arora et al. (2018); Suzuki et al. (2018; 2020). More precisely, we proceed in two steps: first, we construct a "smaller" compressed model that is almost equivalent to the initial one in an appropriate heat-theoretic way; second, we obtain generalization estimates for the smaller model in terms of the decision boundary hitting probabilities (computed on the empirical dataset). Furthermore, the bounds are significantly improved under additional geometric assumptions on the decision boundary of the initial model. 
Additional related work The interplay between heat diffusion and geometry lies at the heart of many topics in geometric analysis and spectral theory (cf. Jorgenson & Lang (2001); Grigor’Yan (2001) for a far reaching overview). Some direct applications of heat diffusion techniques to zero sets of eigenfunctions are seen, for example, in Steinerberger (2014); Georgiev & Mukherjee (2018a;b). The literature on adversarial ML is vast: to name a few central works in the field, we refer to Dalvi et al. (2004); Biggio & Roli (2018); Szegedy et al. (2014). Much effort has been invested in designing and understanding strategies that will render a model robust to various attacks (e.g. Madry et al. (2018); Carlini & Wagner (2017)). In particular, the geometry of decision boundaries has been the focus of many works in the subject leading to breakthroughs in curvature estimates, boundary flatness and robustness, schemes for detecting boundary complexity, proposing adversarial attacks/defenses and diffusion based techniques towards constructing decision boundary from partially pre-labelled data (e.g. Ford et al. (2019); Fawzi et al. (2016; 2017; 2018); Dezfooli et al. (2018); Moosavi-Dezfooli et al. (2019); Karimi et al. (2019); Karimi & Tang (2020); He et al. (2018); Szlam et al. (2008)). The theory of generalization bounds has formed a classical main line of ML and statistical inference research (Vapnik (1999)). In this direction central questions address the generalization properties of heavily over-parametrized deep neural network models. According to some classical VC-dimension results such models should overfit the data and generalize poorly. Extensive research effort has been invested in developing appropriate sharper techniques to explain generalization of DNN models: on one hand there are the methods based on norm estimation whose bounds are not explicitly using the number of the network’s parameters (see Golowich et al. (2019); Neyshabur et al. (2015; 2018); Wei & Ma (2019); Bartlett et al. (2017), etc). On the other hand, recent results based on compression and VC-dimension can lead to sharper bounds (Arora et al. (2018); Suzuki et al. (2018; 2020)). 2 CONTRIBUTIONS, CONTEXT AND PAPER OUTLINE An outline of our essential contributions is given as follows: 1. We analyze decision boundary geometries in terms of novel heat diffusion and Brownian motion techniques with thorough theoretical estimates on curvature and flattening. 2. We show, both theoretically and empirically (in terms of adversarial scenarios on stateof-art DNN models), that the proposed heat diffusion metrics detect the curvature of the boundary; they complement, and in some respects are more sensitive in comparison to previous methods of boundary analysis - intuitively, our heat driven metrics are sharper on a finer scale and can detect small-scale "wiggles and pockets". As an application, we are thus able to provide evidence that adversarial defenses lead to overall flatter boundaries but, surprisingly, the heat traits do not converge to the corresponding flat-case, and hence, finer-scale non-linear characteristics (e.g. "wiggles and pockets") are persistent. 3. Moreover, the preservation of "wiggles and pockets" means that susceptibility to naive Brownian motion attacks is not significantly decreased via adversarial defense mechanisms. 4. Finally, we introduce a novel notion of compression based on heat diffusion and prove that stability of heat signature translates to compression properties and generalization capabilities. 
In terms of context, the present note is well-aligned with works such as Ford et al. (2019); Dezfooli et al. (2018); Fawzi et al. (2016; 2018). Among other aspects, these works provide substantial analysis of the interplay between geometry/curvature and adversarial robustness/defenses - in particular, we use some of the these tools (e.g. isoperimetric saturation) as benchmarks and sanity checks. However, in contrast, in our work we provide a non-equivalent technique to address decision boundary geometry for which we provide an extensive theoretical and empirical evaluation with insights on the preservation of finer-scale traits. Intuitively, previous distance-based geometric methods could be considered as a "coarser lens", whereas the present heat-diffusion tools appear to be much more sensitive. As a large-scale example, Brownian particles emanating from a point are able to distinguish between a decision boundary which is a hyperplane at distance d and a decision boundary which is a cylinder of radius d wrapping around the point. Our notion of compression is inspired by Arora et al. (2018), and establishes a connection between the Johnson-Lindenstrauss dimension reduction algorithm with diffusion techniques. Furthermore, we bridge the proposed heat-theoretic techniques with generalization bounds in the spirit of Arora et al. (2018); Suzuki et al. (2020). In particular, this shows that overall lower heat quantities at sample points imply better generalization traits. A step-wise road map of the present work is given below: • (Subsection 3.1) We start by discussing what heat diffusion is and how it is to be evaluated - here we discuss that, via Feynman-Kac duality, one can essentially work with Brownian motion hitting probabilities. • (Subsections 3.2 and 3.3) We introduce the isocapacitory saturation τ - a heat-theoretic metric that will be used to estimate boundary flatness. Moreover, here we emphasize the properties of τ such as relations to curvature (Proposition 3.1) and the novel information obtained from heat theoretic methods in comparison to previous distance-based ones. • (Subsection 3.4) We compute τ for certain geometric model cases such as hyperplanes, cones, wedges and "spiky" sets (Lemmas 3.2 and 3.3). This allows us later to evaluate how much a given geometry resembles these model cases. • (Section 4) Next, we are in a position to evaluate and compare τ for decision boundaries of DNNs. We experimentally illustrate the effect of adversarial defense mechanisms and noise robustness on τ (PGD/FGSM on MNIST and CIFAR-10). • (Section 5) We prove that heat transmission relates to generalization bounds (Propositions 5.1 and 5.2) - in particular, lower levels of heat at sample points yield sharper generalization bounds. Finally, we complete the discussion by informally stating our compression scheme. • (Appendix) Our methods leverage several tool sets extensively. For this reason our goal in the main text is to only collect and showcase the techniques and results. However, the thorough in-depth analysis is provided in the Appendix where the reader can find all relevant proofs and further background and references. 3 MOTIVATION AND MAIN IDEAS 3.1 GEOMETRY SEEN THROUGH BROWNIAN MOTION AND DIFFUSION Notation Let us consider a dataset X := {(xi, yi)}mi=1 consisting of feature points xi ∈ Rn and their corresponding labels y ∈ {1, . . . , k}. Let us suppose that a k-label classifier f : Rn → Rk labels a point x ∈ X as arg maxi∈[1,k] f(x)[i]. 
The decision boundary of f is given by N := {x ∈ Rn|f(x) has two or more equal coordinates} (cf. Fig. 2). Assuming f is sufficiently regular, one thinks of N as a collection of hypersurfaces in Rn. Further, for a given target label y we define the target (error) set E(y) as the set of points on which the classifier’s decision is different from y, i.e. E(y) := {x ∈ Rn| arg maxi∈[1,k] f(x)[i] 6= y} (here we remark that if arg max is set-valued at x with several coordinates obtaining the maximum value, then by convention x is contained in E(y)). Clearly, if a given data sample (x0, y0) ∈ X is correctly classified by f , then x0 is outside of the error set E(y0). Finally, we note that the boundary of E(y) coincides with E(y) ∩N and moreover, N is the union of the boundaries of E(y) for all labels y. Feynman-Kac duality and hitting probabilities As mentioned in Section 1 we wish to study a heat diffusion process where we place a heat source at the decision boundary N : formally, this is given by a heat equation with appropriate initial and boundary conditions (Appendix, Subsection A.2). Avoiding the impracticality of working with the differential equations directly, we bring forward the theorem of Feynman-Kac that relates the solution of the diffusion process to hitting probabilities of Brownian motion (Appendix, Subsection A.3). By way of notation, for an open set U ⊆ Rn, let ψU (x, t) denote the probability that a Brownian particle starting at the point x will enter U within time t. In other words, ψU (x, t) := Pω∼W [∃ t0 ∈ [0, t] | ω(t0) ∈ U ] , x ∈ X , (1) where ω denotes a Brownian motion defined over the interval [0, t] that follows the standard Euclidean Wiener distribution. The amount of heat that a point x receives from N within time t is comparable to the hitting probability that a Brownian particle starting at x will impact the boundary within time t (cf. Fig. 2). Provided that x is correctly classified this is equivalent to the probability of impacting the decision boundary. In general, we evaluate ψE(y)(x, t) (which we often denote by ψ(x, t) by minor abuse of notation) through direct sampling; however, in some model cases, e.g. E(y) being a half-space, a spherical shell or a conical set, ψ(x, t) has a concise closed form (Subsection 3.4 below) that can be evaluated analytically. This allows us to easily measure deviations and compare the heat imprint of N to particular model cases. Local analysis and set-up As mentioned above our analysis is local. For each clean data point x we consider a ball B(x, r) centered at x with radius r and perform all our computations there. In particular, a free Brownian motion starting at x and defined over a maximal time interval [0, t] will on average travel a distance of √ nt (Appendix, Subsection A.1). This suggests to couple r and the maximal Brownian running time t via r = √ nt (cf. Fig. 2), so that, if not stopped by boundary impact, Brownian motion will, on average, reach the sphere ∂B(x, r) by its maximal stopping time. 3.2 AN ISOPERIMETRIC AND ISOCAPACITORY PERSPECTIVE Isoperimetric results Isoperimetric estimates will be the starting baseline (Ford et al. (2019)) to detect low levels of curvature and boundary flatness. For some background in isoperimetric results we refer to (Appendix, Subsection A.4). Let us start by defining the relative error volume µ(x, r) := Vol(E(y) ∩B(x, r)) Vol(B(x, r)) . (2) We recall the so-called Gaussian isoperimetric inequality Borell (1975); Ford et al. 
(2019):

d̃ ≤ −r Φ^{−1}(µ) / √n, for µ ≤ 1/2, (3)

where Φ^{−1} denotes the inverse standard normal c.d.f., d̃ = d(x̃, N_f) denotes the median distance with x̃ varying normally and concentrated in the ball B(x, r), and d̃ = 0 if µ ≥ 1/2. Here the isoperimetric result is rigid in the sense that equality in (3) occurs only if E(y) is a half-space. In Ford et al. (2019) the authors demonstrate that defense training mechanisms lead to decision boundaries that saturate this isoperimetric inequality, i.e. in this isoperimetric sense, the decision boundary N becomes locally closer to being a flat hyperplane. We define the ratio between the LHS and RHS in eq. (3) as the isoperimetric saturation.

Isocapacitory results In our context of hitting probabilities (eq. (1)), results from potential theory allow us to prove isocapacitory bounds which are similar in spirit to isoperimetric bounds. More precisely, one has:

µ(x, r) ≤ c_n ψ(x, t)^{n/(n−2)}, (4)

where c_n is an appropriate constant depending on the dimension n, and r = √(nt). The proof relies on potential-theoretic tools (capacity) and can be found in Appendix, Proposition A.3. Motivated by the above isoperimetric saturation results, one of our main goals is to study how µ compares to ψ(x, t). To this end we define the isocapacitory saturation τ as

τ(x, r) := ψ(x, t)^{n/(n−2)} / µ(x, r). (5)

The basic guiding heuristic is that high values of τ indicate that E(y) has a very low volume in comparison to its boundary size and respective heat emission. This is the case whenever E(y) is a very thin region with a well-spread boundary of large surface area - e.g. a set that resembles thin spikes entering the ball B(x, r). In contrast, lower values of τ indicate a saturation of the isocapacitory inequality (4) and imply that E(y) has a volume that is more comparable to its heat emission - e.g. thicker sets with a tamer boundary. To quantify this intuition, we explicitly evaluate τ for some model scenarios (Subsection 3.4).

3.3 THE NOVEL INFORMATION GIVEN BY HEAT DIFFUSION

Distances vs. hitting probabilities As discussed above, several works investigate decision boundaries in terms of distance-based analysis (Ford et al. (2019); Fawzi et al. (2016); Karimi & Tang (2020); Karimi et al. (2019)). We remark that our analysis based on hitting probabilities augments and extends the mentioned distance-based approaches. Although related, the two concepts are not equivalent. A guiding example is given by E(y) being a dense collection of "thin needles" (Appendix, Subsections A.4, A.5); in such a scenario the average distance to N is very small, and so is the chance that a Brownian particle will hit N. On the other hand, if N is a dense collection of hyperplanes, the average distance to N is again small, but Brownian motion will almost surely hit N. In this sense, evaluating hitting probabilities yields a different perspective than is available from distance-based analysis and sheds further light on the size and shape of the decision boundary, particularly with regard to its capacity and curvature features.

Isoperimetric vs. isocapacitory saturation Another demonstration of the additional information obtained through τ is given by almost flat shapes in higher dimensions that saturate isoperimetric bounds (Appendix, Subsection A.4). In these scenarios small geometric deformations can have a significant impact on τ while almost preserving the isoperimetric bounds. In other words, τ provides an additional level of geometric sensitivity. We discuss this further in Section 4.
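In practice, ψ(x, t), µ(x, r) and τ(x, r) are estimated by plain Monte-Carlo sampling (cf. Appendix, Subsections A.1 and C.3). The sketch below illustrates one way to do this for a generic point-wise error indicator; it is a minimal illustration assuming an `is_error` oracle for the set E(y), and the sample sizes, function names and the coupling t = r^2/n are our own choices rather than the exact experimental setup.

```python
import numpy as np

def hitting_probability(is_error, x, t, n_steps=400, n_paths=2000, seed=None):
    # Estimate psi_E(x, t): fraction of discretised Brownian paths started at x
    # that enter the error set E = {z : is_error(z)} within time t.
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    s = np.sqrt(t / n_steps)                      # step size so the total variance is t
    steps = s * rng.standard_normal((n_paths, n_steps, n))
    paths = x + np.cumsum(steps, axis=1)          # positions after each step
    hits = is_error(paths.reshape(-1, n)).reshape(n_paths, n_steps)
    return hits.any(axis=1).mean()

def relative_error_volume(is_error, x, r, n_samples=20000, seed=None):
    # Estimate mu(x, r): fraction of the ball B(x, r) covered by the error set,
    # using points drawn uniformly from the ball.
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    g = rng.standard_normal((n_samples, n))
    directions = g / np.linalg.norm(g, axis=1, keepdims=True)
    radii = r * rng.random(n_samples) ** (1.0 / n)
    return is_error(x + radii[:, None] * directions).mean()

def isocapacitory_saturation(is_error, x, r, seed=None):
    # tau(x, r) = psi(x, t)^{n/(n-2)} / mu(x, r), with the coupling r = sqrt(n t).
    n = x.shape[0]
    t = r ** 2 / n
    psi = hitting_probability(is_error, x, t, seed=seed)
    mu = relative_error_volume(is_error, x, r, seed=seed)
    return psi ** (n / (n - 2)) / mu if mu > 0 else np.inf

# Example: E(y) is the half-space {z : z[0] >= d} (the flat model case of Subsection 3.4).
x0, r, d = np.zeros(16), 1.0, 0.25
tau_flat = isocapacitory_saturation(lambda z: z[:, 0] >= d, x0, r, seed=0)
```

For a neural network classifier, `is_error` would simply evaluate the network on a batch of points and compare the arg max against the clean label y, so the only interaction with the model is through forward passes.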
The effect of curvature The interplay between curvature of the decision boundary and robustness has been well studied recently, e.g. Fawzi et al. (2016); Moosavi-Dezfooli et al. (2019) where various forms of robustness (adversarial, semi-random and their ratio) have been estimated in terms of the decision boundary’s curvature. Intuitively, the differential geometric notion of curvature measures how a certain shape is bent. The precise definition of curvature involves taking second-order derivatives which is in most cases impractical. However, in our context we show that the isocapacitory saturation τ implies certain curvature bounds. These statements exploit relations between curvature and volume and lead to pointwise and integral curvature bounds. As an illustration, we have: Proposition 3.1 (Informal). Let (x, y) ∈ X be a data sample. Then, provided that the distance d(x,N ) is kept fixed, larger values of τ locally imply larger pointwise/integral curvature values. A deeper analysis with formal statements and additional details are provided in Appendix, Subsection A.6. The advantages that curvature yields for some types of compression schemes and generalization bounds is also intensely investigated in Appendix, Section B. 3.4 MODEL DECISION BOUNDARIES: HYPERPLANES, WEDGES, CONES AND “SPIKY” SETS Given a certain geometric shape, one is often faced with questions as to how flat or spherical the given geometry is. To this end, a central technique in geometric analysis is comparing to certain model cases - e.g. a sphere, plane, saddle, etc. After having introduced τ and its basic traits we now evaluate it for several model cases (flat hyperplanes, wedges, cones, balls and "spiky" sets). Each of these model cases illustrates a distinguished τ -behaviour: from "tame" behaviour (hyperplanes, balls) to explosion (thin cylinders, "needles and spiky" sets). Hence, having comparisons to these model cases and given an decision boundary, one can, quantify how far away is the given surface from being one of the models. We start by discussing the flat linear case: Lemma 3.2. Let (x, y) be a data sample and suppose that E(y) forms a half-space at a distance d from the given data point x ∈ Rn. Then τ(x, r) = 2 Φ ( − d√ t ) Vol (B(x, r)) Vn(d, r) , (6) where Φ(s) is the c.d.f. for the standard normal distribution, and Vn(d, r) is the volume of the smaller n-dimensional solid spherical cap cut-off at distance d from the center of a ball of radius r. The computation uses standard reflection principle techniques. Figure 3 depicts an experimental discussion on Lemma 3.2. Another illuminating model is given by a "spiky" set - e.g. a thin cylinder, which is in some sense the other extreme. We have Lemma 3.3 (Appendix, Subsection A.5). Suppose that E(y) is a cylinder of height h and radius ρ that enters the ball B(x, r). Then τ ↗∞ as ρ↘ 0. Further comparison results for additional model cases are given in Appendix, Subsection A.5. 4 ADVERSARIAL ATTACKS AND DEFENSES Background and set-up We now analyze how strategies for improving adversarial and noise shift robustness affect the decision boundary’s heat diffusion properties. In particular, we keep track of Brownian hitting probabilities ψ and the isocapacitory saturation τ . On one hand, we can view ψ as a capacitory robustness metric against continuous interpolation attacks given by Brownian noise (see also Section 1). 
On the other hand, Subsection 3.4 indicates how the behaviour of τ reveals deviation from the case of a flat or "spiky" and curvy decision boundary. Our empirical analysis uses the well-known CIFAR10 and MNIST datasets (details, preprocessing and enhancements are given in Appendix, Subsection C.5). For CIFAR10, we used the Wide-ResNet-28-10 (Zagoruyko & Komodakis (2016); Ford et al. (2019)) and ResNets with 32, 44 and 56 layers (He et al. (2016)). For MNIST, we selected a LeNet-5 and additional CNN architectures. Motivated by previous work (e.g. Ford et al. (2019)), we perform 3 types of training: ordinary stochastic gradient descent (ADAM optimization), training with Gaussian noise data augmentation and training with adversarial defense strategies (FGSM and PGD methods, see also Appendix, Section C.4 for details and remarks on robustness). Detailed outline of the numerics behind Brownian motion sampling, isoperimetric/isocapacitory saturation and relative volume sampling are given in Appendix, Subsection C.3. Analysis of results Recent results (Ford et al. (2019); Schmidt et al. (2017)) have shown qualitative differences between the adversarially robust boundaries of MNIST and CIFAR-10, which also impact the experimental findings in this work. In short, a robust decision boundary is in the MNIST case less spiky in comparison to CIFAR. For more details we refer to Appendix, Subsection C.2. In Fig. 4 we collect the statistics of the WRN and LeNet models on CIFAR10 and MNIST, respectively. On one hand, we confirm previous results (Ford et al. (2019); Fawzi et al. (2016)) implying the "flattening-of-boundary" phenomenon: noisy and adversarial training appear to improve and saturate isoperimetric bounds. Furthermore, the ball B(x, r) realizing relative error volume µ of 1% is on average scaled up for adversarial and, especially, noisy training. On the other hand, an intriguing behaviour is observed for the decision boundary’s heat diffusion traits. The isocapacitory saturation τ does not appear to concentrate around the value corresponding to a flat hyperplane: defense training strategies, both FGSM and PGD-based, may not have a significant impact on the behaviour of τ by forcing it to converge to the case of a flat decision boundary (shown as horizontal red punctured line). Put differently, the chance that a continuous Brownian perturbation will find an adversarial example (scaled to the appropriate ball B(x, r)) will not be significantly altered on average (see Appendix, Subsection C.7 for a visual reference). However, it appears that noisy training consistently delivers lower values of τ - intuitively, this is expected as the decision boundary is adjusted in terms of adding Gaussian "blobs", thus naturally being rounder. Geometrically, the sensitivity of τ to small perturbations in almost flat surfaces (Subsection 3.2) indicates that locally around clean (unperturbed) data points an amount of curvature and more complex geometry are still retained. Of course, this amount is not as large as to violate saturation of isoperimetric bounds and robustness comparability results in the sense of Fawzi et al. (2016). For example, in the case of CIFAR10 a simple geometric model surface that has a similar τ -behaviour (as for the adversarial and noisy training) is given in (Appendix, Subsections A.4, A.5): considering a data point x, an almost flat decision boundary that is concavely bent w.r.t. x with approximate curvature of ≈ 1/(12.3r). 
These observations reveal finer properties concerning decision boundary flattening due to defense training: in particular, noisy training appears to flatten decision boundaries and slightly bend them concavely w.r.t. to the clean data points. Further results for ResNet models and CNN are provided in (Appendix, Subsection C.7). Spiky sets and control on τ In Fig. 4 large outlying values of τ are filtered out. However, values of τ larger than 10 can occupy up to 1.3% for ordinary training and 2.1%, 2.6% for adversarial, noisy training, respectively. It follows, that the geometry of high-dimensional decision boundaries does not admit too many high-curvature (see also Proposition 3.1) spiky regions of low volume and high heat emission (high surface area) in the sense of Subsections 3.2, 3.4. However, it appears that defense training can increase the number of such spiky regions: one might explain such behaviour by seeing defense training as a bundle of additional geometric conditions that sometimes are not able to agree and thus lead to a more degenerate (singular) geometry. Further, with respect to the initial analysis of Fig. 4, a natural question is whether one can control τ along with the isoperimetric saturation - ultimately, one hopes to design better decision boundaries (flatter, or appropriately curved Moosavi-Dezfooli et al. (2019)) eventually leading to more robustness. However, getting a tight control on τ could be a difficult task. It is, indeed, possible to obtain some basic grip on τ : we trained a LeNet-5 architecture on MNIST that exhibited significantly increased τ values and preserved isoperimetric saturation (statistics are shown as the rightmost boxplot in Fig. 4). Similar to many adversarial defenses, the training consisted in augmenting the dataset with attacks given in this case by Brownian paths. However, it seems difficult to force τ to concentrate around the flat-case value, as well as to obtain competitive robustness of the model. On one hand, this is explained via the need to control heat diffusion through Brownian motion - the mentioned naive method is not able to capture the hitting properties sufficiently well; on the other hand, as discussed above heat diffusion properties can be far more sensitive than isoperimetric saturation w.r.t. minor geometric perturbations. 5 GENERALIZATION BOUNDS IN TERMS OF HITTING PROBABILITIES Compression, noise stability and generalization Recent advances (Arora et al. (2018); Suzuki et al. (2018; 2020)) indicate that generalization can be related to compression and noise stability. The guiding strategy is: (1) a large DNN f that is stable against (layer-wise) noise injections admits an effective compression to a simpler model f̃ which is almost equivalent to f . Intuitively, the noise stability absorbs the defects introduced by compression; (2) concentration results imply generalization bounds for f̃ . Admittedly, the generalization estimate is obtained initially for the smaller model; however, it is also possible to "transfer" the bound to f (see the discussion at the end of this Section). In this context a simple observation is that Brownian motion and its hitting probabilities can be related, respectively, to noise injection and margins of classification: small hitting probability of the decision boundary should indicate "margin-safety" and allow to compress parameters of the model more aggressively. 
However, in contrast to injecting normal noise, Brownian motion, with stopping time given by boundary impacts, is more delicate and requires further analysis of the decision boundary. In the following we propose a theoretical framework that, we hope, will augment and produce further insights into the interplay between noise stability and generalization bounds. The statements are inspired by the results in Arora et al. (2018); Suzuki et al. (2020) and we follow the notation therein. First, we propose several options for goodness of approximation (compression) in the sense of heat diffusion (Appendix, Subsection B.1). We give the following definition: Definition 1. Given a positive real number η, a classifier g is said to be an η-compression of f if
$$\bigl|\psi_{E_g(y)}(x,\gamma^2)-\psi_{E_f(y)}(x,\gamma^2)\bigr|<\eta \qquad (7)$$
for all points x in the training sample, labels y and real numbers γ. Now, as mentioned above, we have the following generalization bound for the compressed model: Proposition 5.1. Suppose that f is approximable by g in the sense of Definition 1. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. For a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then for t1 ≤ t2 ≤ T we have
$$L_0(g)\;\le\;\mathbb{P}_{(x,y)\sim D}\bigl(C_g(x,y,t_1)\bigr)\;\le\;\mathbb{P}_{(x,y)\sim \mathcal{X}}\bigl(C_f(x,y,t_2)\bigr)+\eta+O\!\Bigl(\sqrt{\tfrac{q\log r}{m}}\Bigr) \qquad (8)$$
with probability at least 1 − e^{−q log r}, where L0 denotes the expected loss over the true data distribution. Taking t2 → 0 in (8), one recovers the empirical loss L̂0(f) on the RHS. In other words, the generalization of the smaller model g is controlled by the hitting probabilities of the initial model f, up to corrections related to the capacity of the family A. The next natural question is the construction of g. Inspired by Johnson-Lindenstrauss techniques (cf. also Arora et al. (2018)), we are able to recover the following statement (thorough details are given in Appendix, Subsections B.5, B.6): Proposition 5.2 (Informal). For a fully connected feed-forward neural network f satisfying certain flatness conditions on the layer decision boundaries, there exists an η-compression g in the sense of Def. 1 whose number of parameters is logarithmically smaller than that of f. Finally, having the generalization estimates for the smaller model g, it is natural to attempt to transfer them to the initial model f; in Suzuki et al. (2020) this is achieved via certain local Rademacher complexity and "peeling" techniques. However, we choose not to pursue these bounds in the present work and adopt the perspective of Arora et al. (2018) that g, being almost equivalent to f, provides a reasonable indicator of generalization capabilities. ACKNOWLEDGMENTS We would like to thank our anonymous reviewers whose advice helped improve the quality of the presentation. We are indebted to Prof. Christian Bauckhage for his constant encouragement, support and fruitful discussions. We also sincerely thank Benjamin Wulff for maintaining the outstanding computation environment at Fraunhofer IAIS - his support and coffee conversations played an essential role for our empirical analysis. In part, this work was supported by the Competence Center for Machine Learning Rhine-Ruhr (ML2R) which is funded by the Federal Ministry of Education and Research of Germany (grant no. 01IS18038B). We gratefully acknowledge this support.
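Before turning to the appendices, a short hedged sketch of how Definition 1 could be probed numerically: given Monte-Carlo estimators of the hitting probabilities of f and g (as in the experimental section above), one can record the largest observed gap over the training sample and a finite grid of time scales γ². Since the definition quantifies over all real γ, the result is only an empirical lower bound on the smallest admissible η. The helper `hitting_probability` is the estimator sketched earlier, and `predict_f`, `predict_g` are assumed label functions for the two models.

```python
import numpy as np

def empirical_compression_gap(predict_f, predict_g, samples, labels, gammas):
    """Largest observed gap |psi_{E_g}(x, gamma^2) - psi_{E_f}(x, gamma^2)| over
    the given sample points and time-scale grid; g is an eta-compression of f
    (on this sample and grid) for any eta larger than the returned value."""
    worst_gap = 0.0
    for x, y in zip(samples, labels):
        n = x.size
        for gamma in gammas:
            # evaluate both models at time scale t = gamma^2,
            # i.e. ball radius sqrt(n * t) = sqrt(n) * gamma
            radius = np.sqrt(n) * gamma
            psi_f = hitting_probability(predict_f, x, y, radius)
            psi_g = hitting_probability(predict_g, x, y, radius)
            worst_gap = max(worst_gap, abs(psi_f - psi_g))
    return worst_gap
```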
A APPENDIX A: HITTING ESTIMATES, SATURATION AND CURVATURE A.1 BROWNIAN MOTION AND BESSEL PROCESSES In this Subsection we introduce some basic background on Brownian motion. Definition 2 (Brownian motion). A real-valued stochastic process {ω(t) : t ≥ 0} is called a one-dimensional Brownian motion started at x ∈ R if the following hold: • ω(0) = x, • the process has independent increments, that is, for 0 ≤ t1 ≤ · · · tm the increments ω(tj)− ω(tj−1) for j = 2, · · · ,m are independent random variables, • for t ≥ 0, h > 0, the increments ω(t+ h)− ω(t) are normally distributed with expectation zero and variance h, • almost surely, the function t 7→ ω(t) is continuous. The process {ω(t) : t ≥ 0} is called a standard Brownian motion if x = 0. Finally, if ω1, · · · , ωn are independent one-dimensional Brownian motions started at x1, · · · , xn then the stochastic process ω(t) = (ω1(t), · · · , ωn(t)) is called an n-dimensional Brownian motion started at x = (x1, · · · , xn). Remark A.1. The distribution of the standard 1-dimensional Brownian motion ω(t) is normal with mean 0 and variance t. It follows that the RMSD (root mean squared displacement) of the standard n-dimensional Brownian motion is √ nt. Sampling Brownian motion simulation is prescribed directly by Definition 2. Given a step size s, number of steps k we sample a Brownian path as ω̂(k) := k∑ i=0 sXi, Xi ∼ N(0, 1). (9) By Definition 2, Var[ω(t)] = t, hence the sampling ω̂ corresponds to running a Brownian motion for time t = ks2. (10) In particular, the mean displacement of ω̂ is s √ nk. In accordance with the main text, Subsection 3.1 and Fig. 2, whenever we need to sample Brownian motion contained within the ball B(x, r) for its lifespan [0, t], we will fix the number of steps k (usually, we set k = 400) and adjust the step size s accordingly, so that r = s √ nk. Estimating hitting probabilities A straightforward empirical way to estimate Brownian hitting probability Pω [∃t0 ∈ [0, t]|ω(t0) ∈ S] of a target set S is to evaluate the steps ω̂(i), i = 0, . . . , k and check whether ω̂(i0) ∈ S for some S. Of course, the precision of this computation depends on the number of sampled Brownian paths ω̂, as well as the step size s and number of steps k. Formal statements on convergence and numerical stability could be obtained, e.g. by means of concentration/Monte-Carlo results (e.g. Proposition B.12 below); however, in practice, in our experiments we mostly worked with the regime k ≈ 104 which seemed an acceptable choice in terms of numeric stability and performance. Explicit closed-form computation of hitting probabilities is a non-trivial task, though it is possible for some model cases (main text, Lemma 3.2). Dimension 1 is special, where we have the so-called "reflection principle", which says that P ( sup 0≤s≤t ω(s) ≥ d ) = 2P (ω(t) ≥ d) . (11) For a proof of this basic statement we refer to Mörters & Peres (2010). However, in higher dimensions, there is no straightforward analog of the reflection principle, and calculating hitting probabilities of spheres leads one to the deep theory of Bessel processes. Let us consider a Brownian particle ω(t) starting at the origin in Rn and look at the real-valued random variable ‖ω(t)‖ (in the literature, these are known as Bessel processes). We are interested in the probability of the particle hitting a sphere {x ∈ Rn : ‖x‖ = r} of radius r within time t. Curiously, it seems that there is no known closed formula for such a hitting probability. 
The only formula we know of is in the form of a convergent series involving zeros of the Bessel function of the first kind, and appears in Kent (1980). For the reader interested in Kent’s formula, we also refer to associated asymptotics of zeros of the Bessel function in Watson (1944). The following heuristic is implicit in many of our calculations and motivates several of our definitions: the probability P ( sup 0≤s≤t ‖ω(s)‖ ≥ r ) (12) of a Brownian particle hitting a sphere of radius r within time t is dependent only the ratio r2/t. As a consequence, given a small η > 0 and a constant c, one can choose the constant cn in t = cnr2 small enough (depending on η) such that P ( sup 0≤s≤cnr2 ‖ω(s)‖ ≥ cr ) < η. (13) Roughly what this means is the following: for a Brownian particle, the probability of hitting even a large and nearby object may be made arbitrarily small if the motion is not allowed to run sufficiently long. A.2 HEAT DIFFUSION AND BROWNIAN MOTION DUALITY Macroscopic vs microscopic There are roughly two broad viewpoints towards the understanding of diffusion: the “macroscopic” and the “microscopic”. Macroscopically, the mechanism of diffusion can be thought of as creating a flux in the direction from greater to lesser concentration. If u(x, t) measures the intensity of the quantity undergoing diffusion, and J the flux across the boundary of a region Ω, then in the simplest model one assumes that (up to a constant) J = −∇u. Further, we have the identity ∂t ∫ Ω u(x, t) dx = − ∫ ∂Ω ν.−∇u dS, (14) where ν is the outward pointing unit normal vector to ∂Ω. By applying the divergence theorem to (14), one immediately gets the heat equation ∂tu = ∆u. Here ∆ denotes the Laplace operator given by the sum of second derivatives: ∆ = ∑n i=1 ∂ 2 ii. Now, many real-life diffusion processes are the result of microscopic particles jittering around seemingly in a random manner. This motivates the microscopic viewpoint, i.e., the modelling of heat diffusion via Brownian motion of particles. We posit that a particle located at x ∈ Rn at time t0 will have the probability ψU (x, t) of being in an open set U ⊂ Rn at time t0 + t, where ψU (x, t) = ∫ U p(t, x, y) dy, (15) and p(t, x, y) is the fundamental solution of the heat equation, or more famously, the “heat kernel”. In other words, p(t, x, y) solves the heat equation{ (∂t −∆)u(x, t) = 0, u(x, 0) = δ(x− y), (16) with the Dirac delta distribution as the initial condition. Via Fourier transform, it is easy to establish that p(t, x, y) is given by p(t, x, y) = 1 (4πt)n/2 e− |x−y|2 4t . (17) This builds the bridge to pass between analytic statements on the side of the heat equation and probabilistic statements on the side of Brownian motion (see Grigor’Yan (2001), Taylor (2011)). The precise formulation of this duality is given by the celebrated Feynman-Kac theorem discussed in Subsection A.3 below. Heating up the decision boundary In our context we introduce the following heat diffusion process along the classifier’s decision boundary N : (∂t −∆)ψ(x, t) = 0, ψ(x, 0) = 0, ∀x ∈ Rn, ψ(x, t)|x∈N = 1, ∀t > 0. (18) In other words ψ(x, t) gives the heat quantity at the point x at time t given that at the initial moment t = 0 all points have a heat quantity 0 and afterwards a constant heat source of intensity 1 is applied only at the decision boundary N . As remarked above this is the macroscopic picture: the mentioned Feynman-Kac duality implies that ψ(x, t) is also the hitting probability Pω [∃t0 ∈ [0, t]|ω(t0) ∈ N ]. 
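To make the duality tangible in the simplest possible setting, the following hedged Python sketch compares the sampled Brownian hitting frequency of a flat boundary (a half-space at distance d, simulated via the discretization (9)-(10)) against the closed form 2Φ(−d/√t) given by the reflection principle (11). Since only the coordinate along the boundary's normal matters, a one-dimensional simulation suffices; parameter choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

def empirical_halfspace_hitting(d=1.0, t=1.0, k=400, n_paths=20_000, seed=0):
    """Frequency with which a sampled Brownian path (k steps, t = k * s^2)
    crosses the level d within time t. The discrete path slightly underestimates
    the continuous-time hitting probability, since crossings between steps are missed."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(t / k)  # step size from t = k * s^2, cf. eq. (10)
    paths = np.cumsum(s * rng.standard_normal((n_paths, k)), axis=1)
    return np.mean(paths.max(axis=1) >= d)

if __name__ == "__main__":
    d, t = 1.0, 1.0
    print("sampled    :", empirical_halfspace_hitting(d=d, t=t))
    print("reflection :", 2 * norm.cdf(-d / np.sqrt(t)))  # eq. (11)
```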
A.3 THE FEYNMAN-KAC THEOREM It is well-known that given a reasonable initial condition u(x, 0) = f(x), one can find an analytic solution to the heat equation via convolution with heat kernel, et∆f(x) := p(t, x, .) ∗ f(.). This just follows from (16) by convolving directly. Now, via the duality of diffusion explained above, one expects a parallel statement on the Brownian motion side, one which computes the contribution of all the heat transferred over all Brownian paths reaching a point at time t. It stands to reason that to accomplish this, one needs an integration theory defined over path spaces, which leads us to the theory of Wiener measures. We describe the main idea behind Wiener measure briefly: consider a particle undergoing a random motion in Rn (given by a continuous path ω : [0,∞) → Rn) in the following manner: given t2 > t1 and ω(t1) = x1, the probability density for the location of ω(t2) is p(t, x, x1) = 1 (4π(t2 − t1))n/2 e − |x−x1| 2 4(t2−t1) . We posit that the motion of a random path for t1 ≤ t ≤ t2 is supposed to be independent of its past history. Thus, given 0 < t1 < · · · < tk, and Borel sets Ej ⊆ Rn, the probability that a path starting at x = 0 at t = 0, lies in Ej at time tj is∫ E1 · · · ∫ Ek p(tk − tk−1, xk, xk−1) · · · p(t1, x1, 0) dxk · · · dx1. The aim is to construct a countably-additive measure on the space of continuous paths that will capture the above property. The above heuristic was first put on a rigorous footing by Norbert Wiener. Using the concept of Wiener measure, one gets the probabilistic (microscopic) description of heat diffusion, which is the content of the celebrated Feynman-Kac theorem: Proposition A.2. Let Ω ⊆ Rn be a domain, with or without boundary (it can be the full space Rn). In case of a boundary, we will work with the Laplacian with Dirichlet boundary conditions. Now, let f ∈ L2(Ω). Then for all x ∈ Ω, t > 0, we have that et∆f(x) = Ex (f (ω(t))φΩ(ω, t)) , (19) where ω(t) denotes an element of the probability space of Brownian paths starting at x, Ex is the expectation with regards to the Wiener measure on that probability space, and φΩ(ω, t) = { 1, if ω([0, t]) ⊂ Ω 0, otherwise. For a more detailed discussion, see Georgiev & Mukherjee (2018a). A.4 ISOPERIMETRIC AND ISOCAPACITORY RESULTS Isoperimetric bounds Isoperimetric inequalities relating the volume of a set to the surface area of its boundary have given rise to a wealth of results Burago & Zalgaller (1988). Given a set M with boundary ∂M , the basic pattern of isoperimetric inequalities is: Vol(M) ≤ c1 Area(∂M) n n−1 , (20) where c1 is an appropriate positive constant depending on the dimension n. In many cases, equality (or saturation in the sense of almost equality) in (20) is characterized by rather special geometry. For example, classical isoperimetric results answer the question, which planar set with a given circumference possesses the largest area, with the answer being the disk. As discussed in the main text, isoperimetric considerations have recently lead to significant insights about decision boundaries of classifiers subject to adversarial defense training mechanisms Ford et al. (2019) by revealing flattening phenomena and relations to robustness. Isocapacitory bounds As mentioned in the main text, one can prove types of isocapacitory bounds that resemble the isoperimetric ones: roughly speaking, these replace the area term with suitable Brownian hitting probabilities. We have the following result (cf. also Georgiev & Mukherjee (2018a)): Proposition A.3. 
Let B(x, r) ⊂ Rn, n ≥ 3, and let E ⊂ B(x, r) denote an “obstacle”, and consider a Brownian particle started from x. Then the relative volume of the obstacle is controlled by the hitting probability of the obstacle: Vol(E) Vol(B(x, r)) ≤ cn (ψE(x, t)) n n−2 . (21) Here, cn is a positive constant whose value is dependent only on n provided the ratio between r2 and t is suitably bounded. In particular, in the regime r2 = nt, we have that cn = ( Γ ( n 2 − 1 ) /Γ ( n 2 − 1, n 4 )) n n−2 . Here, Γ(s, x) represents the upper incomplete Gamma function Γ(s, x) := ∫ ∞ x e−tts−1 dt. Proof. Recall that the capacity (or more formally, the 2-capacity) of a set K ⊂ Rn defined as Cap(K) = inf η|K≡1,η∈C∞c (Rn) ∫ Rn |∇η|2. (22) From Section 2.2.3, Maz’ya (2011), we have the following “isocapacitory inequality”: Cap(E) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n , (23) where ωn = 2π n/2 Γ(n2 ) is the (n− 1)-dimensional surface area of Sn−1. Now, we bring in the following estimate given by Theorem 3.7 of Grigor’Yan & Saloff-Coste (2002): ψE(x, t) ≥ Cap(E) ∫ t 0 inf y∈∂E p(s, x, y) ds. (24) Now, we have ψE(x, t) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 inf y∈∂E e− |x−y|2 4s ds ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 e− r2 4s ds = ω2/nn n n−2 n (n− 2)|E| n−2 n 1 4rn−2πn/2 ∫ ∞ r2 4t e−zzn/2−2 dz. After rearrangement the proposed claim follows. Intuitively, it makes sense that if the volume of a set is fixed, one can increase its hitting probability by “hammering” the set into a large thin sheet. However, it seems unlikely that after lumping the set together (as in a ball), one can reduce capacity/hitting probability any further. Moreover, isocapacitory bounds are saturated by the n-ball. It is also illustrative to compare the seemingly allied concepts of capacity and surface area. A main difference of capacity with surface area is the interaction of capacity with hitting probabilities. As an illustrative example, think of a book which is open at an angle of 180◦, 90◦, 45◦ respectively. Clearly, all three have the same surface area, but the probability of a Brownian particle striking them goes from the highest to the lowest in the three cases respectively. It is rather difficult to make the heuristic precise in terms of capacity (at least from the definition). Capacity can be thought of as a soft measure of how "spread out" or "opened-up" a surface is, and is highly dependent on how the surface is embedded in the ambient space. Isocapacitory vs isoperimetric saturation A main line of analysis in the present work addresses the interplay between isocapacitory and isoperimetric saturation. In our particular context of defense training mechanisms we observe saturation of isoperimetric bounds for the classifier’s decision boundaries - this implies that decision boundaries are not far from being flat. However, as mentioned before, it turns out that isocapacitory saturation does not concentrate around the values corresponding to hyperplanes (overall, it seems to stay well below that value). In this sense, isocapacitory saturation acts as a finer sensitive measure of deviation from flatness. A simple model geometric scenario that provides similar behaviour is illustrated in Fig. 5 and Fig. 6. A.5 MODEL CASES We first begin with the proof of Lemma 3.2. Proof. Let us select an orthonormal basis {e1, . . . , en} so that e1 coincides with the given hyperplane’s normal vector. 
A standard fact about n-dimensional Brownian motion is that the projections on the coordinate axes are again one-dimensional Brownian motions Mörters & Peres (2010). Thus, projecting the n-dimensional Brownian motion onto e1 the hitting probability of the hyperplane is the same as the probability that one-dimensional Brownian motion ω(t) will pass a certain threshold d by time t. To compute this probability we use the reflection principle (11) in conjunction with Remark A.1. Consequently, the RHS is equal to 2Φ(−d/ √ t). The computation of µ(x, r) follows by definition. Here we note that the dimension n enters only in terms of the spherical cap volume. An impression how τ behaves for different choices of n in terms of the distance d is given in Fig. 7. In particular, one observes the well-known concentration of measure phenomenon and Levy’s lemma: the volume of the spherical cap exhibits a very rapid decay as n becomes large. Moreover, experiments reveal a curious phenomenon: there is a threshold distance d0 until which τ ≈ 2 and afterwards τ explodes. In Fig. 8 we plot further interesting model cases where the error set forms a wedge (the region between two intersecting hyperplanes) or a cone. Spiky sets As discussed in the main text, one observes a high isocapacitory saturation τ for the so-called "spiky" sets - these are sets of relatively small volume and relatively large/dense boundary. Theoretically, a guiding model case in this direction is given by Lemma 3.3 in the main text, whose proof we now record. Proof. Let Tρ denote the ρ- tubular neighborhood of a line segment of length h inside Rn. Clearly, Tρ ∼= B(0, ρ)× [0, h], where B(0, r) is a ρ-ball inside Rn−1. By the well-known process of Steiner symmetrization in Rn, it is clear that the expression for capacity in (22) will be minimized by a function that is “radially symmetric” around the central axis of the tube Tρ, that is f(x, y) = f(|x|), where x ∈ B(0, ρ), y ∈ [0, h]. Then, as we scale ρ→ λρ, where λ↘ 0, Cap (Tλρ) ∼ λn−3 Cap (Tρ) (which is seen directly from the definition (22)), whereas the volume scales as |Tλρ| = λn−1 |Tρ|. Now assume that the cylinder Tρ is inside the closed ball B(x, r) ⊂ Rn, the central axis of Tρ is pointing towards x, and Tρ is touching the boundary of B(x, r). To pass from capacity to hitting probability of the set Tρ, we use that Grigor’Yan & Saloff-Coste (2002): Cap(Tρ)r 2 Vol(B(x, r)) e−C r2 t ≤ ψTρ(x, t). (25) Finally, using the definition of τ and putting the above estimates together, one sees that in the time regime of O(r2), τ scales like λ−2/(n−2), and hence, τ ↗∞ as λ↘ 0. See also Figure 8 for a visual discussion of the isocapacitory saturation for the model cases of wedges and cones. A.6 CURVATURE ESTIMATES IN TERMS OF ISOCAPACITORY SATURATION The geometric concept of curvature has a rich history and plays a central role in differential geometry and geometric analysis. There are several notions of curvature in the literature, ranging from intrinsic notions like sectional, Ricci or scalar curvatures to extrinsic (that is, dependent on the embedding) notions like principal curvatures and mean curvature, which are encoded in the second fundamental form. In this note we use a somewhat “soft” definition of curvature, following previous work Fawzi et al. (2016); Dezfooli et al. (2018). 
Suppose the decision boundary Nf is sufficiently regular (C2 is enough for our purpose) and it separates Rn into two components R1 := {f > 0} and R2 := {f < 0}, corresponding to a binary classification (the construction in the multi-label case is analogous). For a given p ∈ Nf , let rj(p) denote the radius of the largest sphere that is tangent to Nf at p, and fully contained inRj . Then, one defines the curvature κ at p as κ(p) = 1/min (r1(p), r2(p)) . (26) See Fig. 10 for a geometric illustration. However, it turns out that most notions of curvature are quite subtle (see Fawzi et al. (2016)) and at this point, seemingly more cumbersome and intractable to handle experimentally. We will take an indirect approach, and attempt to read off the effect of and on curvature via the isocapacitory saturation τ . Again, we begin with the model cases: we first study the behaviour of curvature κ if τ achieves its least possible value. We start by fixing some notation. As before let us consider a ballB(x, r) with an error set E ⊂ B(x, r) and boundary N = ∂E (clearly our main case of interest is E = E(y) ∩B(x, r)). Let us denote the the distance d = d(x,N ) and suppose the point y ∈ N realizes this distance, i.e. d(x, y) = d. To rule out some degenerate cases and ease the analysis we introduce the following assumption: Assumption: The hypersurface N and the point x are on different sides of the tangent hyperplane H∗ := TyN (cf. Fig. 11). This assumption is also technically important, as otherwise low values of τ will be produced by annuli surrounding x. With that in place, we have the following rigidity result: Proposition A.4. Let us fix the distance d = d(x,N ) and suppose the assumption above holds. Then the least possible value of τ is attained only if the curvature κ of the hypersurface N is 0. Proof. As above letH∗ be the tangent hyperplane at distance d from x, and let C denote the (smaller) spherical cap formed by H∗ ∩B(x, r). The proof relies on the following variational argument. If N is not the same as H∗, then N ⊆ C, with y ∈ N ∩H∗. We wish to argue then one can perturb N infinitesimally to decrease the value of τ , so the only minimizer of the above expression has to be H∗. The basic idea is to cut out a small piece pv around v and paste it in the region of around ṽ (Fig. 11). We say that N has positive curvature at some point z if the ball defining the curvature at z and the point x lie on different sides of N . The construction is as follows. Let S(x, s) be the (n− 1)-sphere centered at x with radius s. We consider two cases: Case I: Let us suppose that there exist s1 < s2 ≤ r and points v, ṽ ∈ N such that the curvature of N at v ∈ N ∩ S(x, s1) is greater than the curvature at ṽ ∈ N ∩ S(x, s2). Let us, moreover, choose the infimum among such s1 and the supremum among such s2. To define the mentioned piece pv , we consider two small balls B(v, ε), B(ṽ, ε) (where ε s2 − s1), and cut out a set pv = E ∩ B(v, ε) such that ∂ (E \B(v, ε)) is congruent to N ∩ B(ṽ, ε) (this is possible due to the curvature assumptions at v, ṽ). Then, we define the new error set E′ = E∪pṽ \pv and the boundaryN ′ = ∂E′, where pṽ represents the image of pv under the rigid motion and attached inside B(ṽ, ε) (see Fig. 11). It is now clear that |E| = |E′|, but ψE′(x, T ) < ψE(x, T ) for all T > 0. 
The last inequality follows from the evaluation of the explicit heat kernel that defines hitting probability ψ as stated by Feynman-Kac duality: ψE(x, T ) = ∫ T 0 ∫ E 1 (4πt)n/2 e− (x−y)2 4t dy dt > ∫ T 0 ∫ E′ 1 (4πt)n/2 e− (x−y)2 4t dy dt = ψE′(x, T ). It follows from the definition of τ that τE ≥ τE′ . Case II: If Case I is not satisfied, then, similarly, we choose two points v, ṽ, but instead of defining the piece pv by intersection with a small ball around v we select pv as a “concavo-convex lens shape” domain, where the curvature on the concave “inner side” of pv of the lens is greater than that on the convex outer side. As before, we attach a rigid motion image of pv inside B(ṽ, ε). The rest of the argument is similar to Case I. With reference to our previous discussion of spikes, it heuristically makes sense that a spike must have reasonably high curvature (it can have high curvature on the average, or if it is flat at most places, then have a sharp needle like end where the curvature is very high). In the same setting as Proposition A.4 let us, moreover, for simplicity assume that N is the graph of a function over the tangent hyperplane H∗ (Fig. 11). Proposition A.5. In the above setting let us fix the value of d. Then, if the maximum curvature κmax of N is sufficiently high (greater than some universal constant), then it satisfies κmax ≥ τ 1 n r ( Φ ( − d√ t ))− 1n−2 , (27) where Φ denotes the c.d.f. of the standard normal distribution. If a point attaining this maximum curvature is within the half concentric ball B(x, r/2), then κmax satisfies the stronger estimate κmax ≥ τ 1 n (r − d) r n n−1 ( Φ ( − d√ t ))− n (n−1)(n−2) . (28) Proof. Recalling the definition of the isocapacitory saturation τ , we will bound the numerator (resp. denominator) of τ from above (resp. below). First, for the numerator ψE(x, t) we will use a basic monotonicity property of hitting probabilities stating that for two sets A ⊆ B one has ψA(x, t) ≤ ψB(x, t) - this follows directly from the definition of ψ. Now, since E ⊆ C where C is the smaller spherical cap of B(x, r) ∩H∗, we have ψE(x, t) ≤ ψC(x, t). However, recalling the explicit form of ψC from Lemma 3.2 of the main text, we have ψE(x, t) ≤ Φ ( − d√ t ) . Second, to bound the denominator of τ (i.e. Vol(E)), we observe that if κmax is large enough, by definition E contains a ball of radius 1κmax , and Vol(E) ≥ ωn κnmax where ωn denotes the volume of unit n-dimensional ball. That finally implies, τ ≤ ( Φ ( − d√ t )) n n−2 Vol(B(x, r)) Vol(E) ≤ ( Φ ( − d√ t )) n n−2 rnκnmax, which proves (27). If a point of maximum curvature is inside a concentric ball of radius r/2, thenE contains≈ κmax(r−d)2 balls of radius 1κmax , which implies that Vol(E) ≥ κmax(r − d) ( ωn κnmax ) . The rest of the proof is similar. Now, we give a curvature estimate which works in any regime, without any restrictions. The tradeoff is a global average bound of the Lp-type rather than pointwise estimates. Proposition A.6. In the setting as above, let us fix the distance d = d(x,N ). At each point of N , let us denote by κ the maximal sectional curvature of N at that point. The following estimate holds: ‖K‖L1 ≥ Vn(d, r)− 2ωnr nΦ ( − d√ t ) τH , (29) where Vn(d, r) denotes the volume of the smaller spherical cap at distance d, the constant ωn denotes the volume of unit ball in Rn, and the function K is an integral function of the curvature κ over lines (defined in (31) below). Proof. Again, we suitably bound the numerator and denominator of τ . 
Starting with the numerator, as explained in Proposition A.5, we have by monotonicity ψE(x, t) ≤ 2Φ ( − d√ t ) . (30) To bound the denominator of τ we proceed as follows. Let N be the graph of the function g̃(x1, · · · , xn−1), where the variables xj are taken from the hyperplane H∗ (Fig. 11) at distance d from x; the point at which N touches this hyperplane is taken as the origin. Let ϕ be a smooth cut-off function defined on the hyperplane such that ϕ ≡ 1 on the set S of all (x1, · · · , xn−1) such that g̃(x1, · · · , xn−1) ∈ B(x, r), and ϕ ≡ 0 outside the -tubular neighborhood of S. Finally, let g := ϕ g̃. Now we see that, letting a = (r2 − d2)1/2, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 g (ρ, θ) ρ n−2 dρ dθ. Now, if η denotes the unit vector in the direction of a fixed (ρ, θ), observing that g (0) = 0, we have by the fundamental theorem of calculus g (ρ, θ) = ∫ 1 0 ∂tg (tρη, θ) dt. In turn, applying the fundamental theorem a second time and observing that ∇g (0) = 0, we have that g (ρ, θ) = ∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt. Putting everything together we get, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 (∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt ) ρn−2 dρ dθ. Now, we define the following integral quantity: K (ρ, θ) = ∫ 1 0 ∫ 1 0 |κ (stρη, θ)| ds dt. (31) Noting that the maximum sectional curvature bounds the second derivatives, finally we have that Vn(d, r)−Vol(E) ≤ ‖K ‖L1 . (32) To obtain (29) we now put all the above estimates together and let ↘ 0. B APPENDIX B: GENERALIZATION BOUNDS AND COMPRESSION SCHEMES Background A main line of ML and statistical inference research addresses questions of generalization. To set the stage we start with some notation. Let us suppose that the dataset X is sampled from a probability distribution D, i.e. (x, y) ∼ D. Following conventions from the literature Arora et al. (2018) we define the expected margin loss of a classifier f by Lγ(f) := P(x,y)∼D [ f(x)[y] ≤ γ + max j=1,...,k;j 6=y f(x)[j] ] . (33) We use the notation L̂γ to denote the expected empirical margin loss over the given data set X . Finally, the generalization error is defined as Lγ − L̂γ . Quite roughly speaking, standard generalization results attempt to estimate the performance of the classifier on unseen samples (i.e. the full data distribution), thus yielding bounds of the form: Lγ1(f) ≤ L̂γ2(f) + F (γ1, γ2, f,X ), (34) where F is an additional term that usually depends, e.g. on the size of X , the expressiveness of f and further margin information (γ1, γ2). B.1 COMPRESSION IN A HEAT DIFFUSION SENSE IMPLIES GENERALIZATION BOUNDS We first state a well-known concentration inequality due to Hoeffding which will find repeated use in the ensuing sections: Proposition B.1 (Hoeffding’s inequality). Let X1, . . . , Xn be independent random variables taking values in the interval [0, 1], and let X = 1n (X1 + · · ·+Xn) be the empirical mean of these random variables. Then we have: P ( X − E ( X ) ≥ t ) ≤ e−2nt 2 . (35) We now provide the proof of Proposition 5.1 of the main text. Proof. The strategy of proof follows well-known "weak-law-of-large-numbers" concentration techniques in a spirit similar to Arora et al. (2018). Step 1. First, we show that for a given g as |X | → ∞, P(x,y)∼X (Cg(x, y, t1))→ P(x,y)∼D (Cg(x, y, t1)) , (36) where Cg(x, y, γ2) is the event that a Brownian path starting at x hits Eg(y) within time γ2. The rate of convergence is determined through Chernoff concentration bounds. Choose α ∈ A, and let gα be the corresponding classifier. 
Attached to each sample point xj , there is a Bernoulli random variable Xj which takes the value 1 if Cgα(xj , y, γ 2) happens, and 0 otherwise. Then, the average X = 1m ∑m j=1Xj is given by the average of m i.i.d. Bernoulli random variables each of whose expectations is given by P(x,y)∼D Cgα(x, y, γ2). Furthermore, we note that if a data sample is misclassified, then the Brownian particle almost surely will hit the error set. Combining this observation with the concentration estimate (35) above, we obtain L0(gα) ≤ P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + ξ, (37) with probability at least 1− e−2ξ2m. If each classifier gα has q parameters, each of which can take r discrete values, we take ξ = √ q log r m . Step 2. The estimate from the previous step should hold for every classifier gα in the family A with large probability. This is guaranteed by a union bound and tuning the Chernoff bounds from the convergence rate. More precisely, there are rq different choices α ∈ A, and hence by taking the union of the estimate in (37), one can say that P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + √ q log r m (38) with probability at least 1− e−q log r over all α ∈ A. Step 3. Finally one uses the fact that f is approximable by at least one g = gα0 for some α0 in A. Via Definition 1 of the main text, one sees that P(x,y)∼X ( Cgα0 (x, y, γ 2) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η, which finally gives that with probability at least 1− e−q log r, we have L0(g) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η +O (√ q log r m ) . (39) Remark B.2. As noted, a classifier f classifies a point x wrongly if and only if ψE(y)(x, t) = 1 for all time scales t. With this observation, and since (39) works for all real numbers γ, letting γ → 0, we have that with probability at least 1− e−q log r, L0(g) ≤ L̂0(f) + η +O (√ q log r m ) . This recovers a loss estimate which is similar to the estimate in Theorem 2.1 of [1]. Indeed, one can consider P(x,y)∼X ( Cf (x, y, γ 2 ) as a “soft” or probabilistic measure of classification with margin ≈ γ. When defining the notion of a compression, instead of taking a pointwise difference as in Definition 1 of Arora et al. (2018), we would like to capture the idea that the decision boundary of a good compression should be “close enough” to the decision boundary of the original classifier. In our context, this implies that their “heat signatures” at the sample points should be close enough at all time scales. As noted in the main text, Definition 1 is definitely one natural option to define goodness of compression in a heat-diffusion sense. Another natural way is to consider the Brownian motion’s running time and define a good approximation as follows: Definition 3. Given a positive real number η, a classifier g is said to be an η−compression w.r.t. hitting time of f if ψEg(y)(x, γ 2 − η) ≤ ψEf (y)(x, γ 2) ≤ ψEg(y)(x, γ 2 + η) (40) for all points x in the training sample, labels y and real numbers γ2 ≥ η. Analogously, we have the following Proposition B.3. Let us suppose that f is approximable by g in the sense of Definition 3. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. As before, for a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then we have L0(g) ≤ P(x,y)∼D ( Cgα(x, y, γ 2 − η) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) +O (√ q log r m ) (41) with probability at least 1− e−q log r. The proof proceeds similarly as above. 
Letting γ2 → η gives us L0(g) ≤ P(x,y)∼X (Cf (x, y, η)) +O (√ q log r m ) . (42) Again, the first term on the RHS can be interpreted as the geometric margin of classification. In particular, if the classifier f separates points by a distance of≈ √nη, then since the Brownian motion travels ≈ √nη hitting the error set will happen only if a misclassification occurred, i.e. we have P(x,y)∼X (Cf (x, y, η)) ≈ L0(f). (43) B.2 A SHARP VARIANT OF THE JOHNSON-LINDENSTRAUSS ALGORITHM Several state-of-art compression schemes utilize a dimensionality reduction in the spirit of JohnsonLindenstrauss (JL), Arora et al. (2018). In this Subsection we discuss a JL compression scheme that will later be coupled with and tuned by some heat-diffusion estimates. We begin by discussing a variant of JL (Alg. 1). Data: Original matrix A of dimension h1 × h2, β ∈ (0, 1). Result: Stochastic compressed matrix  with O ( log(h1h2)/βα 2 ) non-zero entries such that P [ ‖Âx−Ax‖ ≥ α‖A‖F ‖x‖ ] ≤ β. Start with matrix A, real number α; while i ≤ h1, j ≤ h2 do Let zij = 1 with probability pij = 2a2ij βα2‖A‖2F , 0 otherwise; Let âij = zijaij pij . end Return  = (âij). Algorithm 1: Compressing a matrix A ∈ Rh1×h2 Proposition B.4. Let A be a matrix of dimension h1 × h2. Then, one can find a compressed matrix  such that ‖Ax− Âx‖ ≤ α‖A‖F ‖x‖, with probability at least 1− β, where the number of parameters of  is O ( log(h1h2)/βα 2 ) . A proof of Proposition B.4 in the spirit of classical JL can be provided - however, here we introduce a Bernoulli scheme which is a minor modification of Algorithm 2 of Arora et al. (2018). Proof. Define the random variables zij which take the value 1 with probability pij = 2a2ij βα2‖A‖2F , and the value 0 otherwise. Define âij = zijaij pij . One can now calculate that E (âij) = aij , and Var (âij) ≤ βα2‖A‖2F . Using the above, one can further calculate that E(Âx) = Ax, and Var(Âx) ≤ ‖x‖2‖A‖2Fβα2. By Chebyshev’s inequality, this gives us that P [ ‖Âx−Ax‖ ≥ α‖A‖F ‖x‖ ] ≤ β. Now, the expected number of non-zero entries in  is ∑ i,j pij = 2 βα2 . An application of Chernoff bounds now gives that with high probability the number of non-zero entries is O ( log(h1h2)/βα 2 ) . B.3 HITTING PROBABILITY, CAPACITY SENSITIVITY AND COMPRESSION As discussed in the main text, here we use hitting probabilities associated to the decision boundary to define a concept “capacity sensitivity” of a neural net layer. The heuristic is, the less the capacity sensitivity of a layer, the greater the facility in compressing the layer to one with fewer parameters. This goes in the spirit of current state-of-art results on compression and generalization bounds (Arora et al. (2018), Suzuki et al. (2018), Suzuki et al. (2020)). In particular, in Arora et al. (2018) the authors provide the notions of noise sensitivity and noise cushions motivated by Gaussian noise injections. Our first proposed definition for "heat-diffusion noise cushions" and capacity sensitivity goes as follows: Definition 4. Let η ∼ N be distributed along a noise distribution N concentrated in ball ‖η‖ ≤ η0. We define the capacity sensitivity S(x,Ai; t) of a layer Ai at the point x as S(x,Ai; t) := Eη∼N ∣∣ψEf (φ(Ai(x+ ‖x‖η)), t)− ψEf (φ(Aix), t)∣∣∣∣ψEf (φ(Aix), t)∣∣ . (44) We denote the maximum and expected sensitivity respectively as Sm(Ai; t
1. What are the main contributions and novel aspects of the paper in terms of geometric measures and their applications to neural networks? 2. How do the proposed measures relate to Brownian motion or heat diffusion probabilities, and what insights do they provide into the shape of decision boundaries? 3. What are the strengths and weaknesses of the paper regarding its empirical findings and analytical bounds? 4. Do you have any concerns or questions about the interpretation of the results, specifically regarding the comparison between adversarially trained networks and ordinary trained networks? 5. Are there any limitations or areas for improvement in the proposed approach that could be explored in future research?
Review
Review The paper under review introduces a number of geometric measures (isoperimetric and isocapacitory ratios that relate to Brownian motion or heat diffusion probabilities) which are applied to study neural network decision boundaries locally. Specifically, the paper applies these measures to study adversarially trained NNs empirically, and it analytically proves generalization and network compression bounds that relate to Brownian motion probabilities. Empirical observations on LeNet and Wide ResNet trained on MNIST and CIFAR showed that adversarially trained or noise-trained networks did exhibit curvature of the decision boundary, revealing finer structure than previously known. The paper is clear and the application of isoperimetric and isocapacitory measures appears novel; moreover, the empirical finding of curvature (determined through the isocapacitory measure) for adversarially trained NNs provides some new insight into the shape of decision boundaries of NNs with respect to robustness. I didn't find the empirical results very convincing: the isoperimetric / isocapacitory measures do not show a clear distinction for adversarially trained networks, which appear close to the ordinarily trained networks. It's not clear whether the training methods are not sufficient to produce a robust enough NN or whether the measures introduced do not adequately describe the adversarially trained nets. All seem to exhibit curvature, but I'd assume the adversarially trained ones would exhibit less curvature, which is not the case in the experiments.
ICLR
Title Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds Abstract In the present work we study classifiers’ decision boundaries via Brownian motion processes in ambient data space and associated probabilistic techniques. Intuitively, our ideas correspond to placing a heat source at the decision boundary and observing how effectively the sample points warm up. We are largely motivated by the search for a soft measure that sheds further light on the decision boundary’s geometry. En route, we bridge aspects of potential theory and geometric analysis (Maz’ya (2011); Grigor’Yan & Saloff-Coste (2002)) with active fields of ML research such as adversarial examples and generalization bounds. First, we focus on the geometric behavior of decision boundaries in the light of adversarial attack/defense mechanisms. Experimentally, we observe a certain capacitory trend over different adversarial defense strategies: decision boundaries locally become flatter as measured by isoperimetric inequalities (Ford et al. (2019)); however, our more sensitive heat-diffusion metrics extend this analysis and further reveal that some non-trivial geometry invisible to plain distance-based methods is still preserved. Intuitively, we provide evidence that the decision boundaries nevertheless retain many persistent "wiggly and fuzzy" regions on a finer scale. Second, we show how Brownian hitting probabilities translate to soft generalization bounds which are in turn connected to compression and noise stability (Arora et al. (2018)), and these bounds are significantly stronger if the decision boundary has controlled geometric features. 1 INTRODUCTION AND BACKGROUND The endeavor to understand certain geometric aspects of decision problems has lead to intense research in statistical learning. These range from the study of data manifolds, through landscapes of loss functions to the delicate analysis of a classifier’s decision boundary. In the present work we focus on the latter. So far, a wealth of studies has analyzed the geometry of decision boundaries of deep neural networks (DNN), reaching profound implications in the fields of adversarial machine learning (adversarial examples), robustness, margin analysis and generalization. Inspired by recent isoperimetric results and curvature estimates (Ford et al. (2019); Moosavi-Dezfooli et al. (2019); Fawzi et al. (2016)), we attempt to provide some new aspects of decision boundary analysis by introducing and studying a corresponding diffusion-inspired approach. In this note the guiding idea is to place a heat source at the classifier’s decision boundary and estimate its size/shape in terms of the amount of heat the boundary is able to emit within a given time (Fig. 1). The goal is to extract geometric information from the behavior of heat transmission. This technique of heat content seems well-known within capacity/potential theory and has led to a variety of results in spectral analysis relating heat diffusion and geometry, Jorgenson & Lang (2001); Grigor’Yan & Saloff-Coste (2002); Maz’ya (2011). However, working with such heat diffusion directly in terms of the corresponding differential equations is impractical. To this end, we note that, due to Feynman-Kac duality, the heat estimates are convertible to Brownian motion hitting probabilities. 
Thus we circumvent the need for solving intractable differential equations and instead are able to employ a straightforward Monte-Carlo sampling scheme in the ambient data space (Section 3). Background on defense training We apply the above analysis in the context of adversarial machine learning (Section 4) where one studies the interaction between an adversary and a ML system. One of the goals of the subject is to design attack/defense training strategies improving the robustness of a given ML model - in the present work we are interested in how adversarial/noise defense training are reflected geometrically. Many different metrics to estimate robustness have been proposed: on one hand, there is adversarial robustness (the probability that error samples lie very near a given data point x); on the other hand, there is corruption robustness (the probability of getting an error sample after perturbing a given data point x with some specified noise). In our context, heat diffusion naturally suggests a capacitory robustness metric: this metric is built upon the probability that Brownian motion started at a given data point x will hit error samples within a given time window. One can perceive this metric as a combination of adversarial and noise robustness (Brownian motion has continuous paths and specified stopping time determined by boundary impact). In this perspective, our work is aligned with studies of other robustness metrics and curvature results (cf. Fawzi et al. (2016) for a "semi-random" projection robustness and relations to curvature). We study the capacitory metric on the well-known CIFAR10 and MNIST datasets and observe that defense training techniques may either yield a certain (although not substantial) decrease (noise training) or fail to have a significant effect on continuous Brownian attacks overall. Surprisingly, in both cases the studied capacitory metric does not converge to the corresponding value as in the case of a flat decision boundary. Due to our comparison statements and curvature considerations, this means that locally around clean data points the geometry is in general flattened out but may still retain complexity and substantial areas of (small) non-vanishing curvature. In other words, from the point of view of our heat diffusion metrics, decision boundaries locally exhibit non-flat behaviour. Background on generalization estimates Finally, we observe that the collected heat/hittingprobability metrics can further be used to obtain generalization bounds where, in a nutshell, one evaluates the performance of a model on unseen data in terms of the performance over a given sampled data, the model’s expressiveness, dimension, etc. In this regard, we view decision boundary heat diffusion traits as an indicator of how noise-stable a given model is - this relates Brownian hitting bounds with recent compression-based generalization techniques in the spirit of Arora et al. (2018); Suzuki et al. (2018; 2020). More precisely, we proceed in two steps: first, we construct a "smaller" compressed model that is almost equivalent to the initial one in an appropriate heat-theoretic way; second, we obtain generalization estimates for the smaller model in terms of the decision boundary hitting probabilities (computed on the empirical dataset). Furthermore, the bounds are significantly improved under additional geometric assumptions on the decision boundary of the initial model. 
Additional related work The interplay between heat diffusion and geometry lies at the heart of many topics in geometric analysis and spectral theory (cf. Jorgenson & Lang (2001); Grigor’Yan (2001) for a far reaching overview). Some direct applications of heat diffusion techniques to zero sets of eigenfunctions are seen, for example, in Steinerberger (2014); Georgiev & Mukherjee (2018a;b). The literature on adversarial ML is vast: to name a few central works in the field, we refer to Dalvi et al. (2004); Biggio & Roli (2018); Szegedy et al. (2014). Much effort has been invested in designing and understanding strategies that will render a model robust to various attacks (e.g. Madry et al. (2018); Carlini & Wagner (2017)). In particular, the geometry of decision boundaries has been the focus of many works in the subject leading to breakthroughs in curvature estimates, boundary flatness and robustness, schemes for detecting boundary complexity, proposing adversarial attacks/defenses and diffusion based techniques towards constructing decision boundary from partially pre-labelled data (e.g. Ford et al. (2019); Fawzi et al. (2016; 2017; 2018); Dezfooli et al. (2018); Moosavi-Dezfooli et al. (2019); Karimi et al. (2019); Karimi & Tang (2020); He et al. (2018); Szlam et al. (2008)). The theory of generalization bounds has formed a classical main line of ML and statistical inference research (Vapnik (1999)). In this direction central questions address the generalization properties of heavily over-parametrized deep neural network models. According to some classical VC-dimension results such models should overfit the data and generalize poorly. Extensive research effort has been invested in developing appropriate sharper techniques to explain generalization of DNN models: on one hand there are the methods based on norm estimation whose bounds are not explicitly using the number of the network’s parameters (see Golowich et al. (2019); Neyshabur et al. (2015; 2018); Wei & Ma (2019); Bartlett et al. (2017), etc). On the other hand, recent results based on compression and VC-dimension can lead to sharper bounds (Arora et al. (2018); Suzuki et al. (2018; 2020)). 2 CONTRIBUTIONS, CONTEXT AND PAPER OUTLINE An outline of our essential contributions is given as follows: 1. We analyze decision boundary geometries in terms of novel heat diffusion and Brownian motion techniques with thorough theoretical estimates on curvature and flattening. 2. We show, both theoretically and empirically (in terms of adversarial scenarios on stateof-art DNN models), that the proposed heat diffusion metrics detect the curvature of the boundary; they complement, and in some respects are more sensitive in comparison to previous methods of boundary analysis - intuitively, our heat driven metrics are sharper on a finer scale and can detect small-scale "wiggles and pockets". As an application, we are thus able to provide evidence that adversarial defenses lead to overall flatter boundaries but, surprisingly, the heat traits do not converge to the corresponding flat-case, and hence, finer-scale non-linear characteristics (e.g. "wiggles and pockets") are persistent. 3. Moreover, the preservation of "wiggles and pockets" means that susceptibility to naive Brownian motion attacks is not significantly decreased via adversarial defense mechanisms. 4. Finally, we introduce a novel notion of compression based on heat diffusion and prove that stability of heat signature translates to compression properties and generalization capabilities. 
In terms of context, the present note is well-aligned with works such as Ford et al. (2019); Dezfooli et al. (2018); Fawzi et al. (2016; 2018). Among other aspects, these works provide substantial analysis of the interplay between geometry/curvature and adversarial robustness/defenses - in particular, we use some of the these tools (e.g. isoperimetric saturation) as benchmarks and sanity checks. However, in contrast, in our work we provide a non-equivalent technique to address decision boundary geometry for which we provide an extensive theoretical and empirical evaluation with insights on the preservation of finer-scale traits. Intuitively, previous distance-based geometric methods could be considered as a "coarser lens", whereas the present heat-diffusion tools appear to be much more sensitive. As a large-scale example, Brownian particles emanating from a point are able to distinguish between a decision boundary which is a hyperplane at distance d and a decision boundary which is a cylinder of radius d wrapping around the point. Our notion of compression is inspired by Arora et al. (2018), and establishes a connection between the Johnson-Lindenstrauss dimension reduction algorithm with diffusion techniques. Furthermore, we bridge the proposed heat-theoretic techniques with generalization bounds in the spirit of Arora et al. (2018); Suzuki et al. (2020). In particular, this shows that overall lower heat quantities at sample points imply better generalization traits. A step-wise road map of the present work is given below: • (Subsection 3.1) We start by discussing what heat diffusion is and how it is to be evaluated - here we discuss that, via Feynman-Kac duality, one can essentially work with Brownian motion hitting probabilities. • (Subsections 3.2 and 3.3) We introduce the isocapacitory saturation τ - a heat-theoretic metric that will be used to estimate boundary flatness. Moreover, here we emphasize the properties of τ such as relations to curvature (Proposition 3.1) and the novel information obtained from heat theoretic methods in comparison to previous distance-based ones. • (Subsection 3.4) We compute τ for certain geometric model cases such as hyperplanes, cones, wedges and "spiky" sets (Lemmas 3.2 and 3.3). This allows us later to evaluate how much a given geometry resembles these model cases. • (Section 4) Next, we are in a position to evaluate and compare τ for decision boundaries of DNNs. We experimentally illustrate the effect of adversarial defense mechanisms and noise robustness on τ (PGD/FGSM on MNIST and CIFAR-10). • (Section 5) We prove that heat transmission relates to generalization bounds (Propositions 5.1 and 5.2) - in particular, lower levels of heat at sample points yield sharper generalization bounds. Finally, we complete the discussion by informally stating our compression scheme. • (Appendix) Our methods leverage several tool sets extensively. For this reason our goal in the main text is to only collect and showcase the techniques and results. However, the thorough in-depth analysis is provided in the Appendix where the reader can find all relevant proofs and further background and references. 3 MOTIVATION AND MAIN IDEAS 3.1 GEOMETRY SEEN THROUGH BROWNIAN MOTION AND DIFFUSION Notation Let us consider a dataset X := {(xi, yi)}mi=1 consisting of feature points xi ∈ Rn and their corresponding labels y ∈ {1, . . . , k}. Let us suppose that a k-label classifier f : Rn → Rk labels a point x ∈ X as arg maxi∈[1,k] f(x)[i]. 
The decision boundary of f is given by N := {x ∈ Rn|f(x) has two or more equal coordinates} (cf. Fig. 2). Assuming f is sufficiently regular, one thinks of N as a collection of hypersurfaces in Rn. Further, for a given target label y we define the target (error) set E(y) as the set of points on which the classifier’s decision is different from y, i.e. E(y) := {x ∈ Rn| arg maxi∈[1,k] f(x)[i] 6= y} (here we remark that if arg max is set-valued at x with several coordinates obtaining the maximum value, then by convention x is contained in E(y)). Clearly, if a given data sample (x0, y0) ∈ X is correctly classified by f , then x0 is outside of the error set E(y0). Finally, we note that the boundary of E(y) coincides with E(y) ∩N and moreover, N is the union of the boundaries of E(y) for all labels y. Feynman-Kac duality and hitting probabilities As mentioned in Section 1 we wish to study a heat diffusion process where we place a heat source at the decision boundary N : formally, this is given by a heat equation with appropriate initial and boundary conditions (Appendix, Subsection A.2). Avoiding the impracticality of working with the differential equations directly, we bring forward the theorem of Feynman-Kac that relates the solution of the diffusion process to hitting probabilities of Brownian motion (Appendix, Subsection A.3). By way of notation, for an open set U ⊆ Rn, let ψU (x, t) denote the probability that a Brownian particle starting at the point x will enter U within time t. In other words, ψU (x, t) := Pω∼W [∃ t0 ∈ [0, t] | ω(t0) ∈ U ] , x ∈ X , (1) where ω denotes a Brownian motion defined over the interval [0, t] that follows the standard Euclidean Wiener distribution. The amount of heat that a point x receives from N within time t is comparable to the hitting probability that a Brownian particle starting at x will impact the boundary within time t (cf. Fig. 2). Provided that x is correctly classified this is equivalent to the probability of impacting the decision boundary. In general, we evaluate ψE(y)(x, t) (which we often denote by ψ(x, t) by minor abuse of notation) through direct sampling; however, in some model cases, e.g. E(y) being a half-space, a spherical shell or a conical set, ψ(x, t) has a concise closed form (Subsection 3.4 below) that can be evaluated analytically. This allows us to easily measure deviations and compare the heat imprint of N to particular model cases. Local analysis and set-up As mentioned above our analysis is local. For each clean data point x we consider a ball B(x, r) centered at x with radius r and perform all our computations there. In particular, a free Brownian motion starting at x and defined over a maximal time interval [0, t] will on average travel a distance of √ nt (Appendix, Subsection A.1). This suggests to couple r and the maximal Brownian running time t via r = √ nt (cf. Fig. 2), so that, if not stopped by boundary impact, Brownian motion will, on average, reach the sphere ∂B(x, r) by its maximal stopping time. 3.2 AN ISOPERIMETRIC AND ISOCAPACITORY PERSPECTIVE Isoperimetric results Isoperimetric estimates will be the starting baseline (Ford et al. (2019)) to detect low levels of curvature and boundary flatness. For some background in isoperimetric results we refer to (Appendix, Subsection A.4). Let us start by defining the relative error volume µ(x, r) := Vol(E(y) ∩B(x, r)) Vol(B(x, r)) . (2) We recall the so-called Gaussian isoperimetric inequality Borell (1975); Ford et al. 
(2019):
$$\tilde{d} \le -\frac{r\,\Phi^{-1}(\mu)}{\sqrt{n}}, \quad \mu \le 1/2, \qquad (3)$$
where $\Phi^{-1}$ denotes the inverse standard normal c.d.f. and where $\tilde{d} = d(\tilde{x}, N_f)$ denotes the median distance, with $\tilde{x}$ varying normally and concentrated in the ball $B(x, r)$; we set $\tilde{d} = 0$ if $\mu \ge 1/2$. Here the isoperimetric result is rigid in the sense that equality in (3) occurs only if $E(y)$ is a half-space. In Ford et al. (2019) the authors demonstrate that defense training mechanisms lead to decision boundaries that saturate this isoperimetric inequality, i.e. in this isoperimetric sense the decision boundary $N$ becomes locally closer to being a flat hyperplane. We define the ratio between the LHS and RHS in eq. (3) as the isoperimetric saturation.

Isocapacitory results In our context of hitting probabilities (eq. (1)), results in potential theory allow us to prove isocapacitory bounds which are similar in spirit to isoperimetric bounds. More precisely, one has:
$$\mu(x, r) \le c_n\, \psi(x, t)^{\frac{n}{n-2}}, \qquad (4)$$
where $c_n$ is an appropriate constant depending on the dimension $n$, and $r = \sqrt{nt}$. The proof relies on potential theory tools (capacity) and can be found in Appendix, Proposition A.3. Motivated by the above isoperimetric saturation results, one of our main goals is to study how $\mu$ compares to $\psi(x, t)$. To this end we define the isocapacitory saturation $\tau$ as
$$\tau(x, r) := \frac{\psi(x, t)^{\frac{n}{n-2}}}{\mu(x, r)}. \qquad (5)$$
The basic guiding heuristic is that high values of $\tau$ indicate that $E(y)$ has a very low volume in comparison to its boundary size and respective heat emission. This is the case whenever $E(y)$ is a very thin region with a well-spread boundary of large surface area - e.g. a set that resembles thin spikes entering the ball $B(x, r)$. In contrast, lower values of $\tau$ should indicate a saturation of the isocapacitory inequality (4) and imply that $E(y)$ has a volume that is more comparable to its heat emission - e.g. thicker sets with a tamer boundary. To quantify this intuition, we explicitly evaluate $\tau$ for some model scenarios (Subsection 3.4).

3.3 THE NOVEL INFORMATION GIVEN BY HEAT DIFFUSION

Distances vs. hitting probabilities As discussed above, several works investigate decision boundaries in terms of distance-based analysis (Ford et al. (2019); Fawzi et al. (2016); Karimi & Tang (2020); Karimi et al. (2019)). We remark that our analysis based on hitting probabilities augments and extends the mentioned distance-based approaches. Although related, the two concepts are not equivalent. A guiding example is given by $E(y)$ being a dense collection of "thin needles" (Appendix, Subsections A.4, A.5); in such a scenario the average distance to $N$ is very small, and so is the chance that a Brownian particle will hit $N$. On the other hand, if $N$ is a dense collection of hyperplanes, the average distance to $N$ is again small, but Brownian motion will almost surely hit $N$. In this sense, evaluating hitting probabilities yields a different perspective than is available from distance-based analysis and sheds further light on the size and shape of the decision boundary, particularly with regard to its capacity and curvature features.

Isoperimetric vs. isocapacitory saturation Another demonstration of the additional information obtained through $\tau$ is given by almost flat shapes in higher dimensions that saturate isoperimetric bounds (Appendix, Subsection A.4). In these scenarios small geometric deformations can have a significant impact on $\tau$ while at the same time almost preserving the isoperimetric bounds. In other words, $\tau$ provides an additional level of geometric sensitivity. We discuss this further in Section 4.
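In practice, all three quantities above are amenable to straightforward Monte Carlo estimation. The following sketch (Python with NumPy) is a minimal illustration of this procedure rather than the authors' experimental code; it assumes a hypothetical user-supplied helper `in_error_set` that queries the trained classifier and returns, for a batch of points, membership in the error set E(y), and it uses the coupling r = √(nt) from Subsection 3.1 (so n ≥ 3 is assumed for the exponent in (5)).

```python
import numpy as np

def estimate_psi_mu_tau(x, in_error_set, r, k=400, n_paths=1000, n_vol=10000, rng=None):
    """Monte Carlo estimates of the hitting probability psi(x, t), the relative
    error volume mu(x, r) and the isocapacitory saturation tau(x, r) of eq. (5),
    with the Brownian running time coupled to the ball radius via r = sqrt(n * t).

    x            : (n,) clean data point (n >= 3 assumed)
    in_error_set : callable mapping an (m, n) array of points to a boolean mask,
                   True where a point lies in E(y)   (hypothetical helper)
    r            : radius of the local ball B(x, r)
    k            : number of Brownian steps per sampled path
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    s = r / np.sqrt(n * k)                     # step size, so that r = s * sqrt(n k)

    # psi: fraction of sampled Brownian paths that enter E(y) within k steps
    hits = 0
    for _ in range(n_paths):
        increments = s * rng.standard_normal((k, n))
        path = x + np.cumsum(increments, axis=0)
        if in_error_set(path).any():
            hits += 1
    psi = hits / n_paths

    # mu: relative volume of E(y) inside B(x, r), by uniform sampling in the ball
    directions = rng.standard_normal((n_vol, n))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    radii = r * rng.random(n_vol) ** (1.0 / n)
    mu = in_error_set(x + radii[:, None] * directions).mean()

    # tau: isocapacitory saturation, eq. (5)
    tau = psi ** (n / (n - 2)) / mu if mu > 0 else np.inf
    return psi, mu, tau
```

With these conventions the effective running time is t = k s² = r²/n, matching the coupling above; note that checking only the k discrete positions slightly underestimates the continuous hitting probability, and the estimate of µ requires enough samples to actually land in E(y) when the relative error volume is small.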
The effect of curvature The interplay between the curvature of the decision boundary and robustness has been well studied recently, e.g. Fawzi et al. (2016); Moosavi-Dezfooli et al. (2019), where various forms of robustness (adversarial, semi-random and their ratio) have been estimated in terms of the decision boundary's curvature. Intuitively, the differential geometric notion of curvature measures how a certain shape is bent. The precise definition of curvature involves taking second-order derivatives, which is in most cases impractical. However, in our context we show that the isocapacitory saturation τ implies certain curvature bounds. These statements exploit relations between curvature and volume and lead to pointwise and integral curvature bounds. As an illustration, we have:

Proposition 3.1 (Informal). Let $(x, y) \in X$ be a data sample. Then, provided that the distance $d(x, N)$ is kept fixed, larger values of $\tau$ locally imply larger pointwise/integral curvature values.

A deeper analysis with formal statements and additional details is provided in Appendix, Subsection A.6. The advantages that curvature yields for some types of compression schemes and generalization bounds are also investigated in detail in Appendix, Section B.

3.4 MODEL DECISION BOUNDARIES: HYPERPLANES, WEDGES, CONES AND "SPIKY" SETS

Given a certain geometric shape, one is often faced with questions as to how flat or spherical the given geometry is. To this end, a central technique in geometric analysis is comparison to certain model cases - e.g. a sphere, plane, saddle, etc. After having introduced τ and its basic traits, we now evaluate it for several model cases (flat hyperplanes, wedges, cones, balls and "spiky" sets). Each of these model cases illustrates a distinguished τ-behaviour: from "tame" behaviour (hyperplanes, balls) to explosion (thin cylinders, "needles and spiky" sets). Hence, having comparisons to these model cases and given a decision boundary, one can quantify how far the given surface is from being one of these models. We start by discussing the flat linear case:

Lemma 3.2. Let $(x, y)$ be a data sample and suppose that $E(y)$ forms a half-space at a distance $d$ from the given data point $x \in \mathbb{R}^n$. Then
$$\tau(x, r) = 2\,\Phi\!\left(-\frac{d}{\sqrt{t}}\right)\frac{\mathrm{Vol}(B(x, r))}{V_n(d, r)}, \qquad (6)$$
where $\Phi(s)$ is the c.d.f. of the standard normal distribution, and $V_n(d, r)$ is the volume of the smaller $n$-dimensional solid spherical cap cut off at distance $d$ from the center of a ball of radius $r$.

The computation uses standard reflection principle techniques. Figure 3 depicts an experimental discussion of Lemma 3.2. Another illuminating model is given by a "spiky" set - e.g. a thin cylinder, which is in some sense the other extreme. We have

Lemma 3.3 (Appendix, Subsection A.5). Suppose that $E(y)$ is a cylinder of height $h$ and radius $\rho$ that enters the ball $B(x, r)$. Then $\tau \nearrow \infty$ as $\rho \searrow 0$.

Further comparison results for additional model cases are given in Appendix, Subsection A.5.

4 ADVERSARIAL ATTACKS AND DEFENSES

Background and set-up We now analyze how strategies for improving adversarial and noise-shift robustness affect the decision boundary's heat diffusion properties. In particular, we keep track of the Brownian hitting probabilities ψ and the isocapacitory saturation τ. On one hand, we can view ψ as a capacitory robustness metric against continuous interpolation attacks given by Brownian noise (see also Section 1).
On the other hand, Subsection 3.4 indicates how the behaviour of τ reveals deviation from the case of a flat or "spiky" and curvy decision boundary. Our empirical analysis uses the well-known CIFAR10 and MNIST datasets (details, preprocessing and enhancements are given in Appendix, Subsection C.5). For CIFAR10, we used the Wide-ResNet-28-10 (Zagoruyko & Komodakis (2016); Ford et al. (2019)) and ResNets with 32, 44 and 56 layers (He et al. (2016)). For MNIST, we selected a LeNet-5 and additional CNN architectures. Motivated by previous work (e.g. Ford et al. (2019)), we perform three types of training: ordinary training via stochastic gradient descent (Adam optimization), training with Gaussian noise data augmentation, and training with adversarial defense strategies (FGSM and PGD methods; see also Appendix, Section C.4 for details and remarks on robustness). A detailed outline of the numerics behind Brownian motion sampling, isoperimetric/isocapacitory saturation and relative volume sampling is given in Appendix, Subsection C.3.

Analysis of results Recent results (Ford et al. (2019); Schmidt et al. (2017)) have shown qualitative differences between the adversarially robust boundaries of MNIST and CIFAR-10, which also impact the experimental findings in this work. In short, in the MNIST case a robust decision boundary is less spiky than in the CIFAR case. For more details we refer to Appendix, Subsection C.2. In Fig. 4 we collect the statistics of the WRN and LeNet models on CIFAR10 and MNIST, respectively. On one hand, we confirm previous results (Ford et al. (2019); Fawzi et al. (2016)) implying the "flattening-of-boundary" phenomenon: noisy and adversarial training appear to improve and saturate isoperimetric bounds. Furthermore, the ball B(x, r) realizing a relative error volume µ of 1% is on average scaled up for adversarial and, especially, noisy training. On the other hand, an intriguing behaviour is observed for the decision boundary's heat diffusion traits. The isocapacitory saturation τ does not appear to concentrate around the value corresponding to a flat hyperplane: defense training strategies, both FGSM- and PGD-based, do not appear to significantly alter the behaviour of τ or to force it to converge to the flat decision boundary case (shown as the dotted horizontal red line). Put differently, the chance that a continuous Brownian perturbation will find an adversarial example (scaled to the appropriate ball B(x, r)) is not significantly altered on average (see Appendix, Subsection C.7 for a visual reference). However, it appears that noisy training consistently delivers lower values of τ - intuitively, this is expected as the decision boundary is adjusted by adding Gaussian "blobs", thus naturally becoming rounder. Geometrically, the sensitivity of τ to small perturbations of almost flat surfaces (Subsection 3.2) indicates that locally around clean (unperturbed) data points a certain amount of curvature and more complex geometry is still retained. Of course, this amount is not so large as to violate the saturation of isoperimetric bounds and the robustness comparability results in the sense of Fawzi et al. (2016). For example, in the case of CIFAR10 a simple geometric model surface that has a similar τ-behaviour (as for the adversarial and noisy training) is given in Appendix, Subsections A.4, A.5: considering a data point x, an almost flat decision boundary that is concavely bent w.r.t. x with approximate curvature ≈ 1/(12.3r).
These observations reveal finer properties concerning decision boundary flattening due to defense training: in particular, noisy training appears to flatten decision boundaries and slightly bend them concavely w.r.t. the clean data points. Further results for ResNet models and CNNs are provided in Appendix, Subsection C.7.

Spiky sets and control on τ In Fig. 4 large outlying values of τ are filtered out. However, values of τ larger than 10 can account for up to 1.3% of the samples for ordinary training, and up to 2.1% and 2.6% for adversarial and noisy training, respectively. It follows that the geometry of high-dimensional decision boundaries does not admit too many high-curvature (see also Proposition 3.1) spiky regions of low volume and high heat emission (high surface area) in the sense of Subsections 3.2, 3.4. However, it appears that defense training can increase the number of such spiky regions: one might explain such behaviour by seeing defense training as a bundle of additional geometric conditions that sometimes are not able to agree and thus lead to a more degenerate (singular) geometry. Further, with respect to the initial analysis of Fig. 4, a natural question is whether one can control τ along with the isoperimetric saturation - ultimately, one hopes to design better decision boundaries (flatter, or appropriately curved; Moosavi-Dezfooli et al. (2019)) eventually leading to more robustness. However, getting tight control on τ could be a difficult task. It is, indeed, possible to obtain some basic grip on τ: we trained a LeNet-5 architecture on MNIST that exhibited significantly increased τ values while preserving isoperimetric saturation (statistics are shown as the rightmost boxplot in Fig. 4). Similar to many adversarial defenses, the training consisted of augmenting the dataset with attacks, given in this case by Brownian paths. However, it seems difficult to force τ to concentrate around the flat-case value, as well as to obtain competitive robustness of the model. On one hand, this is explained by the need to control heat diffusion through Brownian motion - the mentioned naive method is not able to capture the hitting properties sufficiently well; on the other hand, as discussed above, heat diffusion properties can be far more sensitive than isoperimetric saturation w.r.t. minor geometric perturbations.

5 GENERALIZATION BOUNDS IN TERMS OF HITTING PROBABILITIES

Compression, noise stability and generalization Recent advances (Arora et al. (2018); Suzuki et al. (2018; 2020)) indicate that generalization can be related to compression and noise stability. The guiding strategy is: (1) a large DNN f that is stable against (layer-wise) noise injections admits an effective compression to a simpler model f̃ which is almost equivalent to f. Intuitively, the noise stability absorbs the defects introduced by compression; (2) concentration results imply generalization bounds for f̃. Admittedly, the generalization estimate is obtained initially for the smaller model; however, it is also possible to "transfer" the bound to f (see the discussion at the end of this Section). In this context a simple observation is that Brownian motion and its hitting probabilities can be related, respectively, to noise injection and margins of classification: a small hitting probability of the decision boundary should indicate "margin-safety" and allow one to compress the parameters of the model more aggressively.
However, in contrast to injecting normal noise, Brownian motion, with stopping time given by boundary impacts, is more delicate and requires further analysis of the decision boundary. In the following we propose a theoretical framework that, we hope, will augment and produce further insights into the interplay between noise stability and generalization bounds. The statements are inspired by the results in Arora et al. (2018); Suzuki et al. (2020) and we follow the notation therein. First, we propose several options for goodness of approximation (compression) in the sense of heat diffusion (Appendix, Subsection B.1). We give the following definition:

Definition 1. Given a positive real number η, a classifier g is said to be an η-compression of f if
$$\left|\psi_{E_g(y)}(x, \gamma^2) - \psi_{E_f(y)}(x, \gamma^2)\right| < \eta \qquad (7)$$
for all points x in the training sample, labels y and real numbers γ.

Now, as mentioned above, we have the following generalization bound for the compressed model:

Proposition 5.1. Let us suppose that f is approximable by g in the sense of Definition 1. Here $g \in A$, where $A$ is a family of classifiers $\mathbb{R}^n \to \mathbb{R}$ parametrized by q parameters assuming r discrete values. For a classifier h, let $C_h(x, y, t)$ be the event that a Brownian path starting at x hits $E_h(y)$ within time t. Then for $t_1 \le t_2 \le T$ we have
$$L_0(g) \;\le\; \mathbb{P}_{(x,y)\sim \mathcal{D}}\left(C_{g}(x, y, t_1)\right) \;\le\; \mathbb{P}_{(x,y)\sim X}\left(C_f(x, y, t_2)\right) + \eta + O\!\left(\sqrt{\frac{q \log r}{m}}\right) \qquad (8)$$
with probability at least $1 - e^{-q \log r}$, where $L_0$ denotes the expected loss over the true data distribution.

Taking $t_2 \to 0$ in (8), one recovers the empirical loss $\hat{L}_0(f)$ on the RHS. In other words, the generalization of the smaller model g is controlled by hitting probabilities of the initial model f and corrections related to family capacity. The next natural question is the construction of g. Inspired by Johnson-Lindenstrauss techniques (cf. also Arora et al. (2018)) we are able to recover the following statement (thorough details are given in Appendix, Subsections B.5, B.6):

Proposition 5.2 (Informal). Considering a fully connected feed-forward neural network f where some flatness conditions on the layer decision boundaries are fulfilled, there exists an η-compression g in the sense of Def. 1 whose number of parameters is logarithmically smaller than that of f.

Finally, having the generalization estimates on the smaller model g, it is natural to attempt transferring those to the initial model f - in Suzuki et al. (2020) this is achieved via certain local Rademacher complexity and "peeling" techniques. However, we choose not to pursue these bounds in the present work and assume the perspective of Arora et al. (2018) that g, being almost equivalent to f, provides a reasonable indicator of generalization capabilities.

ACKNOWLEDGMENTS

We would like to thank our anonymous reviewers whose advice helped improve the quality of the presentation. We are indebted to Prof. Christian Bauckhage for his constant encouragement, support and fruitful discussions. We also sincerely thank Benjamin Wulff for maintaining the outstanding computation environment at Fraunhofer IAIS - his support and coffee conversations played an essential role in our empirical analysis. In part, this work was supported by the Competence Center for Machine Learning Rhine-Ruhr (ML2R), which is funded by the Federal Ministry of Education and Research of Germany (grant no. 01IS18038B). We gratefully acknowledge this support.
A APPENDIX A: HITTING ESTIMATES, SATURATION AND CURVATURE

A.1 BROWNIAN MOTION AND BESSEL PROCESSES

In this Subsection we introduce some basic background on Brownian motion.

Definition 2 (Brownian motion). A real-valued stochastic process $\{\omega(t) : t \ge 0\}$ is called a one-dimensional Brownian motion started at $x \in \mathbb{R}$ if the following hold:
• $\omega(0) = x$,
• the process has independent increments, that is, for $0 \le t_1 \le \cdots \le t_m$ the increments $\omega(t_j) - \omega(t_{j-1})$ for $j = 2, \dots, m$ are independent random variables,
• for $t \ge 0$, $h > 0$, the increments $\omega(t + h) - \omega(t)$ are normally distributed with expectation zero and variance $h$,
• almost surely, the function $t \mapsto \omega(t)$ is continuous.

The process $\{\omega(t) : t \ge 0\}$ is called a standard Brownian motion if $x = 0$. Finally, if $\omega_1, \dots, \omega_n$ are independent one-dimensional Brownian motions started at $x_1, \dots, x_n$, then the stochastic process $\omega(t) = (\omega_1(t), \dots, \omega_n(t))$ is called an $n$-dimensional Brownian motion started at $x = (x_1, \dots, x_n)$.

Remark A.1. The distribution of the standard 1-dimensional Brownian motion $\omega(t)$ is normal with mean 0 and variance $t$. It follows that the RMSD (root mean squared displacement) of the standard $n$-dimensional Brownian motion is $\sqrt{nt}$.

Sampling Brownian motion simulation is prescribed directly by Definition 2. Given a step size $s$ and a number of steps $k$, we sample a Brownian path as
$$\hat{\omega}(k) := \sum_{i=0}^{k} s\, X_i, \qquad X_i \sim N(0, 1). \qquad (9)$$
By Definition 2, $\mathrm{Var}[\omega(t)] = t$; hence the sampling $\hat{\omega}$ corresponds to running a Brownian motion for time
$$t = k s^2. \qquad (10)$$
In particular, the mean displacement of $\hat{\omega}$ is $s\sqrt{nk}$. In accordance with the main text, Subsection 3.1 and Fig. 2, whenever we need to sample Brownian motion contained within the ball $B(x, r)$ for its lifespan $[0, t]$, we will fix the number of steps $k$ (usually, we set $k = 400$) and adjust the step size $s$ accordingly, so that $r = s\sqrt{nk}$.

Estimating hitting probabilities A straightforward empirical way to estimate the Brownian hitting probability $\mathbb{P}_{\omega}\left[\exists\, t_0 \in [0, t] \,|\, \omega(t_0) \in S\right]$ of a target set $S$ is to evaluate the steps $\hat{\omega}(i)$, $i = 0, \dots, k$, and check whether $\hat{\omega}(i_0) \in S$ for some $i_0 \le k$. Of course, the precision of this computation depends on the number of sampled Brownian paths $\hat{\omega}$, as well as the step size $s$ and the number of steps $k$. Formal statements on convergence and numerical stability could be obtained, e.g. by means of concentration/Monte-Carlo results (e.g. Proposition B.12 below); however, in practice, in our experiments we mostly worked with the regime $k \approx 10^4$, which seemed an acceptable choice in terms of numeric stability and performance.

Explicit closed-form computation of hitting probabilities is a non-trivial task, though it is possible for some model cases (main text, Lemma 3.2). Dimension 1 is special, where we have the so-called "reflection principle", which says that
$$\mathbb{P}\left(\sup_{0 \le s \le t} \omega(s) \ge d\right) = 2\,\mathbb{P}\left(\omega(t) \ge d\right). \qquad (11)$$
For a proof of this basic statement we refer to Mörters & Peres (2010). However, in higher dimensions there is no straightforward analog of the reflection principle, and calculating hitting probabilities of spheres leads one to the deep theory of Bessel processes. Let us consider a Brownian particle $\omega(t)$ starting at the origin in $\mathbb{R}^n$ and look at the real-valued random variable $\|\omega(t)\|$ (in the literature, these are known as Bessel processes). We are interested in the probability of the particle hitting a sphere $\{x \in \mathbb{R}^n : \|x\| = r\}$ of radius $r$ within time $t$. Curiously, it seems that there is no known closed formula for such a hitting probability.
The only formula we know of is in the form of a convergent series involving zeros of the Bessel function of the first kind, and appears in Kent (1980). For the reader interested in Kent's formula, we also refer to the associated asymptotics of zeros of the Bessel function in Watson (1944). The following heuristic is implicit in many of our calculations and motivates several of our definitions: the probability
$$\mathbb{P}\left(\sup_{0 \le s \le t} \|\omega(s)\| \ge r\right) \qquad (12)$$
of a Brownian particle hitting a sphere of radius $r$ within time $t$ depends only on the ratio $r^2/t$. As a consequence, given a small $\eta > 0$ and a constant $c$, one can choose the constant $c_n$ in $t = c_n r^2$ small enough (depending on $\eta$) such that
$$\mathbb{P}\left(\sup_{0 \le s \le c_n r^2} \|\omega(s)\| \ge c r\right) < \eta. \qquad (13)$$
Roughly, what this means is the following: for a Brownian particle, the probability of hitting even a large and nearby object may be made arbitrarily small if the motion is not allowed to run sufficiently long.

A.2 HEAT DIFFUSION AND BROWNIAN MOTION DUALITY

Macroscopic vs microscopic There are roughly two broad viewpoints towards the understanding of diffusion: the "macroscopic" and the "microscopic". Macroscopically, the mechanism of diffusion can be thought of as creating a flux in the direction from greater to lesser concentration. If $u(x, t)$ measures the intensity of the quantity undergoing diffusion, and $J$ the flux across the boundary of a region $\Omega$, then in the simplest model one assumes that (up to a constant) $J = -\nabla u$. Further, we have the identity
$$\partial_t \int_{\Omega} u(x, t)\, dx = -\int_{\partial\Omega} \nu \cdot (-\nabla u)\, dS, \qquad (14)$$
where $\nu$ is the outward-pointing unit normal vector to $\partial\Omega$. By applying the divergence theorem to (14), one immediately gets the heat equation $\partial_t u = \Delta u$. Here $\Delta$ denotes the Laplace operator given by the sum of second derivatives: $\Delta = \sum_{i=1}^{n} \partial^2_{ii}$.

Now, many real-life diffusion processes are the result of microscopic particles jittering around seemingly in a random manner. This motivates the microscopic viewpoint, i.e., the modelling of heat diffusion via Brownian motion of particles. We posit that a particle located at $x \in \mathbb{R}^n$ at time $t_0$ will have the probability $\psi_U(x, t)$ of being in an open set $U \subset \mathbb{R}^n$ at time $t_0 + t$, where
$$\psi_U(x, t) = \int_U p(t, x, y)\, dy, \qquad (15)$$
and $p(t, x, y)$ is the fundamental solution of the heat equation, or more famously, the "heat kernel". In other words, $p(t, x, y)$ solves the heat equation
$$(\partial_t - \Delta)\, u(x, t) = 0, \qquad u(x, 0) = \delta(x - y), \qquad (16)$$
with the Dirac delta distribution as the initial condition. Via the Fourier transform, it is easy to establish that $p(t, x, y)$ is given by
$$p(t, x, y) = \frac{1}{(4\pi t)^{n/2}}\, e^{-\frac{|x-y|^2}{4t}}. \qquad (17)$$
This builds the bridge to pass between analytic statements on the side of the heat equation and probabilistic statements on the side of Brownian motion (see Grigor'Yan (2001), Taylor (2011)). The precise formulation of this duality is given by the celebrated Feynman-Kac theorem discussed in Subsection A.3 below.

Heating up the decision boundary In our context we introduce the following heat diffusion process along the classifier's decision boundary $N$:
$$(\partial_t - \Delta)\,\psi(x, t) = 0, \qquad \psi(x, 0) = 0 \;\;\forall x \in \mathbb{R}^n, \qquad \psi(x, t)\big|_{x \in N} = 1 \;\;\forall t > 0. \qquad (18)$$
In other words, $\psi(x, t)$ gives the heat quantity at the point $x$ at time $t$, given that at the initial moment $t = 0$ all points have heat quantity 0 and afterwards a constant heat source of intensity 1 is applied only at the decision boundary $N$. As remarked above, this is the macroscopic picture: the mentioned Feynman-Kac duality implies that $\psi(x, t)$ is also the hitting probability $\mathbb{P}_{\omega}\left[\exists\, t_0 \in [0, t] \,|\, \omega(t_0) \in N\right]$.
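Before moving to the Feynman-Kac theorem, the duality above can be sanity-checked numerically in the one case where a closed form is available: for a half-space at distance d from the starting point, the reflection principle (11) gives the hitting probability 2Φ(−d/√t). The sketch below (Python with NumPy/SciPy) is an illustrative check under the sampling conventions of Subsection A.1, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def halfspace_hitting_check(n=20, d=0.5, t=1.0, k=400, n_paths=20000, seed=0):
    """Compare the sampled probability that an n-dimensional Brownian path crosses
    the half-space {x_1 >= d} within time t against the closed form
    2 * Phi(-d / sqrt(t)) given by the reflection principle (11)."""
    rng = np.random.default_rng(seed)
    s = np.sqrt(t / k)                      # step size, so that t = k * s**2

    hits = 0
    for _ in range(n_paths):
        increments = s * rng.standard_normal((k, n))
        # the projection onto e_1 is again a one-dimensional Brownian motion
        first_coordinate = np.cumsum(increments[:, 0])
        if (first_coordinate >= d).any():
            hits += 1

    empirical = hits / n_paths
    closed_form = 2.0 * norm.cdf(-d / np.sqrt(t))
    return empirical, closed_form
```

The two values agree up to Monte Carlo error and a small discretization bias (the discrete walk can only undershoot the continuous supremum), which is the same bias that affects the empirical estimates of ψ throughout.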
A.3 THE FEYNMAN-KAC THEOREM It is well-known that given a reasonable initial condition u(x, 0) = f(x), one can find an analytic solution to the heat equation via convolution with heat kernel, et∆f(x) := p(t, x, .) ∗ f(.). This just follows from (16) by convolving directly. Now, via the duality of diffusion explained above, one expects a parallel statement on the Brownian motion side, one which computes the contribution of all the heat transferred over all Brownian paths reaching a point at time t. It stands to reason that to accomplish this, one needs an integration theory defined over path spaces, which leads us to the theory of Wiener measures. We describe the main idea behind Wiener measure briefly: consider a particle undergoing a random motion in Rn (given by a continuous path ω : [0,∞) → Rn) in the following manner: given t2 > t1 and ω(t1) = x1, the probability density for the location of ω(t2) is p(t, x, x1) = 1 (4π(t2 − t1))n/2 e − |x−x1| 2 4(t2−t1) . We posit that the motion of a random path for t1 ≤ t ≤ t2 is supposed to be independent of its past history. Thus, given 0 < t1 < · · · < tk, and Borel sets Ej ⊆ Rn, the probability that a path starting at x = 0 at t = 0, lies in Ej at time tj is∫ E1 · · · ∫ Ek p(tk − tk−1, xk, xk−1) · · · p(t1, x1, 0) dxk · · · dx1. The aim is to construct a countably-additive measure on the space of continuous paths that will capture the above property. The above heuristic was first put on a rigorous footing by Norbert Wiener. Using the concept of Wiener measure, one gets the probabilistic (microscopic) description of heat diffusion, which is the content of the celebrated Feynman-Kac theorem: Proposition A.2. Let Ω ⊆ Rn be a domain, with or without boundary (it can be the full space Rn). In case of a boundary, we will work with the Laplacian with Dirichlet boundary conditions. Now, let f ∈ L2(Ω). Then for all x ∈ Ω, t > 0, we have that et∆f(x) = Ex (f (ω(t))φΩ(ω, t)) , (19) where ω(t) denotes an element of the probability space of Brownian paths starting at x, Ex is the expectation with regards to the Wiener measure on that probability space, and φΩ(ω, t) = { 1, if ω([0, t]) ⊂ Ω 0, otherwise. For a more detailed discussion, see Georgiev & Mukherjee (2018a). A.4 ISOPERIMETRIC AND ISOCAPACITORY RESULTS Isoperimetric bounds Isoperimetric inequalities relating the volume of a set to the surface area of its boundary have given rise to a wealth of results Burago & Zalgaller (1988). Given a set M with boundary ∂M , the basic pattern of isoperimetric inequalities is: Vol(M) ≤ c1 Area(∂M) n n−1 , (20) where c1 is an appropriate positive constant depending on the dimension n. In many cases, equality (or saturation in the sense of almost equality) in (20) is characterized by rather special geometry. For example, classical isoperimetric results answer the question, which planar set with a given circumference possesses the largest area, with the answer being the disk. As discussed in the main text, isoperimetric considerations have recently lead to significant insights about decision boundaries of classifiers subject to adversarial defense training mechanisms Ford et al. (2019) by revealing flattening phenomena and relations to robustness. Isocapacitory bounds As mentioned in the main text, one can prove types of isocapacitory bounds that resemble the isoperimetric ones: roughly speaking, these replace the area term with suitable Brownian hitting probabilities. We have the following result (cf. also Georgiev & Mukherjee (2018a)): Proposition A.3. 
Let B(x, r) ⊂ Rn, n ≥ 3, and let E ⊂ B(x, r) denote an “obstacle”, and consider a Brownian particle started from x. Then the relative volume of the obstacle is controlled by the hitting probability of the obstacle: Vol(E) Vol(B(x, r)) ≤ cn (ψE(x, t)) n n−2 . (21) Here, cn is a positive constant whose value is dependent only on n provided the ratio between r2 and t is suitably bounded. In particular, in the regime r2 = nt, we have that cn = ( Γ ( n 2 − 1 ) /Γ ( n 2 − 1, n 4 )) n n−2 . Here, Γ(s, x) represents the upper incomplete Gamma function Γ(s, x) := ∫ ∞ x e−tts−1 dt. Proof. Recall that the capacity (or more formally, the 2-capacity) of a set K ⊂ Rn defined as Cap(K) = inf η|K≡1,η∈C∞c (Rn) ∫ Rn |∇η|2. (22) From Section 2.2.3, Maz’ya (2011), we have the following “isocapacitory inequality”: Cap(E) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n , (23) where ωn = 2π n/2 Γ(n2 ) is the (n− 1)-dimensional surface area of Sn−1. Now, we bring in the following estimate given by Theorem 3.7 of Grigor’Yan & Saloff-Coste (2002): ψE(x, t) ≥ Cap(E) ∫ t 0 inf y∈∂E p(s, x, y) ds. (24) Now, we have ψE(x, t) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 inf y∈∂E e− |x−y|2 4s ds ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 e− r2 4s ds = ω2/nn n n−2 n (n− 2)|E| n−2 n 1 4rn−2πn/2 ∫ ∞ r2 4t e−zzn/2−2 dz. After rearrangement the proposed claim follows. Intuitively, it makes sense that if the volume of a set is fixed, one can increase its hitting probability by “hammering” the set into a large thin sheet. However, it seems unlikely that after lumping the set together (as in a ball), one can reduce capacity/hitting probability any further. Moreover, isocapacitory bounds are saturated by the n-ball. It is also illustrative to compare the seemingly allied concepts of capacity and surface area. A main difference of capacity with surface area is the interaction of capacity with hitting probabilities. As an illustrative example, think of a book which is open at an angle of 180◦, 90◦, 45◦ respectively. Clearly, all three have the same surface area, but the probability of a Brownian particle striking them goes from the highest to the lowest in the three cases respectively. It is rather difficult to make the heuristic precise in terms of capacity (at least from the definition). Capacity can be thought of as a soft measure of how "spread out" or "opened-up" a surface is, and is highly dependent on how the surface is embedded in the ambient space. Isocapacitory vs isoperimetric saturation A main line of analysis in the present work addresses the interplay between isocapacitory and isoperimetric saturation. In our particular context of defense training mechanisms we observe saturation of isoperimetric bounds for the classifier’s decision boundaries - this implies that decision boundaries are not far from being flat. However, as mentioned before, it turns out that isocapacitory saturation does not concentrate around the values corresponding to hyperplanes (overall, it seems to stay well below that value). In this sense, isocapacitory saturation acts as a finer sensitive measure of deviation from flatness. A simple model geometric scenario that provides similar behaviour is illustrated in Fig. 5 and Fig. 6. A.5 MODEL CASES We first begin with the proof of Lemma 3.2. Proof. Let us select an orthonormal basis {e1, . . . , en} so that e1 coincides with the given hyperplane’s normal vector. 
A standard fact about n-dimensional Brownian motion is that the projections on the coordinate axes are again one-dimensional Brownian motions Mörters & Peres (2010). Thus, projecting the n-dimensional Brownian motion onto e1 the hitting probability of the hyperplane is the same as the probability that one-dimensional Brownian motion ω(t) will pass a certain threshold d by time t. To compute this probability we use the reflection principle (11) in conjunction with Remark A.1. Consequently, the RHS is equal to 2Φ(−d/ √ t). The computation of µ(x, r) follows by definition. Here we note that the dimension n enters only in terms of the spherical cap volume. An impression how τ behaves for different choices of n in terms of the distance d is given in Fig. 7. In particular, one observes the well-known concentration of measure phenomenon and Levy’s lemma: the volume of the spherical cap exhibits a very rapid decay as n becomes large. Moreover, experiments reveal a curious phenomenon: there is a threshold distance d0 until which τ ≈ 2 and afterwards τ explodes. In Fig. 8 we plot further interesting model cases where the error set forms a wedge (the region between two intersecting hyperplanes) or a cone. Spiky sets As discussed in the main text, one observes a high isocapacitory saturation τ for the so-called "spiky" sets - these are sets of relatively small volume and relatively large/dense boundary. Theoretically, a guiding model case in this direction is given by Lemma 3.3 in the main text, whose proof we now record. Proof. Let Tρ denote the ρ- tubular neighborhood of a line segment of length h inside Rn. Clearly, Tρ ∼= B(0, ρ)× [0, h], where B(0, r) is a ρ-ball inside Rn−1. By the well-known process of Steiner symmetrization in Rn, it is clear that the expression for capacity in (22) will be minimized by a function that is “radially symmetric” around the central axis of the tube Tρ, that is f(x, y) = f(|x|), where x ∈ B(0, ρ), y ∈ [0, h]. Then, as we scale ρ→ λρ, where λ↘ 0, Cap (Tλρ) ∼ λn−3 Cap (Tρ) (which is seen directly from the definition (22)), whereas the volume scales as |Tλρ| = λn−1 |Tρ|. Now assume that the cylinder Tρ is inside the closed ball B(x, r) ⊂ Rn, the central axis of Tρ is pointing towards x, and Tρ is touching the boundary of B(x, r). To pass from capacity to hitting probability of the set Tρ, we use that Grigor’Yan & Saloff-Coste (2002): Cap(Tρ)r 2 Vol(B(x, r)) e−C r2 t ≤ ψTρ(x, t). (25) Finally, using the definition of τ and putting the above estimates together, one sees that in the time regime of O(r2), τ scales like λ−2/(n−2), and hence, τ ↗∞ as λ↘ 0. See also Figure 8 for a visual discussion of the isocapacitory saturation for the model cases of wedges and cones. A.6 CURVATURE ESTIMATES IN TERMS OF ISOCAPACITORY SATURATION The geometric concept of curvature has a rich history and plays a central role in differential geometry and geometric analysis. There are several notions of curvature in the literature, ranging from intrinsic notions like sectional, Ricci or scalar curvatures to extrinsic (that is, dependent on the embedding) notions like principal curvatures and mean curvature, which are encoded in the second fundamental form. In this note we use a somewhat “soft” definition of curvature, following previous work Fawzi et al. (2016); Dezfooli et al. (2018). 
Suppose the decision boundary Nf is sufficiently regular (C2 is enough for our purpose) and it separates Rn into two components R1 := {f > 0} and R2 := {f < 0}, corresponding to a binary classification (the construction in the multi-label case is analogous). For a given p ∈ Nf , let rj(p) denote the radius of the largest sphere that is tangent to Nf at p, and fully contained inRj . Then, one defines the curvature κ at p as κ(p) = 1/min (r1(p), r2(p)) . (26) See Fig. 10 for a geometric illustration. However, it turns out that most notions of curvature are quite subtle (see Fawzi et al. (2016)) and at this point, seemingly more cumbersome and intractable to handle experimentally. We will take an indirect approach, and attempt to read off the effect of and on curvature via the isocapacitory saturation τ . Again, we begin with the model cases: we first study the behaviour of curvature κ if τ achieves its least possible value. We start by fixing some notation. As before let us consider a ballB(x, r) with an error set E ⊂ B(x, r) and boundary N = ∂E (clearly our main case of interest is E = E(y) ∩B(x, r)). Let us denote the the distance d = d(x,N ) and suppose the point y ∈ N realizes this distance, i.e. d(x, y) = d. To rule out some degenerate cases and ease the analysis we introduce the following assumption: Assumption: The hypersurface N and the point x are on different sides of the tangent hyperplane H∗ := TyN (cf. Fig. 11). This assumption is also technically important, as otherwise low values of τ will be produced by annuli surrounding x. With that in place, we have the following rigidity result: Proposition A.4. Let us fix the distance d = d(x,N ) and suppose the assumption above holds. Then the least possible value of τ is attained only if the curvature κ of the hypersurface N is 0. Proof. As above letH∗ be the tangent hyperplane at distance d from x, and let C denote the (smaller) spherical cap formed by H∗ ∩B(x, r). The proof relies on the following variational argument. If N is not the same as H∗, then N ⊆ C, with y ∈ N ∩H∗. We wish to argue then one can perturb N infinitesimally to decrease the value of τ , so the only minimizer of the above expression has to be H∗. The basic idea is to cut out a small piece pv around v and paste it in the region of around ṽ (Fig. 11). We say that N has positive curvature at some point z if the ball defining the curvature at z and the point x lie on different sides of N . The construction is as follows. Let S(x, s) be the (n− 1)-sphere centered at x with radius s. We consider two cases: Case I: Let us suppose that there exist s1 < s2 ≤ r and points v, ṽ ∈ N such that the curvature of N at v ∈ N ∩ S(x, s1) is greater than the curvature at ṽ ∈ N ∩ S(x, s2). Let us, moreover, choose the infimum among such s1 and the supremum among such s2. To define the mentioned piece pv , we consider two small balls B(v, ε), B(ṽ, ε) (where ε s2 − s1), and cut out a set pv = E ∩ B(v, ε) such that ∂ (E \B(v, ε)) is congruent to N ∩ B(ṽ, ε) (this is possible due to the curvature assumptions at v, ṽ). Then, we define the new error set E′ = E∪pṽ \pv and the boundaryN ′ = ∂E′, where pṽ represents the image of pv under the rigid motion and attached inside B(ṽ, ε) (see Fig. 11). It is now clear that |E| = |E′|, but ψE′(x, T ) < ψE(x, T ) for all T > 0. 
The last inequality follows from the evaluation of the explicit heat kernel that defines hitting probability ψ as stated by Feynman-Kac duality: ψE(x, T ) = ∫ T 0 ∫ E 1 (4πt)n/2 e− (x−y)2 4t dy dt > ∫ T 0 ∫ E′ 1 (4πt)n/2 e− (x−y)2 4t dy dt = ψE′(x, T ). It follows from the definition of τ that τE ≥ τE′ . Case II: If Case I is not satisfied, then, similarly, we choose two points v, ṽ, but instead of defining the piece pv by intersection with a small ball around v we select pv as a “concavo-convex lens shape” domain, where the curvature on the concave “inner side” of pv of the lens is greater than that on the convex outer side. As before, we attach a rigid motion image of pv inside B(ṽ, ε). The rest of the argument is similar to Case I. With reference to our previous discussion of spikes, it heuristically makes sense that a spike must have reasonably high curvature (it can have high curvature on the average, or if it is flat at most places, then have a sharp needle like end where the curvature is very high). In the same setting as Proposition A.4 let us, moreover, for simplicity assume that N is the graph of a function over the tangent hyperplane H∗ (Fig. 11). Proposition A.5. In the above setting let us fix the value of d. Then, if the maximum curvature κmax of N is sufficiently high (greater than some universal constant), then it satisfies κmax ≥ τ 1 n r ( Φ ( − d√ t ))− 1n−2 , (27) where Φ denotes the c.d.f. of the standard normal distribution. If a point attaining this maximum curvature is within the half concentric ball B(x, r/2), then κmax satisfies the stronger estimate κmax ≥ τ 1 n (r − d) r n n−1 ( Φ ( − d√ t ))− n (n−1)(n−2) . (28) Proof. Recalling the definition of the isocapacitory saturation τ , we will bound the numerator (resp. denominator) of τ from above (resp. below). First, for the numerator ψE(x, t) we will use a basic monotonicity property of hitting probabilities stating that for two sets A ⊆ B one has ψA(x, t) ≤ ψB(x, t) - this follows directly from the definition of ψ. Now, since E ⊆ C where C is the smaller spherical cap of B(x, r) ∩H∗, we have ψE(x, t) ≤ ψC(x, t). However, recalling the explicit form of ψC from Lemma 3.2 of the main text, we have ψE(x, t) ≤ Φ ( − d√ t ) . Second, to bound the denominator of τ (i.e. Vol(E)), we observe that if κmax is large enough, by definition E contains a ball of radius 1κmax , and Vol(E) ≥ ωn κnmax where ωn denotes the volume of unit n-dimensional ball. That finally implies, τ ≤ ( Φ ( − d√ t )) n n−2 Vol(B(x, r)) Vol(E) ≤ ( Φ ( − d√ t )) n n−2 rnκnmax, which proves (27). If a point of maximum curvature is inside a concentric ball of radius r/2, thenE contains≈ κmax(r−d)2 balls of radius 1κmax , which implies that Vol(E) ≥ κmax(r − d) ( ωn κnmax ) . The rest of the proof is similar. Now, we give a curvature estimate which works in any regime, without any restrictions. The tradeoff is a global average bound of the Lp-type rather than pointwise estimates. Proposition A.6. In the setting as above, let us fix the distance d = d(x,N ). At each point of N , let us denote by κ the maximal sectional curvature of N at that point. The following estimate holds: ‖K‖L1 ≥ Vn(d, r)− 2ωnr nΦ ( − d√ t ) τH , (29) where Vn(d, r) denotes the volume of the smaller spherical cap at distance d, the constant ωn denotes the volume of unit ball in Rn, and the function K is an integral function of the curvature κ over lines (defined in (31) below). Proof. Again, we suitably bound the numerator and denominator of τ . 
Starting with the numerator, as explained in Proposition A.5, we have by monotonicity ψE(x, t) ≤ 2Φ ( − d√ t ) . (30) To bound the denominator of τ we proceed as follows. Let N be the graph of the function g̃(x1, · · · , xn−1), where the variables xj are taken from the hyperplane H∗ (Fig. 11) at distance d from x; the point at which N touches this hyperplane is taken as the origin. Let ϕ be a smooth cut-off function defined on the hyperplane such that ϕ ≡ 1 on the set S of all (x1, · · · , xn−1) such that g̃(x1, · · · , xn−1) ∈ B(x, r), and ϕ ≡ 0 outside the -tubular neighborhood of S. Finally, let g := ϕ g̃. Now we see that, letting a = (r2 − d2)1/2, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 g (ρ, θ) ρ n−2 dρ dθ. Now, if η denotes the unit vector in the direction of a fixed (ρ, θ), observing that g (0) = 0, we have by the fundamental theorem of calculus g (ρ, θ) = ∫ 1 0 ∂tg (tρη, θ) dt. In turn, applying the fundamental theorem a second time and observing that ∇g (0) = 0, we have that g (ρ, θ) = ∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt. Putting everything together we get, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 (∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt ) ρn−2 dρ dθ. Now, we define the following integral quantity: K (ρ, θ) = ∫ 1 0 ∫ 1 0 |κ (stρη, θ)| ds dt. (31) Noting that the maximum sectional curvature bounds the second derivatives, finally we have that Vn(d, r)−Vol(E) ≤ ‖K ‖L1 . (32) To obtain (29) we now put all the above estimates together and let ↘ 0. B APPENDIX B: GENERALIZATION BOUNDS AND COMPRESSION SCHEMES Background A main line of ML and statistical inference research addresses questions of generalization. To set the stage we start with some notation. Let us suppose that the dataset X is sampled from a probability distribution D, i.e. (x, y) ∼ D. Following conventions from the literature Arora et al. (2018) we define the expected margin loss of a classifier f by Lγ(f) := P(x,y)∼D [ f(x)[y] ≤ γ + max j=1,...,k;j 6=y f(x)[j] ] . (33) We use the notation L̂γ to denote the expected empirical margin loss over the given data set X . Finally, the generalization error is defined as Lγ − L̂γ . Quite roughly speaking, standard generalization results attempt to estimate the performance of the classifier on unseen samples (i.e. the full data distribution), thus yielding bounds of the form: Lγ1(f) ≤ L̂γ2(f) + F (γ1, γ2, f,X ), (34) where F is an additional term that usually depends, e.g. on the size of X , the expressiveness of f and further margin information (γ1, γ2). B.1 COMPRESSION IN A HEAT DIFFUSION SENSE IMPLIES GENERALIZATION BOUNDS We first state a well-known concentration inequality due to Hoeffding which will find repeated use in the ensuing sections: Proposition B.1 (Hoeffding’s inequality). Let X1, . . . , Xn be independent random variables taking values in the interval [0, 1], and let X = 1n (X1 + · · ·+Xn) be the empirical mean of these random variables. Then we have: P ( X − E ( X ) ≥ t ) ≤ e−2nt 2 . (35) We now provide the proof of Proposition 5.1 of the main text. Proof. The strategy of proof follows well-known "weak-law-of-large-numbers" concentration techniques in a spirit similar to Arora et al. (2018). Step 1. First, we show that for a given g as |X | → ∞, P(x,y)∼X (Cg(x, y, t1))→ P(x,y)∼D (Cg(x, y, t1)) , (36) where Cg(x, y, γ2) is the event that a Brownian path starting at x hits Eg(y) within time γ2. The rate of convergence is determined through Chernoff concentration bounds. Choose α ∈ A, and let gα be the corresponding classifier. 
Attached to each sample point xj , there is a Bernoulli random variable Xj which takes the value 1 if Cgα(xj , y, γ 2) happens, and 0 otherwise. Then, the average X = 1m ∑m j=1Xj is given by the average of m i.i.d. Bernoulli random variables each of whose expectations is given by P(x,y)∼D Cgα(x, y, γ2). Furthermore, we note that if a data sample is misclassified, then the Brownian particle almost surely will hit the error set. Combining this observation with the concentration estimate (35) above, we obtain L0(gα) ≤ P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + ξ, (37) with probability at least 1− e−2ξ2m. If each classifier gα has q parameters, each of which can take r discrete values, we take ξ = √ q log r m . Step 2. The estimate from the previous step should hold for every classifier gα in the family A with large probability. This is guaranteed by a union bound and tuning the Chernoff bounds from the convergence rate. More precisely, there are rq different choices α ∈ A, and hence by taking the union of the estimate in (37), one can say that P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + √ q log r m (38) with probability at least 1− e−q log r over all α ∈ A. Step 3. Finally one uses the fact that f is approximable by at least one g = gα0 for some α0 in A. Via Definition 1 of the main text, one sees that P(x,y)∼X ( Cgα0 (x, y, γ 2) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η, which finally gives that with probability at least 1− e−q log r, we have L0(g) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η +O (√ q log r m ) . (39) Remark B.2. As noted, a classifier f classifies a point x wrongly if and only if ψE(y)(x, t) = 1 for all time scales t. With this observation, and since (39) works for all real numbers γ, letting γ → 0, we have that with probability at least 1− e−q log r, L0(g) ≤ L̂0(f) + η +O (√ q log r m ) . This recovers a loss estimate which is similar to the estimate in Theorem 2.1 of [1]. Indeed, one can consider P(x,y)∼X ( Cf (x, y, γ 2 ) as a “soft” or probabilistic measure of classification with margin ≈ γ. When defining the notion of a compression, instead of taking a pointwise difference as in Definition 1 of Arora et al. (2018), we would like to capture the idea that the decision boundary of a good compression should be “close enough” to the decision boundary of the original classifier. In our context, this implies that their “heat signatures” at the sample points should be close enough at all time scales. As noted in the main text, Definition 1 is definitely one natural option to define goodness of compression in a heat-diffusion sense. Another natural way is to consider the Brownian motion’s running time and define a good approximation as follows: Definition 3. Given a positive real number η, a classifier g is said to be an η−compression w.r.t. hitting time of f if ψEg(y)(x, γ 2 − η) ≤ ψEf (y)(x, γ 2) ≤ ψEg(y)(x, γ 2 + η) (40) for all points x in the training sample, labels y and real numbers γ2 ≥ η. Analogously, we have the following Proposition B.3. Let us suppose that f is approximable by g in the sense of Definition 3. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. As before, for a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then we have L0(g) ≤ P(x,y)∼D ( Cgα(x, y, γ 2 − η) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) +O (√ q log r m ) (41) with probability at least 1− e−q log r. The proof proceeds similarly as above. 
Letting $\gamma^2 \to \eta$ gives us
$$L_0(g) \le \mathbb{P}_{(x,y)\sim X}\left(C_f(x, y, \eta)\right) + O\!\left(\sqrt{\frac{q \log r}{m}}\right). \qquad (42)$$
Again, the first term on the RHS can be interpreted as the geometric margin of classification. In particular, if the classifier f separates points by a distance of $\approx \sqrt{n\eta}$, then, since the Brownian motion travels $\approx \sqrt{n\eta}$, hitting the error set will happen only if a misclassification occurred, i.e. we have
$$\mathbb{P}_{(x,y)\sim X}\left(C_f(x, y, \eta)\right) \approx L_0(f). \qquad (43)$$

B.2 A SHARP VARIANT OF THE JOHNSON-LINDENSTRAUSS ALGORITHM

Several state-of-the-art compression schemes utilize a dimensionality reduction in the spirit of Johnson-Lindenstrauss (JL), Arora et al. (2018). In this Subsection we discuss a JL compression scheme that will later be coupled with and tuned by some heat-diffusion estimates. We begin by discussing a variant of JL (Alg. 1).

Data: Original matrix $A$ of dimension $h_1 \times h_2$, $\beta \in (0, 1)$.
Result: Stochastic compressed matrix $\hat{A}$ with $O\left(\log(h_1 h_2)/(\beta\alpha^2)\right)$ non-zero entries such that $\mathbb{P}\left[\|\hat{A}x - Ax\| \ge \alpha \|A\|_F \|x\|\right] \le \beta$.
Start with matrix $A$, real number $\alpha$;
while $i \le h_1$, $j \le h_2$ do
    Let $z_{ij} = 1$ with probability $p_{ij} = \frac{2 a_{ij}^2}{\beta \alpha^2 \|A\|_F^2}$, and $z_{ij} = 0$ otherwise;
    Let $\hat{a}_{ij} = \frac{z_{ij} a_{ij}}{p_{ij}}$.
end
Return $\hat{A} = (\hat{a}_{ij})$.
Algorithm 1: Compressing a matrix $A \in \mathbb{R}^{h_1 \times h_2}$

Proposition B.4. Let $A$ be a matrix of dimension $h_1 \times h_2$. Then one can find a compressed matrix $\hat{A}$ such that $\|Ax - \hat{A}x\| \le \alpha \|A\|_F \|x\|$ with probability at least $1 - \beta$, where the number of parameters of $\hat{A}$ is $O\left(\log(h_1 h_2)/(\beta\alpha^2)\right)$.

A proof of Proposition B.4 in the spirit of classical JL can be provided - however, here we introduce a Bernoulli scheme which is a minor modification of Algorithm 2 of Arora et al. (2018).

Proof. Define the random variables $z_{ij}$ which take the value 1 with probability $p_{ij} = \frac{2 a_{ij}^2}{\beta \alpha^2 \|A\|_F^2}$, and the value 0 otherwise. Define $\hat{a}_{ij} = \frac{z_{ij} a_{ij}}{p_{ij}}$. One can now calculate that $\mathbb{E}(\hat{a}_{ij}) = a_{ij}$ and $\mathrm{Var}(\hat{a}_{ij}) \le \beta\alpha^2 \|A\|_F^2$. Using the above, one can further calculate that $\mathbb{E}(\hat{A}x) = Ax$ and $\mathrm{Var}(\hat{A}x) \le \|x\|^2 \|A\|_F^2 \beta\alpha^2$. By Chebyshev's inequality, this gives us that
$$\mathbb{P}\left[\|\hat{A}x - Ax\| \ge \alpha \|A\|_F \|x\|\right] \le \beta.$$
Now, the expected number of non-zero entries in $\hat{A}$ is $\sum_{i,j} p_{ij} = \frac{2}{\beta\alpha^2}$. An application of Chernoff bounds now gives that with high probability the number of non-zero entries is $O\left(\log(h_1 h_2)/(\beta\alpha^2)\right)$.

B.3 HITTING PROBABILITY, CAPACITY SENSITIVITY AND COMPRESSION

As discussed in the main text, here we use hitting probabilities associated to the decision boundary to define a concept of "capacity sensitivity" of a neural net layer. The heuristic is: the lower the capacity sensitivity of a layer, the easier it is to compress the layer to one with fewer parameters. This goes in the spirit of current state-of-the-art results on compression and generalization bounds (Arora et al. (2018), Suzuki et al. (2018), Suzuki et al. (2020)). In particular, in Arora et al. (2018) the authors provide the notions of noise sensitivity and noise cushions motivated by Gaussian noise injections. Our first proposed definition for "heat-diffusion noise cushions" and capacity sensitivity goes as follows:

Definition 4. Let $\eta \sim \mathcal{N}$ be distributed along a noise distribution $\mathcal{N}$ concentrated in the ball $\|\eta\| \le \eta_0$. We define the capacity sensitivity $S(x, A^i; t)$ of a layer $A^i$ at the point $x$ as
$$S(x, A^i; t) := \mathbb{E}_{\eta \sim \mathcal{N}}\, \frac{\left|\psi_{E_f}\left(\phi(A^i(x + \|x\|\eta)), t\right) - \psi_{E_f}\left(\phi(A^i x), t\right)\right|}{\left|\psi_{E_f}\left(\phi(A^i x), t\right)\right|}. \qquad (44)$$
We denote the maximum and expected sensitivity respectively as Sm(Ai; t
1. What is the main idea and contribution of the paper?
2. How does the proposed approach differ from existing methods in terms of decision boundary analysis?
3. Can you explain the concept of heat diffusion and its relation to the decision boundary?
4. How does the Feynman-Kac duality help in understanding the stability of the classifier?
5. What are the strengths and weaknesses of the paper, particularly regarding its technicality and structure?
6. Do you have any concerns or suggestions regarding the experimental results presented in the paper?
7. How does the paper impact the field of machine learning, particularly in terms of introducing new concepts and ideas?
Review
Review Title: HEATING UP DECISION BOUNDARIES: ISOCAPACITORY SATURATION, ADVERSARIAL SCENARIOS AND GENERALIZATION BOUNDS

Summary of the paper: The idea of the paper is to introduce a new view on the geometry of the decision boundary of a classifier. Just as we may speak of the "margin" as in large margin methods and the hinge loss, the paper introduces the idea of a heat diffusion from the decision boundary - the amount of heat diffused to the data points gives a more subtle notion of stability. What a creative idea. Even cooler, the authors show that we may utilise the Feynman-Kac duality to cast this in terms of the probability that a random walk starting at the data hits the decision boundary. This is appealing because it differentiates between being near to, e.g., a long thin decision boundary which approaches from just one side, and being completely surrounded by the decision boundary - something not accounted for by distance to the boundary. Most of the paper is a really nice and gentle discussion of this idea. I gained some appreciation for the work even though it is extremely technical under the hood, and this is due to the nice writing in the main paper. To allow this the authors had to push their main result to the end of the paper (proposition 4.1) and leave the details to the appendix, but this is a fair trade-off in my opinion. It is remarkable that in spite of the technicality involved, the authors manage to obtain proposition 4.1, an impressive first application of the new notions. The appendices are dense, but still nicely written and enjoyable even for the relatively uninitiated such as myself.

Pros: This paper stands out as genuinely creative and novel. It is written with an enjoyable style which will surely motivate many theoreticians to delve into the details. I only wish I had the time and talent to do so myself. While primarily theoretical and highly novel, the authors even include some thought-provoking experimental results.

Cons: One might argue whether the structure of the paper is ideal, but I would counter that the paper is extremely pleasant to read, and that it makes more sense to try to lure the reader into a curious mindset rather than bash them on the head with heavy details from the outset. Nicely done.

Recommendation: I strongly recommend accepting this paper. Even if the details - which I have not checked - prove to have issues, the novelty of the work makes it a must-have in the portfolio of ideas included in the upcoming ICLR.
ICLR
Title Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds

Abstract In the present work we study classifiers' decision boundaries via Brownian motion processes in ambient data space and associated probabilistic techniques. Intuitively, our ideas correspond to placing a heat source at the decision boundary and observing how effectively the sample points warm up. We are largely motivated by the search for a soft measure that sheds further light on the decision boundary's geometry. En route, we bridge aspects of potential theory and geometric analysis (Maz'ya (2011); Grigor'Yan & Saloff-Coste (2002)) with active fields of ML research such as adversarial examples and generalization bounds. First, we focus on the geometric behavior of decision boundaries in the light of adversarial attack/defense mechanisms. Experimentally, we observe a certain capacitory trend over different adversarial defense strategies: decision boundaries locally become flatter as measured by isoperimetric inequalities (Ford et al. (2019)); however, our more sensitive heat-diffusion metrics extend this analysis and further reveal that some non-trivial geometry invisible to plain distance-based methods is still preserved. Intuitively, we provide evidence that the decision boundaries nevertheless retain many persistent "wiggly and fuzzy" regions on a finer scale. Second, we show how Brownian hitting probabilities translate to soft generalization bounds which are in turn connected to compression and noise stability (Arora et al. (2018)), and these bounds are significantly stronger if the decision boundary has controlled geometric features.

1 INTRODUCTION AND BACKGROUND

The endeavor to understand certain geometric aspects of decision problems has led to intense research in statistical learning. These range from the study of data manifolds, through landscapes of loss functions, to the delicate analysis of a classifier's decision boundary. In the present work we focus on the latter. So far, a wealth of studies has analyzed the geometry of decision boundaries of deep neural networks (DNNs), reaching profound implications in the fields of adversarial machine learning (adversarial examples), robustness, margin analysis and generalization. Inspired by recent isoperimetric results and curvature estimates (Ford et al. (2019); Moosavi-Dezfooli et al. (2019); Fawzi et al. (2016)), we attempt to provide some new aspects of decision boundary analysis by introducing and studying a corresponding diffusion-inspired approach. In this note the guiding idea is to place a heat source at the classifier's decision boundary and estimate its size/shape in terms of the amount of heat the boundary is able to emit within a given time (Fig. 1). The goal is to extract geometric information from the behavior of heat transmission. This technique of heat content is well known within capacity/potential theory and has led to a variety of results in spectral analysis relating heat diffusion and geometry, Jorgenson & Lang (2001); Grigor'Yan & Saloff-Coste (2002); Maz'ya (2011). However, working with such heat diffusion directly in terms of the corresponding differential equations is impractical. To this end, we note that, due to Feynman-Kac duality, the heat estimates are convertible to Brownian motion hitting probabilities.
Thus we circumvent the need for solving intractable differential equations and instead are able to employ a straightforward Monte-Carlo sampling scheme in the ambient data space (Section 3). Background on defense training We apply the above analysis in the context of adversarial machine learning (Section 4) where one studies the interaction between an adversary and a ML system. One of the goals of the subject is to design attack/defense training strategies improving the robustness of a given ML model - in the present work we are interested in how adversarial/noise defense training are reflected geometrically. Many different metrics to estimate robustness have been proposed: on one hand, there is adversarial robustness (the probability that error samples lie very near a given data point x); on the other hand, there is corruption robustness (the probability of getting an error sample after perturbing a given data point x with some specified noise). In our context, heat diffusion naturally suggests a capacitory robustness metric: this metric is built upon the probability that Brownian motion started at a given data point x will hit error samples within a given time window. One can perceive this metric as a combination of adversarial and noise robustness (Brownian motion has continuous paths and specified stopping time determined by boundary impact). In this perspective, our work is aligned with studies of other robustness metrics and curvature results (cf. Fawzi et al. (2016) for a "semi-random" projection robustness and relations to curvature). We study the capacitory metric on the well-known CIFAR10 and MNIST datasets and observe that defense training techniques may either yield a certain (although not substantial) decrease (noise training) or fail to have a significant effect on continuous Brownian attacks overall. Surprisingly, in both cases the studied capacitory metric does not converge to the corresponding value as in the case of a flat decision boundary. Due to our comparison statements and curvature considerations, this means that locally around clean data points the geometry is in general flattened out but may still retain complexity and substantial areas of (small) non-vanishing curvature. In other words, from the point of view of our heat diffusion metrics, decision boundaries locally exhibit non-flat behaviour. Background on generalization estimates Finally, we observe that the collected heat/hittingprobability metrics can further be used to obtain generalization bounds where, in a nutshell, one evaluates the performance of a model on unseen data in terms of the performance over a given sampled data, the model’s expressiveness, dimension, etc. In this regard, we view decision boundary heat diffusion traits as an indicator of how noise-stable a given model is - this relates Brownian hitting bounds with recent compression-based generalization techniques in the spirit of Arora et al. (2018); Suzuki et al. (2018; 2020). More precisely, we proceed in two steps: first, we construct a "smaller" compressed model that is almost equivalent to the initial one in an appropriate heat-theoretic way; second, we obtain generalization estimates for the smaller model in terms of the decision boundary hitting probabilities (computed on the empirical dataset). Furthermore, the bounds are significantly improved under additional geometric assumptions on the decision boundary of the initial model. 
Additional related work The interplay between heat diffusion and geometry lies at the heart of many topics in geometric analysis and spectral theory (cf. Jorgenson & Lang (2001); Grigor’Yan (2001) for a far reaching overview). Some direct applications of heat diffusion techniques to zero sets of eigenfunctions are seen, for example, in Steinerberger (2014); Georgiev & Mukherjee (2018a;b). The literature on adversarial ML is vast: to name a few central works in the field, we refer to Dalvi et al. (2004); Biggio & Roli (2018); Szegedy et al. (2014). Much effort has been invested in designing and understanding strategies that will render a model robust to various attacks (e.g. Madry et al. (2018); Carlini & Wagner (2017)). In particular, the geometry of decision boundaries has been the focus of many works in the subject leading to breakthroughs in curvature estimates, boundary flatness and robustness, schemes for detecting boundary complexity, proposing adversarial attacks/defenses and diffusion based techniques towards constructing decision boundary from partially pre-labelled data (e.g. Ford et al. (2019); Fawzi et al. (2016; 2017; 2018); Dezfooli et al. (2018); Moosavi-Dezfooli et al. (2019); Karimi et al. (2019); Karimi & Tang (2020); He et al. (2018); Szlam et al. (2008)). The theory of generalization bounds has formed a classical main line of ML and statistical inference research (Vapnik (1999)). In this direction central questions address the generalization properties of heavily over-parametrized deep neural network models. According to some classical VC-dimension results such models should overfit the data and generalize poorly. Extensive research effort has been invested in developing appropriate sharper techniques to explain generalization of DNN models: on one hand there are the methods based on norm estimation whose bounds are not explicitly using the number of the network’s parameters (see Golowich et al. (2019); Neyshabur et al. (2015; 2018); Wei & Ma (2019); Bartlett et al. (2017), etc). On the other hand, recent results based on compression and VC-dimension can lead to sharper bounds (Arora et al. (2018); Suzuki et al. (2018; 2020)). 2 CONTRIBUTIONS, CONTEXT AND PAPER OUTLINE An outline of our essential contributions is given as follows: 1. We analyze decision boundary geometries in terms of novel heat diffusion and Brownian motion techniques with thorough theoretical estimates on curvature and flattening. 2. We show, both theoretically and empirically (in terms of adversarial scenarios on stateof-art DNN models), that the proposed heat diffusion metrics detect the curvature of the boundary; they complement, and in some respects are more sensitive in comparison to previous methods of boundary analysis - intuitively, our heat driven metrics are sharper on a finer scale and can detect small-scale "wiggles and pockets". As an application, we are thus able to provide evidence that adversarial defenses lead to overall flatter boundaries but, surprisingly, the heat traits do not converge to the corresponding flat-case, and hence, finer-scale non-linear characteristics (e.g. "wiggles and pockets") are persistent. 3. Moreover, the preservation of "wiggles and pockets" means that susceptibility to naive Brownian motion attacks is not significantly decreased via adversarial defense mechanisms. 4. Finally, we introduce a novel notion of compression based on heat diffusion and prove that stability of heat signature translates to compression properties and generalization capabilities. 
In terms of context, the present note is well-aligned with works such as Ford et al. (2019); Dezfooli et al. (2018); Fawzi et al. (2016; 2018). Among other aspects, these works provide substantial analysis of the interplay between geometry/curvature and adversarial robustness/defenses - in particular, we use some of the these tools (e.g. isoperimetric saturation) as benchmarks and sanity checks. However, in contrast, in our work we provide a non-equivalent technique to address decision boundary geometry for which we provide an extensive theoretical and empirical evaluation with insights on the preservation of finer-scale traits. Intuitively, previous distance-based geometric methods could be considered as a "coarser lens", whereas the present heat-diffusion tools appear to be much more sensitive. As a large-scale example, Brownian particles emanating from a point are able to distinguish between a decision boundary which is a hyperplane at distance d and a decision boundary which is a cylinder of radius d wrapping around the point. Our notion of compression is inspired by Arora et al. (2018), and establishes a connection between the Johnson-Lindenstrauss dimension reduction algorithm with diffusion techniques. Furthermore, we bridge the proposed heat-theoretic techniques with generalization bounds in the spirit of Arora et al. (2018); Suzuki et al. (2020). In particular, this shows that overall lower heat quantities at sample points imply better generalization traits. A step-wise road map of the present work is given below: • (Subsection 3.1) We start by discussing what heat diffusion is and how it is to be evaluated - here we discuss that, via Feynman-Kac duality, one can essentially work with Brownian motion hitting probabilities. • (Subsections 3.2 and 3.3) We introduce the isocapacitory saturation τ - a heat-theoretic metric that will be used to estimate boundary flatness. Moreover, here we emphasize the properties of τ such as relations to curvature (Proposition 3.1) and the novel information obtained from heat theoretic methods in comparison to previous distance-based ones. • (Subsection 3.4) We compute τ for certain geometric model cases such as hyperplanes, cones, wedges and "spiky" sets (Lemmas 3.2 and 3.3). This allows us later to evaluate how much a given geometry resembles these model cases. • (Section 4) Next, we are in a position to evaluate and compare τ for decision boundaries of DNNs. We experimentally illustrate the effect of adversarial defense mechanisms and noise robustness on τ (PGD/FGSM on MNIST and CIFAR-10). • (Section 5) We prove that heat transmission relates to generalization bounds (Propositions 5.1 and 5.2) - in particular, lower levels of heat at sample points yield sharper generalization bounds. Finally, we complete the discussion by informally stating our compression scheme. • (Appendix) Our methods leverage several tool sets extensively. For this reason our goal in the main text is to only collect and showcase the techniques and results. However, the thorough in-depth analysis is provided in the Appendix where the reader can find all relevant proofs and further background and references. 3 MOTIVATION AND MAIN IDEAS 3.1 GEOMETRY SEEN THROUGH BROWNIAN MOTION AND DIFFUSION Notation Let us consider a dataset X := {(xi, yi)}mi=1 consisting of feature points xi ∈ Rn and their corresponding labels y ∈ {1, . . . , k}. Let us suppose that a k-label classifier f : Rn → Rk labels a point x ∈ X as arg maxi∈[1,k] f(x)[i]. 
The decision boundary of f is given by N := {x ∈ Rn|f(x) has two or more equal coordinates} (cf. Fig. 2). Assuming f is sufficiently regular, one thinks of N as a collection of hypersurfaces in Rn. Further, for a given target label y we define the target (error) set E(y) as the set of points on which the classifier’s decision is different from y, i.e. E(y) := {x ∈ Rn| arg maxi∈[1,k] f(x)[i] 6= y} (here we remark that if arg max is set-valued at x with several coordinates obtaining the maximum value, then by convention x is contained in E(y)). Clearly, if a given data sample (x0, y0) ∈ X is correctly classified by f , then x0 is outside of the error set E(y0). Finally, we note that the boundary of E(y) coincides with E(y) ∩N and moreover, N is the union of the boundaries of E(y) for all labels y. Feynman-Kac duality and hitting probabilities As mentioned in Section 1 we wish to study a heat diffusion process where we place a heat source at the decision boundary N : formally, this is given by a heat equation with appropriate initial and boundary conditions (Appendix, Subsection A.2). Avoiding the impracticality of working with the differential equations directly, we bring forward the theorem of Feynman-Kac that relates the solution of the diffusion process to hitting probabilities of Brownian motion (Appendix, Subsection A.3). By way of notation, for an open set U ⊆ Rn, let ψU (x, t) denote the probability that a Brownian particle starting at the point x will enter U within time t. In other words, ψU (x, t) := Pω∼W [∃ t0 ∈ [0, t] | ω(t0) ∈ U ] , x ∈ X , (1) where ω denotes a Brownian motion defined over the interval [0, t] that follows the standard Euclidean Wiener distribution. The amount of heat that a point x receives from N within time t is comparable to the hitting probability that a Brownian particle starting at x will impact the boundary within time t (cf. Fig. 2). Provided that x is correctly classified this is equivalent to the probability of impacting the decision boundary. In general, we evaluate ψE(y)(x, t) (which we often denote by ψ(x, t) by minor abuse of notation) through direct sampling; however, in some model cases, e.g. E(y) being a half-space, a spherical shell or a conical set, ψ(x, t) has a concise closed form (Subsection 3.4 below) that can be evaluated analytically. This allows us to easily measure deviations and compare the heat imprint of N to particular model cases. Local analysis and set-up As mentioned above our analysis is local. For each clean data point x we consider a ball B(x, r) centered at x with radius r and perform all our computations there. In particular, a free Brownian motion starting at x and defined over a maximal time interval [0, t] will on average travel a distance of √ nt (Appendix, Subsection A.1). This suggests to couple r and the maximal Brownian running time t via r = √ nt (cf. Fig. 2), so that, if not stopped by boundary impact, Brownian motion will, on average, reach the sphere ∂B(x, r) by its maximal stopping time. 3.2 AN ISOPERIMETRIC AND ISOCAPACITORY PERSPECTIVE Isoperimetric results Isoperimetric estimates will be the starting baseline (Ford et al. (2019)) to detect low levels of curvature and boundary flatness. For some background in isoperimetric results we refer to (Appendix, Subsection A.4). Let us start by defining the relative error volume µ(x, r) := Vol(E(y) ∩B(x, r)) Vol(B(x, r)) . (2) We recall the so-called Gaussian isoperimetric inequality Borell (1975); Ford et al. 
(2019): d̃ ≤ −r Φ^{−1}(µ)/√n for µ ≤ 1/2, (3) where Φ^{−1} denotes the inverse standard normal c.d.f. and where d̃ = d(x̃, N_f) denotes the median distance with x̃ varying normally and concentrated in the ball B(x, r), and d̃ = 0 if µ ≥ 1/2. Here the isoperimetric result is rigid in the sense that equality in (3) occurs only if E(y) is a half-space. In Ford et al. (2019) the authors demonstrate that defense training mechanisms lead to decision boundaries that saturate this isoperimetric inequality, i.e. in this isoperimetric sense, the decision boundary N becomes locally closer to being a flat hyperplane. We define the ratio between the LHS and RHS in eq. (3) as the isoperimetric saturation. Isocapacitory results In our context of hitting probabilities (eq. (1)), results in potential theory allow us to prove isocapacitory bounds which are similar in spirit to isoperimetric bounds. More precisely one has: µ(x, r) ≤ c_n ψ(x, t)^{n/(n−2)}, (4) where c_n is an appropriate constant depending on the dimension n, and r = √(nt). The proof relies on potential theory tools (capacity) and can be found in Appendix, Proposition A.3. Motivated by the above isoperimetric saturation results, one of our main goals is to study how µ compares to ψ(x, t). To this end we define the isocapacitory saturation τ as τ(x, r) := ψ(x, t)^{n/(n−2)} / µ(x, r). (5) The basic guiding heuristic is that high values of τ indicate that E(y) has a very low volume in comparison to its boundary size and respective heat emission. This is the case whenever E(y) is a very thin region with a well-spread boundary of large surface area - e.g. a set that resembles thin spikes entering the ball B(x, r). In contrast, lower values of τ should indicate a saturation of the isocapacitory inequality (4) and imply that E(y) has a volume that is more comparable to its heat emission - e.g. thicker sets with a tamer boundary. To quantify this intuition, we explicitly evaluate τ for some model scenarios (Subsection 3.4). 3.3 THE NOVEL INFORMATION GIVEN BY HEAT DIFFUSION Distances vs. hitting probabilities As discussed above, several works investigate decision boundaries in terms of distance-based analysis (Ford et al. (2019); Fawzi et al. (2016); Karimi & Tang (2020); Karimi et al. (2019)). We remark that our analysis based on hitting probabilities augments and extends the mentioned distance-based approaches. Although related, the two concepts are not equivalent. A guiding example is given by E(y) being a dense collection of "thin needles" (Appendix, Subsections A.4, A.5); in such a scenario the average distance to N is very small, and so is the chance that a Brownian particle will hit N. On the other hand, if N is a dense collection of hyperplanes, the average distance to N is again small, but Brownian motion will almost surely hit N. In this sense, evaluating hitting probabilities yields a different perspective than is available from distance-based analysis and sheds further light on the size and shape of the decision boundary, particularly with regard to its capacity and curvature features. Isoperimetric vs. isocapacitory saturation Another demonstration of the additional information obtained through τ is given by almost flat shapes in higher dimensions that saturate isoperimetric bounds (Appendix, Subsection A.4). In these scenarios small geometric deformations can have a significant impact on τ, and at the same time almost preserve isoperimetric bounds. In other words, τ provides an additional level of geometric sensitivity. We discuss this further in Section 4.
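Before turning to curvature and the model cases, it is worth noting that all three quantities ψ, µ and τ can be estimated by direct sampling. The snippet below is a minimal illustrative sketch of such an estimator, not the implementation used for the experiments (whose details are in Appendix, Subsection C.3); the function names, the NumPy-based path sampler and the toy linear classifier are placeholders of ours.

```python
import numpy as np

def hits_error_set(f, x, y, r, n_steps=400, rng=None):
    # One Brownian path started at x, run for k = n_steps steps of size s,
    # with s chosen so that r = s * sqrt(n * k), i.e. the coupled regime r^2 = n t.
    # Returns True if the path enters the error set E(y) = {z : argmax f(z) != y}.
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    s = r / np.sqrt(n * n_steps)
    z = x.copy()
    for _ in range(n_steps):
        z = z + s * rng.standard_normal(n)
        if np.argmax(f(z)) != y:
            return True
    return False

def estimate_psi_mu_tau(f, x, y, r, n_paths=1000, n_vol=10000, seed=0):
    # Monte-Carlo estimates of the hitting probability psi (eq. (1)), the relative
    # error volume mu (eq. (2)) and the isocapacitory saturation
    # tau = psi^{n/(n-2)} / mu (eq. (5)) inside the ball B(x, r).
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    psi = np.mean([hits_error_set(f, x, y, r, rng=rng) for _ in range(n_paths)])
    # Uniform samples in B(x, r): uniform directions, radii proportional to U^(1/n).
    dirs = rng.standard_normal((n_vol, n))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = r * rng.random(n_vol) ** (1.0 / n)
    pts = x + radii[:, None] * dirs
    mu = np.mean([np.argmax(f(p)) != y for p in pts])
    tau = psi ** (n / (n - 2)) / mu if mu > 0 else np.inf
    return psi, mu, tau

def f(z):
    # Toy binary classifier: logits whose decision boundary is the hyperplane {z_0 = 0}.
    return np.array([z[0], 0.0])

x = np.zeros(8)
x[0] = 0.2   # correctly classified with label y = 0, at distance 0.2 from the boundary
print(estimate_psi_mu_tau(f, x, y=0, r=1.0))
```

In an actual experiment f would be a trained network acting on flattened inputs, and the numbers of paths and volume samples would be increased until the Monte-Carlo error is acceptable.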
The effect of curvature The interplay between curvature of the decision boundary and robustness has been well studied recently, e.g. Fawzi et al. (2016); Moosavi-Dezfooli et al. (2019), where various forms of robustness (adversarial, semi-random and their ratio) have been estimated in terms of the decision boundary’s curvature. Intuitively, the differential geometric notion of curvature measures how a certain shape is bent. The precise definition of curvature involves taking second-order derivatives, which is in most cases impractical. However, in our context we show that the isocapacitory saturation τ implies certain curvature bounds. These statements exploit relations between curvature and volume and lead to pointwise and integral curvature bounds. As an illustration, we have: Proposition 3.1 (Informal). Let (x, y) ∈ X be a data sample. Then, provided that the distance d(x, N) is kept fixed, larger values of τ locally imply larger pointwise/integral curvature values. A deeper analysis with formal statements and additional details is provided in Appendix, Subsection A.6. The advantages that curvature yields for some types of compression schemes and generalization bounds are also investigated in detail in Appendix, Section B. 3.4 MODEL DECISION BOUNDARIES: HYPERPLANES, WEDGES, CONES AND “SPIKY” SETS Given a certain geometric shape, one is often faced with questions as to how flat or spherical the given geometry is. To this end, a central technique in geometric analysis is comparing to certain model cases - e.g. a sphere, plane, saddle, etc. After having introduced τ and its basic traits we now evaluate it for several model cases (flat hyperplanes, wedges, cones, balls and "spiky" sets). Each of these model cases illustrates a distinguished τ-behaviour: from "tame" behaviour (hyperplanes, balls) to explosion (thin cylinders, "needles and spiky" sets). Hence, given a decision boundary, comparison with these model cases allows one to quantify how far the given surface is from each of the models. We start by discussing the flat linear case: Lemma 3.2. Let (x, y) be a data sample and suppose that E(y) forms a half-space at a distance d from the given data point x ∈ Rn. Then τ(x, r) = 2 Φ(−d/√t) · Vol(B(x, r)) / V_n(d, r), (6) where Φ(s) is the c.d.f. for the standard normal distribution, and V_n(d, r) is the volume of the smaller n-dimensional solid spherical cap cut off at distance d from the center of a ball of radius r. The computation uses standard reflection principle techniques. Figure 3 provides an experimental illustration of Lemma 3.2. Another illuminating model is given by a "spiky" set - e.g. a thin cylinder, which is in some sense the other extreme. We have Lemma 3.3 (Appendix, Subsection A.5). Suppose that E(y) is a cylinder of height h and radius ρ that enters the ball B(x, r). Then τ ↗ ∞ as ρ ↘ 0. Further comparison results for additional model cases are given in Appendix, Subsection A.5. 4 ADVERSARIAL ATTACKS AND DEFENSES Background and set-up We now analyze how strategies for improving adversarial and noise shift robustness affect the decision boundary’s heat diffusion properties. In particular, we keep track of the Brownian hitting probabilities ψ and the isocapacitory saturation τ. On one hand, we can view ψ as a capacitory robustness metric against continuous interpolation attacks given by Brownian noise (see also Section 1).
On the other hand, Subsection 3.4 indicates how the behaviour of τ reveals deviation from the case of a flat or "spiky" and curvy decision boundary. Our empirical analysis uses the well-known CIFAR10 and MNIST datasets (details, preprocessing and enhancements are given in Appendix, Subsection C.5). For CIFAR10, we used the Wide-ResNet-28-10 (Zagoruyko & Komodakis (2016); Ford et al. (2019)) and ResNets with 32, 44 and 56 layers (He et al. (2016)). For MNIST, we selected a LeNet-5 and additional CNN architectures. Motivated by previous work (e.g. Ford et al. (2019)), we perform 3 types of training: ordinary stochastic gradient descent (ADAM optimization), training with Gaussian noise data augmentation and training with adversarial defense strategies (FGSM and PGD methods, see also Appendix, Section C.4 for details and remarks on robustness). Detailed outline of the numerics behind Brownian motion sampling, isoperimetric/isocapacitory saturation and relative volume sampling are given in Appendix, Subsection C.3. Analysis of results Recent results (Ford et al. (2019); Schmidt et al. (2017)) have shown qualitative differences between the adversarially robust boundaries of MNIST and CIFAR-10, which also impact the experimental findings in this work. In short, a robust decision boundary is in the MNIST case less spiky in comparison to CIFAR. For more details we refer to Appendix, Subsection C.2. In Fig. 4 we collect the statistics of the WRN and LeNet models on CIFAR10 and MNIST, respectively. On one hand, we confirm previous results (Ford et al. (2019); Fawzi et al. (2016)) implying the "flattening-of-boundary" phenomenon: noisy and adversarial training appear to improve and saturate isoperimetric bounds. Furthermore, the ball B(x, r) realizing relative error volume µ of 1% is on average scaled up for adversarial and, especially, noisy training. On the other hand, an intriguing behaviour is observed for the decision boundary’s heat diffusion traits. The isocapacitory saturation τ does not appear to concentrate around the value corresponding to a flat hyperplane: defense training strategies, both FGSM and PGD-based, may not have a significant impact on the behaviour of τ by forcing it to converge to the case of a flat decision boundary (shown as horizontal red punctured line). Put differently, the chance that a continuous Brownian perturbation will find an adversarial example (scaled to the appropriate ball B(x, r)) will not be significantly altered on average (see Appendix, Subsection C.7 for a visual reference). However, it appears that noisy training consistently delivers lower values of τ - intuitively, this is expected as the decision boundary is adjusted in terms of adding Gaussian "blobs", thus naturally being rounder. Geometrically, the sensitivity of τ to small perturbations in almost flat surfaces (Subsection 3.2) indicates that locally around clean (unperturbed) data points an amount of curvature and more complex geometry are still retained. Of course, this amount is not as large as to violate saturation of isoperimetric bounds and robustness comparability results in the sense of Fawzi et al. (2016). For example, in the case of CIFAR10 a simple geometric model surface that has a similar τ -behaviour (as for the adversarial and noisy training) is given in (Appendix, Subsections A.4, A.5): considering a data point x, an almost flat decision boundary that is concavely bent w.r.t. x with approximate curvature of ≈ 1/(12.3r). 
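For reference, the flat-boundary baseline referred to above (the horizontal reference line in Fig. 4) can be computed in closed form by specializing definition (5) to the half-space case of Lemma 3.2: ψ is given by the reflection principle and µ by the relative volume of a spherical cap. The following is a minimal sketch of ours, assuming SciPy (the regularized incomplete beta function yields the cap volume); the helper names are illustrative.

```python
import numpy as np
from scipy.special import betainc
from scipy.stats import norm

def cap_fraction(d, r, n):
    # V_n(d, r) / Vol(B(x, r)): relative volume of the smaller spherical cap
    # cut off at distance d (0 <= d <= r) from the center of a ball of radius r.
    return 0.5 * betainc((n + 1) / 2, 0.5, 1.0 - (d / r) ** 2)

def tau_flat(d, r, n):
    # Isocapacitory saturation (5) when E(y) is a half-space at distance d,
    # in the coupled regime t = r^2 / n; psi comes from the reflection principle.
    t = r ** 2 / n
    psi = 2.0 * norm.cdf(-d / np.sqrt(t))
    mu = cap_fraction(d, r, n)
    return psi ** (n / (n - 2)) / mu

for d in (0.0, 0.1, 0.2, 0.3, 0.4):
    print(d, tau_flat(d, r=1.0, n=784))   # e.g. MNIST-sized inputs, 28 * 28 = 784
```

At d = 0 this gives τ = 2 (ψ = 1 and µ = 1/2); the value stays close to 2 for small d and then grows rapidly as d approaches r, in line with the threshold behaviour described for Fig. 7 in Appendix, Subsection A.5.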
These observations reveal finer properties concerning decision boundary flattening due to defense training: in particular, noisy training appears to flatten decision boundaries and slightly bend them concavely w.r.t. to the clean data points. Further results for ResNet models and CNN are provided in (Appendix, Subsection C.7). Spiky sets and control on τ In Fig. 4 large outlying values of τ are filtered out. However, values of τ larger than 10 can occupy up to 1.3% for ordinary training and 2.1%, 2.6% for adversarial, noisy training, respectively. It follows, that the geometry of high-dimensional decision boundaries does not admit too many high-curvature (see also Proposition 3.1) spiky regions of low volume and high heat emission (high surface area) in the sense of Subsections 3.2, 3.4. However, it appears that defense training can increase the number of such spiky regions: one might explain such behaviour by seeing defense training as a bundle of additional geometric conditions that sometimes are not able to agree and thus lead to a more degenerate (singular) geometry. Further, with respect to the initial analysis of Fig. 4, a natural question is whether one can control τ along with the isoperimetric saturation - ultimately, one hopes to design better decision boundaries (flatter, or appropriately curved Moosavi-Dezfooli et al. (2019)) eventually leading to more robustness. However, getting a tight control on τ could be a difficult task. It is, indeed, possible to obtain some basic grip on τ : we trained a LeNet-5 architecture on MNIST that exhibited significantly increased τ values and preserved isoperimetric saturation (statistics are shown as the rightmost boxplot in Fig. 4). Similar to many adversarial defenses, the training consisted in augmenting the dataset with attacks given in this case by Brownian paths. However, it seems difficult to force τ to concentrate around the flat-case value, as well as to obtain competitive robustness of the model. On one hand, this is explained via the need to control heat diffusion through Brownian motion - the mentioned naive method is not able to capture the hitting properties sufficiently well; on the other hand, as discussed above heat diffusion properties can be far more sensitive than isoperimetric saturation w.r.t. minor geometric perturbations. 5 GENERALIZATION BOUNDS IN TERMS OF HITTING PROBABILITIES Compression, noise stability and generalization Recent advances (Arora et al. (2018); Suzuki et al. (2018; 2020)) indicate that generalization can be related to compression and noise stability. The guiding strategy is: (1) a large DNN f that is stable against (layer-wise) noise injections admits an effective compression to a simpler model f̃ which is almost equivalent to f . Intuitively, the noise stability absorbs the defects introduced by compression; (2) concentration results imply generalization bounds for f̃ . Admittedly, the generalization estimate is obtained initially for the smaller model; however, it is also possible to "transfer" the bound to f (see the discussion at the end of this Section). In this context a simple observation is that Brownian motion and its hitting probabilities can be related, respectively, to noise injection and margins of classification: small hitting probability of the decision boundary should indicate "margin-safety" and allow to compress parameters of the model more aggressively. 
However, in contrast to injecting normal noise, Brownian motion, with stopping time given by boundary impacts, is more delicate and requires further analysis of the decision boundary. In the following we propose a theoretical framework that, we hope, will augment and produce further insights into the interplay between noise stability and generalization bounds. The statements are inspired by the results in Arora et al. (2018); Suzuki et al. (2020) and we follow the notation therein. First, we propose several options for goodness of approximation (compression) in the sense of heat diffusion (Appendix, Subsection B.1). We give the following definition: Definition 1. Given a positive real number η, a classifier g is said to be an η−compression of f if∣∣ψEg(y)(x, γ2)− ψEf (y)(x, γ2)∣∣ < η (7) for all points x in the training sample, labels y and real numbers γ. Now, as mentioned above we have the following generalization bounds for the compressed model: Proposition 5.1. Let us suppose that f is approximable by g in the sense of Definition 1. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. For a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then for t1 ≤ t2 ≤ T we have L0(g) ≤ P(x,y)∼D (Cgα(x, y, t1)) ≤ P(x,y)∼X (Cf (x, y, t2)) + η +O (√ q log r m ) (8) with probability at least 1−e−q log r and L0 denoting the expected loss over the true data distribution. Taking t2 → 0 in (8), one recovers the empirical loss L̂0(f) on the RHS. In other words, the generalization of the smaller model g is controlled by hitting probabilities of the initial model f and corrections related to family capacity. The next natural question is the construction of g. Inspired by Johnson-Lindenstrauss techniques (cf. also Arora et al. (2018)) we are able to recover the following statement (thorough details are given in Appendix, Subsections B.5, B.6): Proposition 5.2 (Informal). Considering a fully connected feed-forward neural network f where some flatness conditions on the layer decision boundaries are fulfilled, there exists an η-compression g in the sense of Def. 1 whose number of parameters is logarithmically smaller than f . Finally, having the generalization estimates on the smaller model g it is natural to attempt transferring those to the initial model f - in Suzuki et al. (2020) this is achieved via certain local Rademacher complexity and "peeling" techniques. However, we choose not to pursue these bounds in the present work and assume the perspective in Arora et al. (2018) that g, being almost equivalent to f , provides a reasonable indicator of generalization capabilities. ACKNOWLEDGMENTS We would like to thank our anonymous reviewers whose advice helped improve the quality of the presentation. We are indebted to Prof. Christian Bauckhage for his constant encouragement, support and fruitful discussions. We also sincerely thank Benjamin Wulff for maintaining the outstanding computation environment at Fraunhofer IAIS - his support and coffee conversations played an essential role for our empirical analysis. In part, this work was supported by the Competence Center for Machine Learning Rhine-Ruhr (ML2R) which is funded by the Federal Ministry of Education and Research of Germany (grant no. 01IS18038B). We gratefully acknowledge this support. 
A APPENDIX A: HITTING ESTIMATES, SATURATION AND CURVATURE A.1 BROWNIAN MOTION AND BESSEL PROCESSES In this Subsection we introduce some basic background on Brownian motion. Definition 2 (Brownian motion). A real-valued stochastic process {ω(t) : t ≥ 0} is called a one-dimensional Brownian motion started at x ∈ R if the following hold: • ω(0) = x, • the process has independent increments, that is, for 0 ≤ t1 ≤ · · · tm the increments ω(tj)− ω(tj−1) for j = 2, · · · ,m are independent random variables, • for t ≥ 0, h > 0, the increments ω(t+ h)− ω(t) are normally distributed with expectation zero and variance h, • almost surely, the function t 7→ ω(t) is continuous. The process {ω(t) : t ≥ 0} is called a standard Brownian motion if x = 0. Finally, if ω1, · · · , ωn are independent one-dimensional Brownian motions started at x1, · · · , xn then the stochastic process ω(t) = (ω1(t), · · · , ωn(t)) is called an n-dimensional Brownian motion started at x = (x1, · · · , xn). Remark A.1. The distribution of the standard 1-dimensional Brownian motion ω(t) is normal with mean 0 and variance t. It follows that the RMSD (root mean squared displacement) of the standard n-dimensional Brownian motion is √ nt. Sampling Brownian motion simulation is prescribed directly by Definition 2. Given a step size s, number of steps k we sample a Brownian path as ω̂(k) := k∑ i=0 sXi, Xi ∼ N(0, 1). (9) By Definition 2, Var[ω(t)] = t, hence the sampling ω̂ corresponds to running a Brownian motion for time t = ks2. (10) In particular, the mean displacement of ω̂ is s √ nk. In accordance with the main text, Subsection 3.1 and Fig. 2, whenever we need to sample Brownian motion contained within the ball B(x, r) for its lifespan [0, t], we will fix the number of steps k (usually, we set k = 400) and adjust the step size s accordingly, so that r = s √ nk. Estimating hitting probabilities A straightforward empirical way to estimate Brownian hitting probability Pω [∃t0 ∈ [0, t]|ω(t0) ∈ S] of a target set S is to evaluate the steps ω̂(i), i = 0, . . . , k and check whether ω̂(i0) ∈ S for some S. Of course, the precision of this computation depends on the number of sampled Brownian paths ω̂, as well as the step size s and number of steps k. Formal statements on convergence and numerical stability could be obtained, e.g. by means of concentration/Monte-Carlo results (e.g. Proposition B.12 below); however, in practice, in our experiments we mostly worked with the regime k ≈ 104 which seemed an acceptable choice in terms of numeric stability and performance. Explicit closed-form computation of hitting probabilities is a non-trivial task, though it is possible for some model cases (main text, Lemma 3.2). Dimension 1 is special, where we have the so-called "reflection principle", which says that P ( sup 0≤s≤t ω(s) ≥ d ) = 2P (ω(t) ≥ d) . (11) For a proof of this basic statement we refer to Mörters & Peres (2010). However, in higher dimensions, there is no straightforward analog of the reflection principle, and calculating hitting probabilities of spheres leads one to the deep theory of Bessel processes. Let us consider a Brownian particle ω(t) starting at the origin in Rn and look at the real-valued random variable ‖ω(t)‖ (in the literature, these are known as Bessel processes). We are interested in the probability of the particle hitting a sphere {x ∈ Rn : ‖x‖ = r} of radius r within time t. Curiously, it seems that there is no known closed formula for such a hitting probability. 
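Although no closed formula is available, the sphere-hitting probability is easy to estimate with the sampling scheme (9)-(10). The snippet below is an illustrative sketch of ours (not the code used in the paper); it also previews the scaling heuristic discussed below, namely that for fixed dimension the estimate depends essentially only on the ratio r^2/t.

```python
import numpy as np

def sphere_hit_prob(r, t, n, k=400, n_paths=2000, seed=0):
    # Estimate P(sup_{0 <= s <= t} ||omega(s)|| >= r) for an n-dimensional Brownian
    # motion started at the origin: k steps of size s with t = k * s^2 as in (9)-(10),
    # checking the hitting condition along the discretized path.
    rng = np.random.default_rng(seed)
    step = np.sqrt(t / k)
    hits = 0
    for _ in range(n_paths):
        path = np.cumsum(step * rng.standard_normal((k, n)), axis=0)
        if np.any(np.linalg.norm(path, axis=1) >= r):
            hits += 1
    return hits / n_paths

# Two configurations sharing the same ratio r^2 / t (here the coupled regime r = sqrt(n t));
# up to Monte-Carlo and discretization error the two estimates agree.
print(sphere_hit_prob(r=1.0, t=1.0 / 16, n=16, seed=0))
print(sphere_hit_prob(r=2.0, t=4.0 / 16, n=16, seed=1))
```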
The only formula we know of is in the form of a convergent series involving zeros of the Bessel function of the first kind, and appears in Kent (1980). For the reader interested in Kent’s formula, we also refer to associated asymptotics of zeros of the Bessel function in Watson (1944). The following heuristic is implicit in many of our calculations and motivates several of our definitions: the probability P ( sup 0≤s≤t ‖ω(s)‖ ≥ r ) (12) of a Brownian particle hitting a sphere of radius r within time t is dependent only the ratio r2/t. As a consequence, given a small η > 0 and a constant c, one can choose the constant cn in t = cnr2 small enough (depending on η) such that P ( sup 0≤s≤cnr2 ‖ω(s)‖ ≥ cr ) < η. (13) Roughly what this means is the following: for a Brownian particle, the probability of hitting even a large and nearby object may be made arbitrarily small if the motion is not allowed to run sufficiently long. A.2 HEAT DIFFUSION AND BROWNIAN MOTION DUALITY Macroscopic vs microscopic There are roughly two broad viewpoints towards the understanding of diffusion: the “macroscopic” and the “microscopic”. Macroscopically, the mechanism of diffusion can be thought of as creating a flux in the direction from greater to lesser concentration. If u(x, t) measures the intensity of the quantity undergoing diffusion, and J the flux across the boundary of a region Ω, then in the simplest model one assumes that (up to a constant) J = −∇u. Further, we have the identity ∂t ∫ Ω u(x, t) dx = − ∫ ∂Ω ν.−∇u dS, (14) where ν is the outward pointing unit normal vector to ∂Ω. By applying the divergence theorem to (14), one immediately gets the heat equation ∂tu = ∆u. Here ∆ denotes the Laplace operator given by the sum of second derivatives: ∆ = ∑n i=1 ∂ 2 ii. Now, many real-life diffusion processes are the result of microscopic particles jittering around seemingly in a random manner. This motivates the microscopic viewpoint, i.e., the modelling of heat diffusion via Brownian motion of particles. We posit that a particle located at x ∈ Rn at time t0 will have the probability ψU (x, t) of being in an open set U ⊂ Rn at time t0 + t, where ψU (x, t) = ∫ U p(t, x, y) dy, (15) and p(t, x, y) is the fundamental solution of the heat equation, or more famously, the “heat kernel”. In other words, p(t, x, y) solves the heat equation{ (∂t −∆)u(x, t) = 0, u(x, 0) = δ(x− y), (16) with the Dirac delta distribution as the initial condition. Via Fourier transform, it is easy to establish that p(t, x, y) is given by p(t, x, y) = 1 (4πt)n/2 e− |x−y|2 4t . (17) This builds the bridge to pass between analytic statements on the side of the heat equation and probabilistic statements on the side of Brownian motion (see Grigor’Yan (2001), Taylor (2011)). The precise formulation of this duality is given by the celebrated Feynman-Kac theorem discussed in Subsection A.3 below. Heating up the decision boundary In our context we introduce the following heat diffusion process along the classifier’s decision boundary N : (∂t −∆)ψ(x, t) = 0, ψ(x, 0) = 0, ∀x ∈ Rn, ψ(x, t)|x∈N = 1, ∀t > 0. (18) In other words ψ(x, t) gives the heat quantity at the point x at time t given that at the initial moment t = 0 all points have a heat quantity 0 and afterwards a constant heat source of intensity 1 is applied only at the decision boundary N . As remarked above this is the macroscopic picture: the mentioned Feynman-Kac duality implies that ψ(x, t) is also the hitting probability Pω [∃t0 ∈ [0, t]|ω(t0) ∈ N ]. 
A.3 THE FEYNMAN-KAC THEOREM It is well-known that given a reasonable initial condition u(x, 0) = f(x), one can find an analytic solution to the heat equation via convolution with heat kernel, et∆f(x) := p(t, x, .) ∗ f(.). This just follows from (16) by convolving directly. Now, via the duality of diffusion explained above, one expects a parallel statement on the Brownian motion side, one which computes the contribution of all the heat transferred over all Brownian paths reaching a point at time t. It stands to reason that to accomplish this, one needs an integration theory defined over path spaces, which leads us to the theory of Wiener measures. We describe the main idea behind Wiener measure briefly: consider a particle undergoing a random motion in Rn (given by a continuous path ω : [0,∞) → Rn) in the following manner: given t2 > t1 and ω(t1) = x1, the probability density for the location of ω(t2) is p(t, x, x1) = 1 (4π(t2 − t1))n/2 e − |x−x1| 2 4(t2−t1) . We posit that the motion of a random path for t1 ≤ t ≤ t2 is supposed to be independent of its past history. Thus, given 0 < t1 < · · · < tk, and Borel sets Ej ⊆ Rn, the probability that a path starting at x = 0 at t = 0, lies in Ej at time tj is∫ E1 · · · ∫ Ek p(tk − tk−1, xk, xk−1) · · · p(t1, x1, 0) dxk · · · dx1. The aim is to construct a countably-additive measure on the space of continuous paths that will capture the above property. The above heuristic was first put on a rigorous footing by Norbert Wiener. Using the concept of Wiener measure, one gets the probabilistic (microscopic) description of heat diffusion, which is the content of the celebrated Feynman-Kac theorem: Proposition A.2. Let Ω ⊆ Rn be a domain, with or without boundary (it can be the full space Rn). In case of a boundary, we will work with the Laplacian with Dirichlet boundary conditions. Now, let f ∈ L2(Ω). Then for all x ∈ Ω, t > 0, we have that et∆f(x) = Ex (f (ω(t))φΩ(ω, t)) , (19) where ω(t) denotes an element of the probability space of Brownian paths starting at x, Ex is the expectation with regards to the Wiener measure on that probability space, and φΩ(ω, t) = { 1, if ω([0, t]) ⊂ Ω 0, otherwise. For a more detailed discussion, see Georgiev & Mukherjee (2018a). A.4 ISOPERIMETRIC AND ISOCAPACITORY RESULTS Isoperimetric bounds Isoperimetric inequalities relating the volume of a set to the surface area of its boundary have given rise to a wealth of results Burago & Zalgaller (1988). Given a set M with boundary ∂M , the basic pattern of isoperimetric inequalities is: Vol(M) ≤ c1 Area(∂M) n n−1 , (20) where c1 is an appropriate positive constant depending on the dimension n. In many cases, equality (or saturation in the sense of almost equality) in (20) is characterized by rather special geometry. For example, classical isoperimetric results answer the question, which planar set with a given circumference possesses the largest area, with the answer being the disk. As discussed in the main text, isoperimetric considerations have recently lead to significant insights about decision boundaries of classifiers subject to adversarial defense training mechanisms Ford et al. (2019) by revealing flattening phenomena and relations to robustness. Isocapacitory bounds As mentioned in the main text, one can prove types of isocapacitory bounds that resemble the isoperimetric ones: roughly speaking, these replace the area term with suitable Brownian hitting probabilities. We have the following result (cf. also Georgiev & Mukherjee (2018a)): Proposition A.3. 
Let B(x, r) ⊂ Rn, n ≥ 3, and let E ⊂ B(x, r) denote an “obstacle”, and consider a Brownian particle started from x. Then the relative volume of the obstacle is controlled by the hitting probability of the obstacle: Vol(E) Vol(B(x, r)) ≤ cn (ψE(x, t)) n n−2 . (21) Here, cn is a positive constant whose value is dependent only on n provided the ratio between r2 and t is suitably bounded. In particular, in the regime r2 = nt, we have that cn = ( Γ ( n 2 − 1 ) /Γ ( n 2 − 1, n 4 )) n n−2 . Here, Γ(s, x) represents the upper incomplete Gamma function Γ(s, x) := ∫ ∞ x e−tts−1 dt. Proof. Recall that the capacity (or more formally, the 2-capacity) of a set K ⊂ Rn defined as Cap(K) = inf η|K≡1,η∈C∞c (Rn) ∫ Rn |∇η|2. (22) From Section 2.2.3, Maz’ya (2011), we have the following “isocapacitory inequality”: Cap(E) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n , (23) where ωn = 2π n/2 Γ(n2 ) is the (n− 1)-dimensional surface area of Sn−1. Now, we bring in the following estimate given by Theorem 3.7 of Grigor’Yan & Saloff-Coste (2002): ψE(x, t) ≥ Cap(E) ∫ t 0 inf y∈∂E p(s, x, y) ds. (24) Now, we have ψE(x, t) ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 inf y∈∂E e− |x−y|2 4s ds ≥ ω2/nn n n−2 n (n− 2)|E| n−2 n ∫ t 0 1 (4πs) n/2 e− r2 4s ds = ω2/nn n n−2 n (n− 2)|E| n−2 n 1 4rn−2πn/2 ∫ ∞ r2 4t e−zzn/2−2 dz. After rearrangement the proposed claim follows. Intuitively, it makes sense that if the volume of a set is fixed, one can increase its hitting probability by “hammering” the set into a large thin sheet. However, it seems unlikely that after lumping the set together (as in a ball), one can reduce capacity/hitting probability any further. Moreover, isocapacitory bounds are saturated by the n-ball. It is also illustrative to compare the seemingly allied concepts of capacity and surface area. A main difference of capacity with surface area is the interaction of capacity with hitting probabilities. As an illustrative example, think of a book which is open at an angle of 180◦, 90◦, 45◦ respectively. Clearly, all three have the same surface area, but the probability of a Brownian particle striking them goes from the highest to the lowest in the three cases respectively. It is rather difficult to make the heuristic precise in terms of capacity (at least from the definition). Capacity can be thought of as a soft measure of how "spread out" or "opened-up" a surface is, and is highly dependent on how the surface is embedded in the ambient space. Isocapacitory vs isoperimetric saturation A main line of analysis in the present work addresses the interplay between isocapacitory and isoperimetric saturation. In our particular context of defense training mechanisms we observe saturation of isoperimetric bounds for the classifier’s decision boundaries - this implies that decision boundaries are not far from being flat. However, as mentioned before, it turns out that isocapacitory saturation does not concentrate around the values corresponding to hyperplanes (overall, it seems to stay well below that value). In this sense, isocapacitory saturation acts as a finer sensitive measure of deviation from flatness. A simple model geometric scenario that provides similar behaviour is illustrated in Fig. 5 and Fig. 6. A.5 MODEL CASES We first begin with the proof of Lemma 3.2. Proof. Let us select an orthonormal basis {e1, . . . , en} so that e1 coincides with the given hyperplane’s normal vector. 
A standard fact about n-dimensional Brownian motion is that the projections on the coordinate axes are again one-dimensional Brownian motions Mörters & Peres (2010). Thus, projecting the n-dimensional Brownian motion onto e1 the hitting probability of the hyperplane is the same as the probability that one-dimensional Brownian motion ω(t) will pass a certain threshold d by time t. To compute this probability we use the reflection principle (11) in conjunction with Remark A.1. Consequently, the RHS is equal to 2Φ(−d/ √ t). The computation of µ(x, r) follows by definition. Here we note that the dimension n enters only in terms of the spherical cap volume. An impression how τ behaves for different choices of n in terms of the distance d is given in Fig. 7. In particular, one observes the well-known concentration of measure phenomenon and Levy’s lemma: the volume of the spherical cap exhibits a very rapid decay as n becomes large. Moreover, experiments reveal a curious phenomenon: there is a threshold distance d0 until which τ ≈ 2 and afterwards τ explodes. In Fig. 8 we plot further interesting model cases where the error set forms a wedge (the region between two intersecting hyperplanes) or a cone. Spiky sets As discussed in the main text, one observes a high isocapacitory saturation τ for the so-called "spiky" sets - these are sets of relatively small volume and relatively large/dense boundary. Theoretically, a guiding model case in this direction is given by Lemma 3.3 in the main text, whose proof we now record. Proof. Let Tρ denote the ρ- tubular neighborhood of a line segment of length h inside Rn. Clearly, Tρ ∼= B(0, ρ)× [0, h], where B(0, r) is a ρ-ball inside Rn−1. By the well-known process of Steiner symmetrization in Rn, it is clear that the expression for capacity in (22) will be minimized by a function that is “radially symmetric” around the central axis of the tube Tρ, that is f(x, y) = f(|x|), where x ∈ B(0, ρ), y ∈ [0, h]. Then, as we scale ρ→ λρ, where λ↘ 0, Cap (Tλρ) ∼ λn−3 Cap (Tρ) (which is seen directly from the definition (22)), whereas the volume scales as |Tλρ| = λn−1 |Tρ|. Now assume that the cylinder Tρ is inside the closed ball B(x, r) ⊂ Rn, the central axis of Tρ is pointing towards x, and Tρ is touching the boundary of B(x, r). To pass from capacity to hitting probability of the set Tρ, we use that Grigor’Yan & Saloff-Coste (2002): Cap(Tρ)r 2 Vol(B(x, r)) e−C r2 t ≤ ψTρ(x, t). (25) Finally, using the definition of τ and putting the above estimates together, one sees that in the time regime of O(r2), τ scales like λ−2/(n−2), and hence, τ ↗∞ as λ↘ 0. See also Figure 8 for a visual discussion of the isocapacitory saturation for the model cases of wedges and cones. A.6 CURVATURE ESTIMATES IN TERMS OF ISOCAPACITORY SATURATION The geometric concept of curvature has a rich history and plays a central role in differential geometry and geometric analysis. There are several notions of curvature in the literature, ranging from intrinsic notions like sectional, Ricci or scalar curvatures to extrinsic (that is, dependent on the embedding) notions like principal curvatures and mean curvature, which are encoded in the second fundamental form. In this note we use a somewhat “soft” definition of curvature, following previous work Fawzi et al. (2016); Dezfooli et al. (2018). 
Suppose the decision boundary Nf is sufficiently regular (C2 is enough for our purpose) and it separates Rn into two components R1 := {f > 0} and R2 := {f < 0}, corresponding to a binary classification (the construction in the multi-label case is analogous). For a given p ∈ Nf , let rj(p) denote the radius of the largest sphere that is tangent to Nf at p, and fully contained inRj . Then, one defines the curvature κ at p as κ(p) = 1/min (r1(p), r2(p)) . (26) See Fig. 10 for a geometric illustration. However, it turns out that most notions of curvature are quite subtle (see Fawzi et al. (2016)) and at this point, seemingly more cumbersome and intractable to handle experimentally. We will take an indirect approach, and attempt to read off the effect of and on curvature via the isocapacitory saturation τ . Again, we begin with the model cases: we first study the behaviour of curvature κ if τ achieves its least possible value. We start by fixing some notation. As before let us consider a ballB(x, r) with an error set E ⊂ B(x, r) and boundary N = ∂E (clearly our main case of interest is E = E(y) ∩B(x, r)). Let us denote the the distance d = d(x,N ) and suppose the point y ∈ N realizes this distance, i.e. d(x, y) = d. To rule out some degenerate cases and ease the analysis we introduce the following assumption: Assumption: The hypersurface N and the point x are on different sides of the tangent hyperplane H∗ := TyN (cf. Fig. 11). This assumption is also technically important, as otherwise low values of τ will be produced by annuli surrounding x. With that in place, we have the following rigidity result: Proposition A.4. Let us fix the distance d = d(x,N ) and suppose the assumption above holds. Then the least possible value of τ is attained only if the curvature κ of the hypersurface N is 0. Proof. As above letH∗ be the tangent hyperplane at distance d from x, and let C denote the (smaller) spherical cap formed by H∗ ∩B(x, r). The proof relies on the following variational argument. If N is not the same as H∗, then N ⊆ C, with y ∈ N ∩H∗. We wish to argue then one can perturb N infinitesimally to decrease the value of τ , so the only minimizer of the above expression has to be H∗. The basic idea is to cut out a small piece pv around v and paste it in the region of around ṽ (Fig. 11). We say that N has positive curvature at some point z if the ball defining the curvature at z and the point x lie on different sides of N . The construction is as follows. Let S(x, s) be the (n− 1)-sphere centered at x with radius s. We consider two cases: Case I: Let us suppose that there exist s1 < s2 ≤ r and points v, ṽ ∈ N such that the curvature of N at v ∈ N ∩ S(x, s1) is greater than the curvature at ṽ ∈ N ∩ S(x, s2). Let us, moreover, choose the infimum among such s1 and the supremum among such s2. To define the mentioned piece pv , we consider two small balls B(v, ε), B(ṽ, ε) (where ε s2 − s1), and cut out a set pv = E ∩ B(v, ε) such that ∂ (E \B(v, ε)) is congruent to N ∩ B(ṽ, ε) (this is possible due to the curvature assumptions at v, ṽ). Then, we define the new error set E′ = E∪pṽ \pv and the boundaryN ′ = ∂E′, where pṽ represents the image of pv under the rigid motion and attached inside B(ṽ, ε) (see Fig. 11). It is now clear that |E| = |E′|, but ψE′(x, T ) < ψE(x, T ) for all T > 0. 
The last inequality follows from the evaluation of the explicit heat kernel that defines hitting probability ψ as stated by Feynman-Kac duality: ψE(x, T ) = ∫ T 0 ∫ E 1 (4πt)n/2 e− (x−y)2 4t dy dt > ∫ T 0 ∫ E′ 1 (4πt)n/2 e− (x−y)2 4t dy dt = ψE′(x, T ). It follows from the definition of τ that τE ≥ τE′ . Case II: If Case I is not satisfied, then, similarly, we choose two points v, ṽ, but instead of defining the piece pv by intersection with a small ball around v we select pv as a “concavo-convex lens shape” domain, where the curvature on the concave “inner side” of pv of the lens is greater than that on the convex outer side. As before, we attach a rigid motion image of pv inside B(ṽ, ε). The rest of the argument is similar to Case I. With reference to our previous discussion of spikes, it heuristically makes sense that a spike must have reasonably high curvature (it can have high curvature on the average, or if it is flat at most places, then have a sharp needle like end where the curvature is very high). In the same setting as Proposition A.4 let us, moreover, for simplicity assume that N is the graph of a function over the tangent hyperplane H∗ (Fig. 11). Proposition A.5. In the above setting let us fix the value of d. Then, if the maximum curvature κmax of N is sufficiently high (greater than some universal constant), then it satisfies κmax ≥ τ 1 n r ( Φ ( − d√ t ))− 1n−2 , (27) where Φ denotes the c.d.f. of the standard normal distribution. If a point attaining this maximum curvature is within the half concentric ball B(x, r/2), then κmax satisfies the stronger estimate κmax ≥ τ 1 n (r − d) r n n−1 ( Φ ( − d√ t ))− n (n−1)(n−2) . (28) Proof. Recalling the definition of the isocapacitory saturation τ , we will bound the numerator (resp. denominator) of τ from above (resp. below). First, for the numerator ψE(x, t) we will use a basic monotonicity property of hitting probabilities stating that for two sets A ⊆ B one has ψA(x, t) ≤ ψB(x, t) - this follows directly from the definition of ψ. Now, since E ⊆ C where C is the smaller spherical cap of B(x, r) ∩H∗, we have ψE(x, t) ≤ ψC(x, t). However, recalling the explicit form of ψC from Lemma 3.2 of the main text, we have ψE(x, t) ≤ Φ ( − d√ t ) . Second, to bound the denominator of τ (i.e. Vol(E)), we observe that if κmax is large enough, by definition E contains a ball of radius 1κmax , and Vol(E) ≥ ωn κnmax where ωn denotes the volume of unit n-dimensional ball. That finally implies, τ ≤ ( Φ ( − d√ t )) n n−2 Vol(B(x, r)) Vol(E) ≤ ( Φ ( − d√ t )) n n−2 rnκnmax, which proves (27). If a point of maximum curvature is inside a concentric ball of radius r/2, thenE contains≈ κmax(r−d)2 balls of radius 1κmax , which implies that Vol(E) ≥ κmax(r − d) ( ωn κnmax ) . The rest of the proof is similar. Now, we give a curvature estimate which works in any regime, without any restrictions. The tradeoff is a global average bound of the Lp-type rather than pointwise estimates. Proposition A.6. In the setting as above, let us fix the distance d = d(x,N ). At each point of N , let us denote by κ the maximal sectional curvature of N at that point. The following estimate holds: ‖K‖L1 ≥ Vn(d, r)− 2ωnr nΦ ( − d√ t ) τH , (29) where Vn(d, r) denotes the volume of the smaller spherical cap at distance d, the constant ωn denotes the volume of unit ball in Rn, and the function K is an integral function of the curvature κ over lines (defined in (31) below). Proof. Again, we suitably bound the numerator and denominator of τ . 
Starting with the numerator, as explained in Proposition A.5, we have by monotonicity ψE(x, t) ≤ 2Φ ( − d√ t ) . (30) To bound the denominator of τ we proceed as follows. Let N be the graph of the function g̃(x1, · · · , xn−1), where the variables xj are taken from the hyperplane H∗ (Fig. 11) at distance d from x; the point at which N touches this hyperplane is taken as the origin. Let ϕ be a smooth cut-off function defined on the hyperplane such that ϕ ≡ 1 on the set S of all (x1, · · · , xn−1) such that g̃(x1, · · · , xn−1) ∈ B(x, r), and ϕ ≡ 0 outside the -tubular neighborhood of S. Finally, let g := ϕ g̃. Now we see that, letting a = (r2 − d2)1/2, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 g (ρ, θ) ρ n−2 dρ dθ. Now, if η denotes the unit vector in the direction of a fixed (ρ, θ), observing that g (0) = 0, we have by the fundamental theorem of calculus g (ρ, θ) = ∫ 1 0 ∂tg (tρη, θ) dt. In turn, applying the fundamental theorem a second time and observing that ∇g (0) = 0, we have that g (ρ, θ) = ∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt. Putting everything together we get, Vn(d, r)−Vol(E) ≤ ∫ a ρ=0 ∫ Sn−2 (∫ 1 0 ∫ 1 0 ∂s∂tg (stρη, θ) ds dt ) ρn−2 dρ dθ. Now, we define the following integral quantity: K (ρ, θ) = ∫ 1 0 ∫ 1 0 |κ (stρη, θ)| ds dt. (31) Noting that the maximum sectional curvature bounds the second derivatives, finally we have that Vn(d, r)−Vol(E) ≤ ‖K ‖L1 . (32) To obtain (29) we now put all the above estimates together and let ↘ 0. B APPENDIX B: GENERALIZATION BOUNDS AND COMPRESSION SCHEMES Background A main line of ML and statistical inference research addresses questions of generalization. To set the stage we start with some notation. Let us suppose that the dataset X is sampled from a probability distribution D, i.e. (x, y) ∼ D. Following conventions from the literature Arora et al. (2018) we define the expected margin loss of a classifier f by Lγ(f) := P(x,y)∼D [ f(x)[y] ≤ γ + max j=1,...,k;j 6=y f(x)[j] ] . (33) We use the notation L̂γ to denote the expected empirical margin loss over the given data set X . Finally, the generalization error is defined as Lγ − L̂γ . Quite roughly speaking, standard generalization results attempt to estimate the performance of the classifier on unseen samples (i.e. the full data distribution), thus yielding bounds of the form: Lγ1(f) ≤ L̂γ2(f) + F (γ1, γ2, f,X ), (34) where F is an additional term that usually depends, e.g. on the size of X , the expressiveness of f and further margin information (γ1, γ2). B.1 COMPRESSION IN A HEAT DIFFUSION SENSE IMPLIES GENERALIZATION BOUNDS We first state a well-known concentration inequality due to Hoeffding which will find repeated use in the ensuing sections: Proposition B.1 (Hoeffding’s inequality). Let X1, . . . , Xn be independent random variables taking values in the interval [0, 1], and let X = 1n (X1 + · · ·+Xn) be the empirical mean of these random variables. Then we have: P ( X − E ( X ) ≥ t ) ≤ e−2nt 2 . (35) We now provide the proof of Proposition 5.1 of the main text. Proof. The strategy of proof follows well-known "weak-law-of-large-numbers" concentration techniques in a spirit similar to Arora et al. (2018). Step 1. First, we show that for a given g as |X | → ∞, P(x,y)∼X (Cg(x, y, t1))→ P(x,y)∼D (Cg(x, y, t1)) , (36) where Cg(x, y, γ2) is the event that a Brownian path starting at x hits Eg(y) within time γ2. The rate of convergence is determined through Chernoff concentration bounds. Choose α ∈ A, and let gα be the corresponding classifier. 
Attached to each sample point xj , there is a Bernoulli random variable Xj which takes the value 1 if Cgα(xj , y, γ 2) happens, and 0 otherwise. Then, the average X = 1m ∑m j=1Xj is given by the average of m i.i.d. Bernoulli random variables each of whose expectations is given by P(x,y)∼D Cgα(x, y, γ2). Furthermore, we note that if a data sample is misclassified, then the Brownian particle almost surely will hit the error set. Combining this observation with the concentration estimate (35) above, we obtain L0(gα) ≤ P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + ξ, (37) with probability at least 1− e−2ξ2m. If each classifier gα has q parameters, each of which can take r discrete values, we take ξ = √ q log r m . Step 2. The estimate from the previous step should hold for every classifier gα in the family A with large probability. This is guaranteed by a union bound and tuning the Chernoff bounds from the convergence rate. More precisely, there are rq different choices α ∈ A, and hence by taking the union of the estimate in (37), one can say that P(x,y)∼D ( Cgα(x, y, γ 2) ) ≤ P(x,y)∼X ( Cgα(x, y, γ 2) ) + √ q log r m (38) with probability at least 1− e−q log r over all α ∈ A. Step 3. Finally one uses the fact that f is approximable by at least one g = gα0 for some α0 in A. Via Definition 1 of the main text, one sees that P(x,y)∼X ( Cgα0 (x, y, γ 2) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η, which finally gives that with probability at least 1− e−q log r, we have L0(g) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) + η +O (√ q log r m ) . (39) Remark B.2. As noted, a classifier f classifies a point x wrongly if and only if ψE(y)(x, t) = 1 for all time scales t. With this observation, and since (39) works for all real numbers γ, letting γ → 0, we have that with probability at least 1− e−q log r, L0(g) ≤ L̂0(f) + η +O (√ q log r m ) . This recovers a loss estimate which is similar to the estimate in Theorem 2.1 of [1]. Indeed, one can consider P(x,y)∼X ( Cf (x, y, γ 2 ) as a “soft” or probabilistic measure of classification with margin ≈ γ. When defining the notion of a compression, instead of taking a pointwise difference as in Definition 1 of Arora et al. (2018), we would like to capture the idea that the decision boundary of a good compression should be “close enough” to the decision boundary of the original classifier. In our context, this implies that their “heat signatures” at the sample points should be close enough at all time scales. As noted in the main text, Definition 1 is definitely one natural option to define goodness of compression in a heat-diffusion sense. Another natural way is to consider the Brownian motion’s running time and define a good approximation as follows: Definition 3. Given a positive real number η, a classifier g is said to be an η−compression w.r.t. hitting time of f if ψEg(y)(x, γ 2 − η) ≤ ψEf (y)(x, γ 2) ≤ ψEg(y)(x, γ 2 + η) (40) for all points x in the training sample, labels y and real numbers γ2 ≥ η. Analogously, we have the following Proposition B.3. Let us suppose that f is approximable by g in the sense of Definition 3. Here g ∈ A, where A is a family of classifiers Rn → R parametrized by q parameters assuming r discrete values. As before, for a classifier h, let Ch(x, y, t) be the event that a Brownian path starting at x hits Eh(y) within time t. Then we have L0(g) ≤ P(x,y)∼D ( Cgα(x, y, γ 2 − η) ) ≤ P(x,y)∼X ( Cf (x, y, γ 2) ) +O (√ q log r m ) (41) with probability at least 1− e−q log r. The proof proceeds similarly as above. 
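Before moving on, the central quantity in these bounds — the hitting probability $\psi_E(x,t)$ of a set $E$ by a Brownian path started at $x$, which defines the events $C_g(x,y,\gamma^2)$ above — can be estimated numerically. The following is a minimal Monte Carlo sketch (ours, not from the paper): it estimates the hitting probability of a half-space at distance $d$ from the starting point and compares it with the reflection-principle value $2\Phi(-d/\sqrt{t})$, which is the form of the upper bound used in (30). We assume the standard Brownian scaling (variance $t$ at time $t$); the paper's heat-kernel normalization may differ by a constant factor in the time variable, and the path and step counts below are arbitrary.

```python
# Monte Carlo sketch (illustrative, not from the paper) of the hitting probability
# psi_E(x, t) for a half-space E = {y : y_n >= d}, together with the Hoeffding
# slack of Proposition B.1 for the empirical estimate.
import numpy as np
from scipy.stats import norm

def hitting_probability_halfspace(d, t, n_paths=20000, n_steps=500, seed=0):
    """Estimate P(a 1D Brownian path reaches level d within time t).

    Only the coordinate normal to the bounding hyperplane matters for a half-space,
    so a one-dimensional simulation suffices. Discretizing the path slightly
    under-estimates the continuous-time hitting probability.
    """
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)          # B_{dt}, B_{2 dt}, ..., B_t
    return float((paths.max(axis=1) >= d).mean())  # fraction of paths that hit level d

d, t = 1.0, 0.5
estimate = hitting_probability_halfspace(d, t)
reflection = 2.0 * norm.cdf(-d / np.sqrt(t))       # reflection-principle value for the half-space
# Hoeffding (Proposition B.1): the Monte Carlo error exceeds xi with probability
# at most exp(-2 * n_paths * xi^2); here xi corresponds to confidence 1 - 1e-3.
xi = np.sqrt(np.log(1e3) / (2 * 20000))
print(f"estimate = {estimate:.4f}, 2*Phi(-d/sqrt(t)) = {reflection:.4f}, Hoeffding slack = {xi:.4f}")
```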
Letting $\gamma^2 \to \eta$ in the bound of Proposition B.3 gives us
$$L_0(g) \le \mathbb{P}_{(x,y)\sim\mathcal{X}}\left(C_f(x, y, \eta)\right) + O\left(\sqrt{\frac{q \log r}{m}}\right). \qquad (42)$$
Again, the first term on the RHS can be interpreted as the geometric margin of classification. In particular, if the classifier $f$ separates points by a distance of $\approx \sqrt{n\eta}$, then, since the Brownian motion travels $\approx \sqrt{n\eta}$, hitting the error set will happen only if a misclassification occurred, i.e. we have
$$\mathbb{P}_{(x,y)\sim\mathcal{X}}\left(C_f(x, y, \eta)\right) \approx L_0(f). \qquad (43)$$
B.2 A SHARP VARIANT OF THE JOHNSON-LINDENSTRAUSS ALGORITHM
Several state-of-the-art compression schemes utilize a dimensionality reduction in the spirit of Johnson-Lindenstrauss (JL), Arora et al. (2018). In this subsection we discuss a JL compression scheme that will later be coupled with and tuned by some heat-diffusion estimates. We begin by discussing a variant of JL (Alg. 1); an illustrative code sketch of this scheme is given below.
Algorithm 1: Compressing a matrix $A \in \mathbb{R}^{h_1 \times h_2}$
Data: Original matrix $A$ of dimension $h_1 \times h_2$, $\beta \in (0, 1)$.
Result: Stochastic compressed matrix $\hat{A}$ with $O\left(\log(h_1 h_2)/\beta\alpha^2\right)$ non-zero entries such that $\mathbb{P}\left[\|\hat{A}x - Ax\| \ge \alpha \|A\|_F \|x\|\right] \le \beta$.
Start with matrix $A$, real number $\alpha$;
while $i \le h_1$, $j \le h_2$ do: let $z_{ij} = 1$ with probability $p_{ij} = \frac{2 a_{ij}^2}{\beta \alpha^2 \|A\|_F^2}$, and $0$ otherwise; let $\hat{a}_{ij} = \frac{z_{ij} a_{ij}}{p_{ij}}$; end.
Return $\hat{A} = (\hat{a}_{ij})$.
Proposition B.4. Let $A$ be a matrix of dimension $h_1 \times h_2$. Then, one can find a compressed matrix $\hat{A}$ such that $\|Ax - \hat{A}x\| \le \alpha \|A\|_F \|x\|$ with probability at least $1 - \beta$, where the number of parameters of $\hat{A}$ is $O\left(\log(h_1 h_2)/\beta\alpha^2\right)$.
A proof of Proposition B.4 in the spirit of classical JL can be provided; however, here we introduce a Bernoulli scheme which is a minor modification of Algorithm 2 of Arora et al. (2018).
Proof. Define the random variables $z_{ij}$ which take the value $1$ with probability $p_{ij} = \frac{2 a_{ij}^2}{\beta \alpha^2 \|A\|_F^2}$, and the value $0$ otherwise. Define $\hat{a}_{ij} = \frac{z_{ij} a_{ij}}{p_{ij}}$. One can now calculate that $\mathbb{E}(\hat{a}_{ij}) = a_{ij}$ and $\mathrm{Var}(\hat{a}_{ij}) \le \beta \alpha^2 \|A\|_F^2$. Using the above, one can further calculate that $\mathbb{E}(\hat{A}x) = Ax$ and $\mathrm{Var}(\hat{A}x) \le \|x\|^2 \|A\|_F^2 \beta \alpha^2$. By Chebyshev's inequality, this gives us that $\mathbb{P}\left[\|\hat{A}x - Ax\| \ge \alpha \|A\|_F \|x\|\right] \le \beta$. Now, the expected number of non-zero entries in $\hat{A}$ is $\sum_{i,j} p_{ij} = \frac{2}{\beta\alpha^2}$. An application of Chernoff bounds now gives that with high probability the number of non-zero entries is $O\left(\log(h_1 h_2)/\beta\alpha^2\right)$.
B.3 HITTING PROBABILITY, CAPACITY SENSITIVITY AND COMPRESSION
As discussed in the main text, here we use hitting probabilities associated to the decision boundary to define a concept of "capacity sensitivity" of a neural net layer. The heuristic is: the lower the capacity sensitivity of a layer, the easier it is to compress the layer to one with fewer parameters. This goes in the spirit of current state-of-the-art results on compression and generalization bounds (Arora et al. (2018), Suzuki et al. (2018), Suzuki et al. (2020)). In particular, in Arora et al. (2018) the authors provide the notions of noise sensitivity and noise cushions motivated by Gaussian noise injections. Our first proposed definition for "heat-diffusion noise cushions" and capacity sensitivity goes as follows:
Definition 4. Let $\eta \sim \mathcal{N}$ be distributed along a noise distribution $\mathcal{N}$ concentrated in the ball $\|\eta\| \le \eta_0$. We define the capacity sensitivity $S(x, A_i; t)$ of a layer $A_i$ at the point $x$ as
$$S(x, A_i; t) := \mathbb{E}_{\eta\sim\mathcal{N}} \frac{\left|\psi_{E_f}(\phi(A_i(x + \|x\|\eta)), t) - \psi_{E_f}(\phi(A_i x), t)\right|}{\left|\psi_{E_f}(\phi(A_i x), t)\right|}. \qquad (44)$$
We denote the maximum and expected sensitivity respectively as $S_m(A_i; t$
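Returning to the compression scheme of Algorithm 1 and Proposition B.4, the following NumPy sketch (ours) implements the Bernoulli sparsification described above. Clipping $p_{ij}$ at $1$ is added for numerical safety (entries with $p_{ij} \ge 1$ are simply kept, which only helps the variance bound); the matrix size and the values of $\alpha$ and $\beta$ are arbitrary.

```python
# Bernoulli sparsification sketch of Algorithm 1 / Proposition B.4: keep entry a_ij
# with probability p_ij = 2*a_ij^2 / (beta * alpha^2 * ||A||_F^2) and rescale kept
# entries by 1/p_ij so the compressed matrix is unbiased.
import numpy as np

def compress_matrix(A, alpha, beta, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    fro2 = np.sum(A ** 2)                                   # ||A||_F^2
    p = np.minimum(2.0 * A ** 2 / (beta * alpha ** 2 * fro2), 1.0)
    z = rng.random(A.shape) < p                             # Bernoulli(p_ij) mask
    return np.where(z, A / np.maximum(p, 1e-12), 0.0)       # unbiased rescaling

rng = np.random.default_rng(0)
A = rng.normal(size=(64, 64))
alpha, beta = 0.5, 0.1
A_hat = compress_matrix(A, alpha, beta, rng)
x = rng.normal(size=64)
err = np.linalg.norm(A_hat @ x - A @ x)
bound = alpha * np.linalg.norm(A) * np.linalg.norm(x)       # alpha * ||A||_F * ||x||
print(f"non-zeros: {np.count_nonzero(A_hat)}, error {err:.3f} vs bound {bound:.3f}")
```

For a fixed vector $x$, Chebyshev's inequality guarantees that the printed error exceeds the bound with probability at most $\beta$, matching the statement of Proposition B.4.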
1. What is the main contribution of the paper in the field of machine learning? 2. What are the strengths of the proposed approach, particularly in terms of its ability to capture different geometric properties of the decision boundary? 3. What are the weaknesses of the paper, especially regarding its clarity and notation definitions? 4. Do you have any concerns or suggestions regarding the proposed generalization bound? 5. How does the reviewer assess the novelty and potential impact of the paper's ideas and experiments?
Review
Review The paper proposes an isocapacitory measure for analysing the decision boundary, in a way complementing the isoperimetric analysis proposed by Ford et al. 2019. The authors show that the new measure captures different geometric properties of the decision boundary, potentially useful for adversarial training. The paper also proposes a new generalisation bound, although it does not compare it with other generalisation bounds. The paper contains a number of interesting ideas and experiments. In general, the combination of heat diffusion and geometric analysis in the context of machine learning seems an interesting angle worth exploring. The paper in its current form is a bit difficult to read. The key ideas are not clearly separated from the existing work. Some mathematical notations are not clearly defined; in particular, many definitions rely on globally defined constants such as r and t, which have not been made clear to the reader. Below are some specific comments:
- Equations 2-5 should make clear that r and t are constants in the definitions or inequalities. The assumption r = sqrt(nt) should be made explicit; this affects the definition of c_n in Equation 4.
- Equation 3 is incorrect: when mu is less than 0.5, the RHS is negative.
ICLR
Title Meta-Reinforcement Learning With Informed Policy Regularization Abstract Meta-reinforcement learning aims at finding a policy able to generalize to new environments. When facing a new environment, this policy must explore to identify its particular characteristics and then exploit this information for collecting reward. We consider the online adaptation setting where the agent needs to trade-off between the two types of behaviour within the same episode. Even though policies based on recurrent neural networks can be used in this setting by training them on multiple environments, they often fail to model this trade-off, or solve it at a very high computational cost. In this paper, we propose a new algorithm that uses privileged information in the form of a task descriptor at train time to improve the learning of recurrent policies. Our method learns an informed policy (i.e., a policy receiving as input the description of the current task) that is used to both construct task embeddings from the descriptors, and to regularize the training of the recurrent policy through parameters sharing and an auxiliary objective. This approach significantly reduces the learning sample complexity without altering the representational power of RNNs, by focusing on the relevant characteristics of the task, and by exploiting them efficiently. We evaluate our algorithm in a variety of environments that require sophisticated exploration/exploitation strategies and show that it outperforms vanilla RNNs, Thompson sampling and the task-inference approaches to meta-reinforcement learning. 1 INTRODUCTION Deep Reinforcement Learning has been used to successfully train agents on a range of challenging environments such as Atari games (Mnih et al., 2013; Bellemare et al., 2013; Hessel et al., 2017) or continuous control (Peng et al., 2017; Schulman et al., 2017). Nonetheless, in these problems, RL agents perform exploration strategies to discover the environment and implement algorithms to learn a policy that is tailored to solving a single task. Whenever the task changes, RL agents generalize poorly and the whole process of exploration and learning restarts from scratch. On the other hand, we expect an intelligent agent to fully master a problem when it is able to generalize from a few instances (tasks) and achieve the objective of the problem under many variations of the environment. For instance, children know how to ride a bike (i.e., the problem) when they can reach their destination irrespective of the specific bike they are riding, which requires to adapt to the weight of the bike, the friction of the brakes and tires, and the road conditions (i.e., the tasks). How to enable agents to generalize across tasks has been studied in Multi-task Reinforcement Learning (e.g. Wilson et al., 2007; Teh et al., 2017), Transfer Learning (e.g. Taylor & Stone, 2011; Lazaric, 2012) and Meta-Reinforcement Learning (Finn et al., 2017; Hausman et al., 2018; Rakelly et al., 2019; Humplik et al., 2019). These works fall into two categories. Learning to learn approaches aim at speeding up learning on new tasks, by pre-training feature extractors or learning good initializations of policy weights (Raghu et al., 2019). In contrast, we study in this paper the online adaptation setting where a single policy is trained for a fixed family of tasks. 
When facing a new task, the policy must then balance exploration (or probing), to reduce the uncertainty about the current task, and exploitation to maximize the cumulative reward of the task. Agents are evaluated on their ability to manage this trade-off within a single episode of the same task. The online adaptation setting is a special case of a partially observed markov decision problem, where the unobserved variables are the descriptors of the current task. It is thus G1 G2 sign start possible to rely on recurrent neural networks (RNNs) (Bakker, 2001; Heess et al., 2015), since they can theoretically represent optimal policies in POMDPs if given enough capacity. Unfortunately, the training of RNN policies has often prohibitive sample complexity and it may converge to suboptimal local minima. To overcome this drawback, efficient online adaptation methods leverage the knowledge of the task at training time. The main approach is to pair an exploration strategy with the training of informed policies, i.e. policies taking the description of the current task as input. Probe-then-Exploit (PTE) algorithms (e.g. Zhou et al., 2019) operate in two stages. They first rely on an exploration policy to identify the task. Then, they commit to the identified task by playing the associated informed policy. Thompson Sampling (TS) approaches (Thompson, 1933; Osband et al., 2016; 2019) maintain a distribution over plausible tasks and play the informed policy of a task sampled from the posterior following a predefined schedule. PTE and TS are expected to be sample-efficient relatively to RNNs as learning informed policies is a fully observable problem. However, as we discuss in Section 3, PTE and TS cannot represent effective exploration/exploitation policies in many environments. Humplik et al. (2019) proposed an alternative approach, Task Inference (TI), which trains a full RNN policy with the current task prediction as an auxiliary loss. TI avoids the suboptimality of PTE/TS by not constraining the structure of the exploration/exploitation policy. However, in TI, the task descriptors are used as targets and not as inputs, so TI focuses on reconstructing even irrelevant features of the task descriptor and it does not leverage the faster learning of informed policies. In this paper, we introduce IMPORT (InforMed POlicy RegularizaTion), a novel policy architecture for efficient online adaptation that combines the rich expressivity of RNNs with the efficient learning of informed policies. At train time, a shared policy head receives as input the current observation, together with either a (learned) embedding of the current task, or the hidden state of an RNN such that the informed policy and the RNN policy are learned simultaneously. At test time, the hidden state of the RNN replaces the task embedding, and the agent acts without having access to the current task. This leads to several advantages: 1) IMPORT benefits from informed policy to speed up learning; 2) it avoids to reconstruct features of the task descriptor that are irrelevant for learning; and as a consequence, 3) it adapts faster to unknown environments, showing better generalization capabilities. We evaluate IMPORT against the main approaches to online adaptation on environments that require sophisticated exploration/exploitation strategies. We confirm that TS suffers from its limited expressivity, and show that the policy regularization of IMPORT significantly speeds up learning compared to TI. 
Moreover, the learnt task embeddings of IMPORT make it robust to irrelevant or minimally informative task descriptors, and able to generalize when learning on few training tasks. 2 SETTING LetM be the space of possible tasks. Each µ ∈ M is associated to an episodic µ-MDP Mµ = (S,A, pµ, rµ, γ) whose dynamics pµ and rewards rµ are task dependent, while state and action spaces are shared across tasks and γ is the discount factor. The descriptor µ can be a simple id (µ ∈ N) or a set of parameters (µ ∈ Rd). When the reward function and the transition probabilities are unknown, RL agents need to devise a strategy that balances exploration to gather information about the system and exploitation to maximize the cumulative reward. Such a strategy can be defined as the solution of a partially observable MDP (POMDP), where the hidden variable is the descriptor µ of the MDP. Given a trajectory τt = (s1, a1, r1, . . . , st−1, at−1, rt−1, st), a POMDP policy π(at|τt) maps the trajectory to actions. In particular, the optimal policy in a POMDP is a history-dependent policy that uses τt to construct a belief state bt, which describes the uncertainty about the task at hand, and then maps it to the action that maximizes the expected sum of rewards (e.g. Kaelbling et al., 1998). In this case, maximizing the rewards may require taking explorative actions that improve the belief state enough so that future actions are more effective in collecting reward. The task is sampled at the beginning of an episode from a distribution q(µ). After training, the agent returns a policy π(at|τt) that aims at maximizing the average performance across tasks generated from q, i.e., Eµ∼q(µ) [ |τ |∑ t=1 γt−1rµt ∣∣∣∣π]. (1) where the expectation is taken over a full-episode trajectory τ and task distribution q, and |τ | is the length of the trajectory. The objective is then to find an architecture for π that is able to express strategies that perform the best according to Eq. 1 and, at the same time, can be efficiently learned even for moderately short training phases. At training time, we assume the agent has unrestricted access to the task descriptor µ. Access to such a task descriptor during training is a common assumption in the multi-task literature and captures a large variety of concrete problems. It can be of two types: i) a vector of features corresponding to (physical) parameters of the environment/agent (for instance, such features maybe available in robotics, or when learning on a simulator) (Yu et al., 2018; Mehta et al., 2019; Tobin et al., 2017). ii) It can be a single task identifier (i.e an integer) which is a less restrictive assumption (Choi et al., 2001; Humplik et al., 2019) and corresponds to different concrete problems: learning in a set of M training levels in a video game, learning to control M different robots or learning to interact with M different users. 3 RELATED WORK AND CONTRIBUTIONS In this section, we review how the online adaptation setting has been tackled in the literature. The main approaches are depicted in Fig. 2. We first compare the different methods in terms of expressiveness, and whether they leverage the efficient learning of informed policies. We then discuss learning task embeddings and how the various methods deal with unknown or irrelevant task descriptors. The last subsection summarizes our contributions. Evaluation of RL agent in Meta-Reinforcement Learning. 
The online adaptation evaluation setting is standard in the Meta-RL literature (Yu et al., 2017; Humplik et al., 2019) but is not the only way to evaluate agents on unseen tasks in the meta-RL literature. Indeed, several works have considered that given a new task, an agent is given an amount of ”free” interactions episodes or steps to perform system identification, then is evaluated on the cumulative reward on one (Bharadhwaj et al., 2019; Rakelly et al., 2019) or several execution episodes (Liu et al., 2020). This is different to what we study here where the agent has to identify the task to solve and solved it within one episode, the reward of the agent being considered during all these steps. Online Adaptation with Deep RL. In the previous section we mentioned that the best strategy corresponds to the optimal policy of the associated POMDP. Since the belief state bt is a sufficient statistic of the history τt, POMDP policies takes the form π(at|τt) = π(at|st, bt). While it is impractical to compute the exact belief state even for toy discrete problems, approximations can be learnt using Recurrent Neural Networks (RNNs) (Bakker, 2001; Heess et al., 2015). RNN-based policies are trained to maximize the cumulative reward and do not leverage task descriptors at train time. While this class of policies can represent rich exploratory strategies, their large training complexity makes them impractical. In order to reduce the training complexity of RNN policies, existing strategies have constrained the set of possible exploratory behaviors by leveraging privileged information about the task. ProbeThen-Exploit (PTE) (e.g. Zhou et al., 2019) works in two phases. First, it executes a pure exploratory policy with the objective of identifying the underlying task µ, i.e. maximizing the likelihood of the task, then runs the optimal policy associated to the estimated task. Both the probing and the informed policies are learned using task descriptors, leading to a much more efficient training process. PTE has two main limitations. First, similarly to explore-then-commit approaches in bandits (e.g. Garivier et al., 2016), the exploration can be suboptimal because it is not reward-driven: valuable time is wasted to estimate unnecessary information. Second, the switch between probing and exploiting is hard to tune and problem-dependent. Thompson Sampling (TS) (Thompson, 1933) leverages randomization to mix exploration and exploitation. Similarly to the belief state of an RNN policy, TS maintains a distribution over task descriptors that represents the uncertainty on the current task given τt. The policy samples a task from the posterior and executes the corresponding informed policy for several steps. Training is limited to learning informed policies together with a maximum likelihood estimator to map trajectories to distributions over tasks. This strategy proved successful in a variety of problems (e.g. Chapelle & Li, 2011; Osband & Roy, 2017). However, as shown in Fig. 1, TS cannot represent certain probing policies because it is constrained to executing informed policies. Another drawback of TS approaches is that the re-sampling frequency needs to be carefully tuned. The Task Inference (TI) approach (Humplik et al., 2019) is a RNN trained to simultaneously learn a good policy and predict the task descriptor µ. 
Denoting by m : H → Z the mapping from histories to a latent representation of the belief state (Z ⊆ Rd), the policy π(at|zt) selects the action based on the representation zt = m(τt) constructed by the RNN. During training, zt is also used to predict the task descriptor µ, using the task-identification module g : Z →M. The overall objective is: E [ |τ |∑ t=1 γt−1rµt ∣∣∣π]+ βE[ |τ |∑ t=1 `(µ, g(zt)) ∣∣∣π] (2) where `(µ, g(zt)) is the log-likelihood of µ under distribution g(zt). The auxiliary loss is meant to structure the memory of the RNN m rather than be an additional reward for the policy, so training is done by ignoring the effect of m on π when computing the gradient of the auxiliary loss with respect to m. Humplik et al. (2019) proposed two variants, AuxTask and TI, described in Fig. 2 (b) and (c). In TI, the gradient of the policy sub-network is not backpropagated through the RNN (the dashed green arrow in Fig. 2c, and the policy subnetwork receives the original state features as additional input. For both AuxTask and TI, the training of π in TI is purely reward-driven, so they do not suffer from the suboptimality of PTE/TS. However, in contrast to PTE/TS, they do not leverage the smaller sample complexity of training informed policies, and the auxiliary loss is defined over the whole value of µ while only some dimensions may be relevant to solve the task. Learning Task Embeddings While in principle the minimal requirement for the approaches above is access to task identifiers, i.e. one-hot encodings of the task, these approaches are sensitive to the encoding on task descriptions, and prior knowledge on them. In particular, irrelevant variables have a significant impact on PTE approaches since the probing policy aims at identifying the task. For instance, an agent might waste time reconstructing the full µ when only part of µ is needed to act optimally w.r.t the reward. Moreover, TS, TI and AuxTask are guided by a prior distribution over µ that has to be chosen by hand to fit the ground-truth distribution of tasks. Rakelly et al. (2019) proposed to use a factored Gaussian distribution over transitions as a task embedding architecture rather than a RNN. Several approaches have been proposed to learn task embeddings (Gupta et al., 2018; Rakelly et al., 2019; Zintgraf et al., 2019; Hausman et al., 2018). The usual approach is to train embeddings of task identifiers jointly with the policies. Humplik et al. (2019) mentions using TI with task embeddings, but the embeddings are pre-trained separately, which requires either additional interactions with the environment or expert traces. Nonetheless, we show in our experiments that TI can be used with task descriptors, considering task prediction as a multiclass classification problem. Summary of the contributions As for RNN/TI, IMPORT learns an RNN policy to maximize cumulative reward, with no decoupling between probing and exploitation. As such, our approach does not suffer from scheduling difficulties instrinsic to PTE/TS approaches. On the other hand, similarly to PTE/TS and contrarily to RNN/TI, IMPORT leverages the fast training of informed policies through a joint training of an RNN and an informed policy. In addition, IMPORT does not rely on probabilistic models of task descriptors. 
Learning task embeddings makes the approach robust to irrelevant task descriptors contrary to TI, makes IMPORT applicable when only task identifiers are available and able to better generalize when few training tasks are available.‘ Algorithm 1 IMPORT Training Initialize σ, ω, θ randomly for k = 1, . . . ,K do if k is odd then Collect M transitions following πH Update σ, ω and the parameters of the value function of (A) based on objective (A) + (C) else Collect M transitions following πµ Update σ, θ, ω and the parameters of the value function of (B) based on objective (B) + (C) end if end for 4 METHOD In this section, we describe the main components of the IMPORT model (described in Fig. 2), as well as the online optimization procedure and an additional auxiliary loss to further speed-up learning. Our approach leverages the knowledge of the task descriptor µ and informed policies to construct a latent representation of the task that is purely reward driven. Since µ is unknown at testing time, we use this informed representation to train a predictor based on a recurrent neural network. To leverage the efficiency of informed policies even in this phase, we propose an architecture sharing parameters between the informed policy and the final policy such that the final policy will benefit from parameters learned with privileged information. The idea is to constrain the final policy to stay close to the informed policy while allowing it to perform probing actions when needed to effectively reduce the uncertainty about the task. We call this approach InforMed POlicy RegularizaTion (IMPORT). Formally, we denote by πµ(at|st, µ) and πH(at|τt) the informed policy and the history-dependent (RNN) policy that is used at test time. The informed policy πµ = φ ◦ fµ is the functional composition of fµ and φ, where fµ :M→ Z projects µ in a latent space Z ⊆ Rk and φ : S × Z → A selects the action based on the latent representation. The idea is that fµ(µ) captures the relevant information contained in µ while ignoring dimensions that are not relevant for learning the optimal policy. This behavior is obtained by training πµ directly to maximize the task reward rµ. While πµ leverages the knowledge of µ at training time, πH acts based on the sole history. To encourage πH to behave like the informed policy while preserving the ability to probe, πH and πµ share φ, the mapping from latent representations to actions. We thus define as πH = φ ◦ fH where fH : H → Z encodes the history into the latent space. By sharing the policy head φ, the approximate belief state constructed by the RNN is mapped to the same latent space as µ. When the uncertainty about the task is small, πH then benefits from the joint training with πµ. More precisely, let θ, ω, σ the parameters of φ, fH and fµ respectively, so that πσθµ (at|st, µ) = φθ ◦ fσµ = φθ(at|st, fσµ (µ)) and πωθH (at|τt) = φθ ◦ fωH = φθ(at|st, fωH(τt)). The goal of IMPORT is to maximize over θ, ω, σ the objective function defined in Eq. 3. E [ |τ |∑ t=1 γt−1rµt ∣∣∣∣πω,θH ]︸ ︷︷ ︸ (A) +E [ |τ |∑ t=1 γt−1rµt ∣∣∣∣πσ,θµ ]︸ ︷︷ ︸ (B) −βE [ |τ |∑ t=1 D ( fµ(µ), fH(τt) ) ] ︸ ︷︷ ︸ (C) (3) Speeding Up the Learning. The optimization of (B) in Eq. 3 produces a reward-driven latent representation of the task through fµ. In order to encourage the history-based policy to predict a task embedding close to the one predicted by the informed policy, we augment the objective with an auxiliary loss (C) weighted by β > 0. D is the squared 2-norm in our experiments. 
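To make this architecture concrete, the following PyTorch sketch (ours; the layer sizes, the single-loss interface, and the use of detach() are illustrative simplifications, not the paper's exact implementation) shows how the task encoder f_µ, the GRU-based history encoder f_H, and the shared policy head φ of Eq. 3 fit together. The alternating A2C-GAE updates of Algorithm 1 and the exact routing of the auxiliary gradient are not reproduced here.

```python
# Simplified sketch of the IMPORT architecture: informed policy pi_mu = phi o f_mu,
# recurrent policy pi_H = phi o f_H, and the auxiliary term (C) pulling f_H(tau_t)
# towards f_mu(mu) in the shared latent space (squared 2-norm).
import torch
import torch.nn as nn

class ImportPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, mu_dim, z_dim=16, hidden=64):
        super().__init__()
        self.f_mu = nn.Sequential(nn.Linear(mu_dim, z_dim), nn.Tanh())         # task embedding f_mu
        self.encoder = nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.Tanh())
        self.gru = nn.GRU(hidden, z_dim, batch_first=True)                     # history encoder f_H
        self.phi = nn.Sequential(nn.Linear(obs_dim + z_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, act_dim))                   # shared head (action logits)

    def informed_logits(self, obs, mu):
        return self.phi(torch.cat([obs, self.f_mu(mu)], dim=-1))               # pi_mu

    def recurrent_logits(self, obs_seq, prev_act_seq):
        # prev_act_seq: one-hot previous actions, zeros at episode start
        z_seq, _ = self.gru(self.encoder(torch.cat([obs_seq, prev_act_seq], dim=-1)))
        return self.phi(torch.cat([obs_seq, z_seq], dim=-1)), z_seq            # pi_H

    def auxiliary_loss(self, z_seq, mu):
        # term (C): squared 2-norm between history and task embeddings; the task
        # embedding is detached here so the auxiliary gradient only shapes f_H.
        target = self.f_mu(mu).unsqueeze(1).detach()
        return ((z_seq - target) ** 2).sum(-1).mean()

# toy shapes: a batch of 4 trajectories of length 10
model = ImportPolicy(obs_dim=5, act_dim=2, mu_dim=5)
obs = torch.randn(4, 10, 5); prev_act = torch.zeros(4, 10, 2); mu = torch.randn(4, 5)
logits_h, z = model.recurrent_logits(obs, prev_act)
logits_mu = model.informed_logits(obs[:, 0], mu)
loss_aux = model.auxiliary_loss(z, mu)
```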
Note that because we treat the objective (C) as an auxiliary loss, only the average gradient of D with respect to fH is backpropagated, ignoring the effect of fH on πH . The expectation of (C) is optimized over trajectories generated using πω,θH and π σ,θ µ , respectively used to compute (A) and (B). Optimization. IMPORT is trained using Advantage Actor Critic (A2C) (Mnih et al., 2016) with generalized advantage estimation (GAE) (Schulman et al., 2015). There are two value functions1, one for each objective (A) and (B). The algorithm is summarized in Alg. 1. Each iteration collects a batch of M transitions using either πH or πµ.2 If the batch is sampled according to πH , we update with A2C-GAE the parameters of the policy ω and θ according to both objectives (A) and (C), as well as the parameters of the value function associated to objective (A). If the batch is sampled according to πµ, we update with A2C-GAE the parameters of the policy σ and θ according to both objectives (B) and (C), as well as the parameters of the value function associated to objective (B). 5 EXPERIMENTS We performed experiments on five environments. The CartPole and Acrobot environments from OpenAI Gym (Brockman et al., 2016), where the task descriptor µ represents parameters of the physical system, e.g., the weight of the cart, the size of the pole, etc. The dimension of µ is 5 for Cartpole and 7 for Acrobot. The entries of µ are normalized in [−1, 1] and sampled uniformly. These environments provide basic comparison points where the optimal exploration/exploitation policy is relatively straightforward, since the dynamics can be inferred from a few actions. The Bandit environment is a standard Bernoulli multi-armed bandit problem with K arms. The vector µ ∈ RK denotes the probability of success of the independent Bernoulli distributions. Each dimension of µ is sampled uniformly between 0 and 0.5, the best arm is randomly selected and associated to a probability of 0.9. An episode is 100 arm pulls. At every timestep the agent is allowed to pull an arm 1In our implementation, the value network is shared and takes as an input either fµ(µ) or fH(τt). 2In practice, data collection is multithreaded. We collect 20 transitions per thread with 24 to 64 threads depending on the environment, based on available GPU memory in [1, K] and observes the resulting reward. Although relatively simple, this environment assesses the ability of algorithms to learn nontrivial probing/exploitation strategies. The Tabular MDP environment is a finite MDP with S states and A actions such that the transition matrix is sampled from a flat Dirichlet distribution, and the reward function is sampled from a uniform distribution in [0, 1] as in Duan et al. (2016). In that case, µ is the concatenation of the transition and the reward functions, resulting in a vector of size S2A + SA. This environment is much more challenging as µ is high-dimensional, there is nearly complete uncertainty on the task at hand and each task is a reinforcement learning problem. Finally, the Maze 3D environment is a 3D version of the toy problem depicted in Fig. 1, implemented using gym-miniworld (Chevalier-Boisvert, 2018). It has three discrete actions (forward, left, right) and the objective is to reach one of the two possible goals (see Figure 15 in appendix), resulting in a reward of +1 (resp. −1) when the correct (resp. wrong) goal is reached. The episode terminates when the agent touches a box or after 100 steps. 
The agent always starts at a random position, with a random orientation. The information about which goal to reach at each episode is encoded by the use of two different textures on the wall located at the opposite side of the maze w.r.t. the goals. This domain allows to evaluate the models when observations are high dimensional (3 × 60 × 60 RGB images). The maximum episode length is 100 on CartPole, Bandit, Tabular-MDP and Maze3D, and 500 on Acrobot. To evaluate the ability of IMPORT and the baselines to deal with different types of task descriptors µ, we also perform experiments on CartPole and Tabular-MDP in the setting where µ is only a task identifier (i.e., a one-hot vector representing the index of the training task) which is a very weak supervision available at train time. We compare to previously discussed baselines. First, a vanilla RNN policy (Heess et al., 2015) using GRUs that never uses µ. Second, we compare to TS, TI and AuxTask, with µ only observed at train time, similarly to IMPORT. For TS, at train time, the policy conditions on the true µ, whereas at test time, the policy conditions on an estimated µ̂ resampled from the posterior every k steps where k ∈ {1, 5, 10, 20}. On bandits, UCB (Auer, 2002) with tuned exploration parameters is our topline. Implementation details Contrarily to IMPORT, TS, TI and AuxTask are based on maximizing the log-likelihood of µ. When using informative task descriptors (i.e. a vector of real values), the log-likelihood uses a Gaussian distribution with learnt mean and diagonal covariance matrix. For the bandit setting, we have also performed experiments using a beta distribution which may be more relevant for this type of problem. When using task identifiers, a multinomial distribution is used. All approaches are trained using A2C with Generalized Advantage Estimation (Mnih et al., 2016; Schulman et al., 2015). The precise values of the hyper-parameters and architectures are given in Appendix B.2. All approaches use similar network architectures with the same number of hidden layers and units. Evaluation The meta-learning scenario is implemented by sampling N training tasks, N validation tasks and 10, 000 test tasks with no overlap between task sets (except in Maze3D where there is only two possible tasks). Each sampled training task is given a unique identifier. Each model is trained on the training tasks, and the best model is selected on the validation tasks. We report the performance on the test tasks, averaged over three trials with different random seeds, corresponding to different sets of train/validation/test tasks. Training uses a discount factor, but for validation and test, we compute the undiscounted cumulative reward on the validation/test tasks. The learning curves show test reward as a function of the environment steps. They are the average of the three curves associated to the best validation model of each of the three seeds used to generate different tasks sets. Overall performances. IMPORT performs better than its competitors in almost all the settings. For instance, on CartPole with 10 tasks (see Table 1), our model reaches 94.4 reward while TI reaches only 91.5. Qualitatively similar results are found on Acrobot (Table 5 in Appendix), as well as on Bandit with 20 arms (Table 3), even though AuxTask performs best with only 10 arms. IMPORT particularly shines when µ encodes complex information, as on Tabular-MDP (see Table 2) where it outperforms all baselines in all settings. 
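For concreteness, a Tabular-MDP task as described above can be generated with a few lines of NumPy; the sketch below is ours and the values of S and A are arbitrary. Transitions are sampled from a flat Dirichlet distribution, rewards uniformly in [0, 1], and µ is the concatenation of both, of dimension S²A + SA.

```python
# Sketch (illustrative) of sampling a Tabular-MDP task and its descriptor mu.
import numpy as np

def sample_tabular_mdp(S=5, A=3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    P = rng.dirichlet(np.ones(S), size=(S, A))      # P[s, a] is a distribution over next states
    R = rng.uniform(0.0, 1.0, size=(S, A))          # reward for taking action a in state s
    mu = np.concatenate([P.ravel(), R.ravel()])     # task descriptor of size S^2*A + S*A
    return P, R, mu

P, R, mu = sample_tabular_mdp()
assert mu.shape[0] == 5 * 5 * 3 + 5 * 3             # S^2*A + S*A = 90
```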
By varying the number of training tasks on CartPole and Acrobot, we also show that IMPORT’s advantage over the baselines is larger with fewer training tasks. In all our experiments, as expected, the vanilla RNN performs worse than the other algorithms. Sample Efficiency. Figure 5 shows the convergence curves on CartPole with 10 and 100 training tasks and are representative of what we obtain on other environments (see Appendix). IMPORT tends to converge faster than the baselines. We also observe a positive effect of using the auxiliary loss (β > 0) on sample efficiency, in particular with few training tasks. Note that using the auxiliary loss is particularly efficient in environments where the final policy tends to behave like the informed on. Influence of µ. The experiments with uninformative µ (i.e., task identifiers) reported in Table 1 and 2 for CartPole and Tabular-MDP respectively show that the methods are effective even when the task descriptors do not include any prior knowledge. In the two cases, IMPORT can use these tasks descriptors to generalize well. Moreover, experimental results on CartPole (Fig. 11) and Tabular MDP (Fig. 17) suggest that when µ is a vector of features (and not a task identifier only) , it improves sample efficiency but does not change the final performance. This can be explained by the fact that informed policies are faster to learn with features in µ since, in that case, µ is capturing similarities between tasks. Equivalent performance of IMPORT on both types of task descriptors is observed and shows that our method can deal with different (rich and weak) task descriptors. We further analyze the impact of the encoding of µ on the models, by using non-linear projections of the informative µ to change the shape of the prior knowledge. Figure 5c shows the learning curves of TI and IMPORT on CartPole with task identifiers, the original µ and polynomial expansions of µ of order 2 and 3, resulting in 21 and 56 features. IMPORT’s task embedding approach is robust to the encoding of µ, while TI’s log-likelihood approach underperforms with the polynomial transformation. Task embeddings. To have a qualitative assessment of the task embedding learnt by IMPORT, we consider a bandit problem with 10 arms and embedding dimension 16. Figure 6 shows the clusters of task embeddings obtained with t-SNE (Maaten & Hinton, 2008). Each cluster maps to an optimal arm, showing that IMPORT structures the embedding space based on the relevant information. In K = 10 K = 20 IMPORT 77.5(0.2) 56.6(0.1) AuxTask (Gaussian) 78.7(0.4) 50.5(1.6) AuxTask (Beta) 78.2(0.7) 37.1(0.6) RNN 73.6(0.7) 32.1(1.2) TI (Gaussian) 73.7(1.6) 41.4(2.4) TI (Beta) 79.5(0.1) 53.3(2.4) TS (Gaussian) 50.4(0.4) 38.8(2.0) TS (Beta) 41.3(1.5) 36.3(1.1) UCB 78.5(0.3) 68.2(0.4) Table 3: Bandits performance for K = 10 and K = 20 arms, with N = 100 training tasks. Figure 6: Task embeddings learnt on Bandit (10 arms). Colors indicate the best arm. addition, we have studied the influence of the β hyperparameter from Eq. 3 (in Fig. 4 and Section D). It shows that the auxiliary loss helps to speed-up the learning process, but is not necessary to achieve great performance. High dimensional input space. We show the learning curves on the Maze3D environment in Figure 5d. IMPORT is succeeding in 90% of cases (reward ≈ 0.8), while TI succeeds only in 70% of cases. This shows that IMPORT is even more effective with high-dimensional observations (here, pixels). 
IMPORT and TI benefit from knowing µ at train time, which allows them to rapidly identify that the wall texture behind the agent is informative, while the vanilla RNN struggles and reaches random goals. TS is not reported since this environment is a typical failure case as discussed in Fig.1. Additional results. In Appendix C.1, we show that IMPORT outperforms TI by a larger margin when the task embedding dimension is small. We also show that IMPORT outperforms its competitors in dynamic environments, i.e., when the task changes during the episode. 6 CONCLUSION We proposed a new policy architecture for meta reinforcement learrning. The IMPORT model is trained only on the reward objective, and leverages the informed policy to discover effective trade-offs between exploration and exploitation. It is thus able to learn better strategies than Thompson Sampling approaches, and faster than recurrent neural network policies and Task Inference approaches. A THE IMPORT ALGORITHM The algorithm is described in details in Algorithm 2. In our implementation, the value function network used for (A) and (B) is the same, i.e. shared. We specialize the input, i.e. for (A) the input will be (st, fH(τt)) and (st, fµ(µt)) for (B). Algorithm 2 Details of IMPORT Training Initialize σ, ω, θ, ν arbitrarily Hyperparameters: Number of iterations K, Number of transitions per update steps M , discount factor γ, GAE parameter γGAE , Adam learning rate η, weighting of the (C) objective β, weighting of the entropy objective λh, weighting of the critic objective λc Optim = Adam(η) for k = 1, . . . ,K do if k is odd then Collect M transitions according to πH in buffer BH . else Collect M transitions according to πµ in buffer Bµ. end if δσ, δω, δθ = 0, 0, 0 Rµ ← compute gae returns(Bµ, γGAE) RH ← compute gae returns(BH , γGAE) δθ,ω += 1 |BH | ∑ b∈BH ∑T t=1[R µ,b t − Vν(sbt , zbt )]∇θ,ω log πH(abt |sbt , zbt ) δθ,ω += λh |BH | ∑ b∈BH ∑T t=1∇θ,ωH ( πH(a b t |sbt , zbt ) ) δω −= 2β|BH | ∑ b∈BH ∑T t=1[f ω H(s b t , z b t )− fµ(sbt , µbt)]∇ωfωH(sbt , zbt ) δν −= 2λc|BH | ∑ b∈BH ∑T t=1[R H,b t − Vν(sbt , zbt )]∇νVν(sbt , zbt ) δθ,σ += 1 |Bµ| ∑ b∈Bµ ∑T t=1[R H,b t − Vν(sbt , µbt)]∇θ,σ log πµ(abt |sbt , µbt) δθ,σ += λh |Bµ| ∑ b∈Bµ ∑T t=1∇θ,σH ( πµ(a b t |sbt , µbt) ) δν −= 2λc|Bµ| ∑ b∈Bµ ∑T t=1[R µ,b t − Vν(sbt , µbt)]∇νVν(sbt , µbt) θ ← Optim(θ, δθ) ω ← Optim(ω, δω) σ ← Optim(σ, δσ) ν ← Optim(ν, δν) end for B IMPLEMENTATION DETAILS B.1 DATA COLLECTION AND OPTIMIZATION We focus on on-policy training for which we use the actor-critic method A2C (Mnih et al., 2016) algorithm with generalized advantage estimation. We use a distributed execution to accelerate experience collection. Several worker processes independently collect trajectories. As workers progress, a shared replay buffer is filled with trajectories and an optimization step happens when the buffer’s capacity bs is reached. After model updates, replay buffer is emptied and the parameters of all workers are updated to guarantee synchronisation. B.2 NETWORK ARCHITECTURES The architecture of the different methods remains the same in all our experiments, except that the number of hidden units changes across considered environments and we consider convolutional neural networks for the Maze3d environment. A description of the architectures of each method is given in Fig. 2. Unless otherwise specified, MLP blocks represent single linear layers activated with a tanh function and their output size is hs. 
All methods aggregate the trajectory into an embedding zt using a GRU with hidden size hs. Its input is the concatenation of representations of the last action at−1 and current state st obtained separately. Actions are encoded as one-hot vectors. When episodes begin, we initialize the last action with a vector of zeros. For bandits environments, the current state corresponds to the previous reward. TS uses the same GRU architecture to aggregate the history into zt. All methods use a softmax activation to obtain a probability distribution over actions. The use of the hidden-state zt differs across methods. While RNNs only use zt as an input to the policy and critic, both TS and TI map zt to a belief distribution that is problem-specific, e.g. Gaussian for control problems, Beta distribution for bandits, and a multinomial distribution for Maze and CartPole-task environments. For instance, zt is mapped to a Gaussian distribution by using two MLPs whose outputs of size |µ| correspond to the mean and variance. The variance values are mapped to [0, 1] using a sigmoid activation. IMPORT maps zt to an embedding fH , whereas the task embedding fµ is obtained by using a tanh-activated linear mapping of µt. Both embeddings have size hsµ, tuned by cross-validation onto a set of validation tasks. The input of the shared policy head φ is the embedding associated with the policy to use, i.e. either fH when using πH or fµ when using fµ. For the Maze3d experiment and in all methods, we pre-process the pixel input st with three convolutional layers (with output channels 32, stride is 2 and respective kernel sizes are 5, 5 and 4) and LeakyReLU activation. We also use a batch-norm after each convolutional layer. The output is flattened, linearly mapped to a vector of size hs and tanh-activated. C EXPERIMENTS In this section, we explain in deeper details the environments and the set of hyper-parameters we considered. We add learning curves of all experiments to supplement results from Table 1, 2, 3 and 5 in order to study sample efficiency. Task descriptor. Note that for CartPole and Acrobot µ is normalized to be in [−1, 1]D where D is the task descriptor dimension. The task distribution q is always uniform, see the description of the environments for details. For experiments with task identifiers, we associate to each sampled task an integer value corresponding to the order of generation, and encode it usong a one-hot vector. Hyperparameters. Hyperparameter ranges are specified in Table 4. For TS, we consider sampling µ from the posterior dynamics distribution every k steps with k ∈ {1, 5, 10, 20}. C.1 CARTPOLE. We consider the classic CartPole control environment where the environment dynamics change within a set M (|µ| = 5) described by the following physical variables: gravity, cart mass, pole mass, pole length, magnetic force. Their respective pre-normalized domains are [4.8, 14.8], [0.5, 1.5], [0.01, 0.19], [0.2, 0.8], and [−10, 10]. The value of µ are uniformly sampled. Knowing some components of µ might not be required to behave optimally. The discrete action space is {−1, 1}. Episode length is T = 100. Final performance and sample efficiency. Table 1 shows IMPORT’s performance is marginally superior to other methods in most settings. Learning curves in Figure 7 allow analyzing the sample efficiency of the different methods. Overall, IMPORT is more sample efficient than other methods in the privileged information µ setting. 
Moreover, the use of the auxiliary loss (β > 0) usually speed-up the learning convergence by enforcing the RNN to quickly produce a coherent embedding. We can see that only sharing parameters (β = 0) already helps improving over RNNs. Non-stationary environments. We consider the non-stationary version of CarPole environment where at each timestep, there is a probability ρ = 0.05 to sample a new dynamic µ. Table 8 shows that the performance of IMPORT, AuxTask and TI are comparable in these settings. Size of built embeddings. We now study the impact of the task embedding representation size. As can be seen from Figure 10, IMPORT’s performance remains stable for different representation sizes in {2, 4, 8, 16} whereas TI’s sample efficiency decreases with this dimension. Trajectory and task embeddings. In Figure 11, we plot both the evolution of fH(τt) during an episode of the final model obtained training IMPORT with two-dimensional task embeddings on CartPole with task identifiers (left) and task embedding fµ(µ) learnt by the informed policy (right). As expected, the history embedding gets close to the task embedding after just a few timesteps (left). Interestingly, task embeddings fµ(µ) are able to capture relevant information from the task. For instance, they are highly correlated with the magnetic force which is a very strong factor to “understand” from each new environment to control the system correctly. At the opposite, gravity is less correlated since it does not influence the optimal policy – whatever the gravity is, if the pole is on the left, then you have to go right and vice-versa. Acrobot consists of two joints and two links, where the joint between the two links is actuated. Initially, the links are hanging downwards, and the goal is to swing the end of the lower link up to a given height. Environment dynamics are determined by the length of the two links, their masses, their maximum velocity. Their respective pre-normalized domains are [0.5, 1.5], [0.5, 1.5], [0.5, 1.5], [0.5, 1.5], [3π, 5π] and [7π, 11π]. Unlike CartPole, the environment is stochastic because the simulator applies noise to the applied force. The action space is {−1, 0, 1}. We also add an extra dynamics parameter which controls whether the action order is inverted, i.e. {1, 0,−1}, thus |µ| = 7. Episode length is 500. IMPORT outperforms all baselines in settings with small training task sets (Figure 12 and Table 5) and perform similarly to TI on larger training task sets. C.3 BANDITS The Bandit environment is a standard Bernoulli multi-armed bandit problem with K arms. The vector µ ∈ RK denotes the probability of success of the independent Bernoulli distributions. Each dimension of µ is sampled uniformly between 0 and 0.5, the best arm is randomly selected and associated to a probability of 0.9. Although relatively simple, this environment assesses the ability of algorithms to learn nontrivial exploration/exploitation strategies. Note that it is not surprising that UCB outperforms the other algorithms in this setting. UCB is an optimal algorithm for MAB and we have optimized it for achieving the best empirical performance. Moreover, IMPORT cannot leverage correlations between tasks since, due to the generation process, tasks are independent. We visualize the task embeddings learnt by the informed policy in 13. 
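To illustrate this task distribution and the UCB topline, here is a small NumPy sketch (ours; the exploration constant and the number of sampled tasks are arbitrary and not the tuned values used in the paper). Each task draws the arm probabilities uniformly in [0, 0.5] and sets one randomly chosen arm to 0.9; an episode is 100 pulls.

```python
# Sketch (illustrative) of the Bernoulli bandit task distribution of C.3 and a
# simple UCB1-style baseline evaluated over many sampled tasks.
import numpy as np

def sample_bandit_task(K=10, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    mu = rng.uniform(0.0, 0.5, size=K)               # arm success probabilities
    mu[rng.integers(K)] = 0.9                        # one randomly chosen best arm
    return mu

def ucb_episode(mu, horizon=100, c=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    K = len(mu)
    counts, sums, total = np.zeros(K), np.zeros(K), 0.0
    for t in range(horizon):
        if t < K:
            arm = t                                  # pull each arm once first
        else:
            ucb = sums / counts + c * np.sqrt(np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))
        r = float(rng.random() < mu[arm])            # Bernoulli reward
        counts[arm] += 1; sums[arm] += r; total += r
    return total

rng = np.random.default_rng(0)
returns = [ucb_episode(sample_bandit_task(10, rng), rng=rng) for _ in range(200)]
print(f"average episodic return over 200 sampled tasks: {np.mean(returns):.1f}")
```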
C.4 MAZE3D ENVIRONMENT The Maze 3D environment (Figure 15) is a continuous maze problem implemented using gymminiworld (Chevalier-Boisvert, 2018), with 3 discrete actions (forward, left, right) where the objective is to reach one of the two possible goals, resulting in a reward of +1 (resp. −1) when the correct (resp. wrong) goal is reached. If a box is touched, the episode ends. The maze’s axis range from -40 to 40, the two turn actions (left, right) modify the angle by 45 degrees, and the forward action is a 5 length move. The agent starts in a random position with a random orientation. The information about which goal to reach at each episode is encoded by the use of two different textures on the wall located on the opposite side of the boxes. In this way, the agent cannot simultaneously observe both boxes and the “informative” wall. This environment allows to evaluate the models in a setting where the observation is a high dimensional space (3x60x60 RGB image). The mapping between the RGB image and the task target in {−1, 1} is challenging and the informed policy should provide better auxiliary task targets than TI thanks to the “easy” training of the informed policy. IMPORT outperforms TI on this environment (Figure 16) in both final performance and sample efficiency. C.5 TABULAR MDPS Tabular MDP (Duan et al., 2016) is a MDP with S discrete states andA actions such that the transition matrix is sampled from a flat Dirichlet distribution, and the reward function is sampled from a uniform distribution in [0, 1]. The task identifier µ is a concatenation of the transition and reward functions resulting in a vector of size S2A+ SA, allowing to test the models with high-dimensional µ. IMPORT outperforms all baselines in all settings (Figure 17 and Table 2). D IMPACT OF THE β HYPERPARAMETER We study the sensibility of the β parameter on IMPORT. Figure 18 clearly shows the benefits of using the auxiliary objective. On all but the Tabular-MDP environments, the recurrent policy successfully leverages the auxiliary objective to improve both sample efficiency and final performance for Acrobot.
1. What is the main contribution of the proposed method in utilizing privileged information? 2. How does the proposed method compare to other task embedding methods, such as Pearl and MANGA? 3. What are the limitations of using an RNN for encoding the task descriptor? 4. How might the choice of embedding architecture impact the training process? 5. Are there any concerns regarding the consistency between the paper's claims and the experimental results?
Review
Review Summary
When the task descriptor is available as privileged information, the authors propose a novel method to learn a policy that can benefit from this privileged information. The learning remains reward-driven and yet can make use of the privileged information for efficient exploration. The advantage of the proposed method is verified in the experiments.
Comments on the paper
I think the authors show an advantage of the proposed method in some experiments, but I'd like to request the following things to make the paper more convincing.
- Because the proposed method needs the task descriptor, it would be good to explain what kinds of tasks the proposed method can be applied to. The wider the applicability of the proposed method, the more valuable it would be.
- In the experiments, the authors compare with TS, TI and AuxTask, but I would like to see a comparison with another task embedding method, such as PEARL: K. Rakelly, et al., "Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables," in ICML, 2019.
- I suspect the RNN makes the training of the exploitation policy more difficult because its latent code changes dynamically, especially when the state space is large, as in the Maze3D environment. On the other hand, the task descriptor does not change, so the RNN may sometimes make training difficult. Other embedding architectures could be used for f_H, such as the one used in PEARL.
- I would also like to note that when the parameters of the dynamics are used as the task descriptor, the method becomes similar to Homanga Bharadhwaj et al., "MANGA: Method Agnostic Neural-policy Generalization and Adaptation", in ICRA 2020.
Update
Thank you for the comments, but there is a misunderstanding. MANGA as well as PEARL are online methods: they only need the data observed during the episode and can encode it in an online manner. I think it is not evident whether IMPORT performs better than MANGA or PEARL. I agree that the RNN is general, but on the other hand, I am afraid that the internal state of the RNN may not converge and usually fluctuates from time to time, so it may be difficult to obtain a persistent policy during an episode in the same environment. I would like to encourage the authors to perform more convincing experiments and make the claims of the paper consistent with the experimental findings.
ICLR
Title Meta-Reinforcement Learning With Informed Policy Regularization Abstract Meta-reinforcement learning aims at finding a policy able to generalize to new environments. When facing a new environment, this policy must explore to identify its particular characteristics and then exploit this information for collecting reward. We consider the online adaptation setting where the agent needs to trade-off between the two types of behaviour within the same episode. Even though policies based on recurrent neural networks can be used in this setting by training them on multiple environments, they often fail to model this trade-off, or solve it at a very high computational cost. In this paper, we propose a new algorithm that uses privileged information in the form of a task descriptor at train time to improve the learning of recurrent policies. Our method learns an informed policy (i.e., a policy receiving as input the description of the current task) that is used to both construct task embeddings from the descriptors, and to regularize the training of the recurrent policy through parameters sharing and an auxiliary objective. This approach significantly reduces the learning sample complexity without altering the representational power of RNNs, by focusing on the relevant characteristics of the task, and by exploiting them efficiently. We evaluate our algorithm in a variety of environments that require sophisticated exploration/exploitation strategies and show that it outperforms vanilla RNNs, Thompson sampling and the task-inference approaches to meta-reinforcement learning. 1 INTRODUCTION Deep Reinforcement Learning has been used to successfully train agents on a range of challenging environments such as Atari games (Mnih et al., 2013; Bellemare et al., 2013; Hessel et al., 2017) or continuous control (Peng et al., 2017; Schulman et al., 2017). Nonetheless, in these problems, RL agents perform exploration strategies to discover the environment and implement algorithms to learn a policy that is tailored to solving a single task. Whenever the task changes, RL agents generalize poorly and the whole process of exploration and learning restarts from scratch. On the other hand, we expect an intelligent agent to fully master a problem when it is able to generalize from a few instances (tasks) and achieve the objective of the problem under many variations of the environment. For instance, children know how to ride a bike (i.e., the problem) when they can reach their destination irrespective of the specific bike they are riding, which requires to adapt to the weight of the bike, the friction of the brakes and tires, and the road conditions (i.e., the tasks). How to enable agents to generalize across tasks has been studied in Multi-task Reinforcement Learning (e.g. Wilson et al., 2007; Teh et al., 2017), Transfer Learning (e.g. Taylor & Stone, 2011; Lazaric, 2012) and Meta-Reinforcement Learning (Finn et al., 2017; Hausman et al., 2018; Rakelly et al., 2019; Humplik et al., 2019). These works fall into two categories. Learning to learn approaches aim at speeding up learning on new tasks, by pre-training feature extractors or learning good initializations of policy weights (Raghu et al., 2019). In contrast, we study in this paper the online adaptation setting where a single policy is trained for a fixed family of tasks. 
When facing a new task, the policy must then balance exploration (or probing), to reduce the uncertainty about the current task, and exploitation to maximize the cumulative reward of the task. Agents are evaluated on their ability to manage this trade-off within a single episode of the same task. The online adaptation setting is a special case of a partially observable Markov decision process, where the unobserved variables are the descriptors of the current task. It is thus possible to rely on recurrent neural networks (RNNs) (Bakker, 2001; Heess et al., 2015), since they can theoretically represent optimal policies in POMDPs if given enough capacity. Unfortunately, the training of RNN policies often has prohibitive sample complexity and it may converge to suboptimal local minima. To overcome this drawback, efficient online adaptation methods leverage the knowledge of the task at training time. The main approach is to pair an exploration strategy with the training of informed policies, i.e. policies taking the description of the current task as input. Probe-then-Exploit (PTE) algorithms (e.g. Zhou et al., 2019) operate in two stages. They first rely on an exploration policy to identify the task. Then, they commit to the identified task by playing the associated informed policy. Thompson Sampling (TS) approaches (Thompson, 1933; Osband et al., 2016; 2019) maintain a distribution over plausible tasks and play the informed policy of a task sampled from the posterior following a predefined schedule. PTE and TS are expected to be sample-efficient relative to RNNs, as learning informed policies is a fully observable problem. However, as we discuss in Section 3, PTE and TS cannot represent effective exploration/exploitation policies in many environments. Humplik et al. (2019) proposed an alternative approach, Task Inference (TI), which trains a full RNN policy with the current task prediction as an auxiliary loss. TI avoids the suboptimality of PTE/TS by not constraining the structure of the exploration/exploitation policy. However, in TI, the task descriptors are used as targets and not as inputs, so TI focuses on reconstructing even irrelevant features of the task descriptor and it does not leverage the faster learning of informed policies. In this paper, we introduce IMPORT (InforMed POlicy RegularizaTion), a novel policy architecture for efficient online adaptation that combines the rich expressivity of RNNs with the efficient learning of informed policies. At train time, a shared policy head receives as input the current observation, together with either a (learned) embedding of the current task or the hidden state of an RNN, such that the informed policy and the RNN policy are learned simultaneously. At test time, the hidden state of the RNN replaces the task embedding, and the agent acts without having access to the current task. This leads to several advantages: 1) IMPORT benefits from the informed policy to speed up learning; 2) it avoids reconstructing features of the task descriptor that are irrelevant for learning; and, as a consequence, 3) it adapts faster to unknown environments, showing better generalization capabilities. We evaluate IMPORT against the main approaches to online adaptation on environments that require sophisticated exploration/exploitation strategies. We confirm that TS suffers from its limited expressivity, and show that the policy regularization of IMPORT significantly speeds up learning compared to TI.
Moreover, the learnt task embeddings of IMPORT make it robust to irrelevant or minimally informative task descriptors, and able to generalize when learning on few training tasks. 2 SETTING Let M be the space of possible tasks. Each µ ∈ M is associated with an episodic µ-MDP Mµ = (S, A, pµ, rµ, γ) whose dynamics pµ and rewards rµ are task dependent, while state and action spaces are shared across tasks and γ is the discount factor. The descriptor µ can be a simple id (µ ∈ N) or a set of parameters (µ ∈ R^d). When the reward function and the transition probabilities are unknown, RL agents need to devise a strategy that balances exploration to gather information about the system and exploitation to maximize the cumulative reward. Such a strategy can be defined as the solution of a partially observable MDP (POMDP), where the hidden variable is the descriptor µ of the MDP. Given a trajectory τt = (s1, a1, r1, . . . , st−1, at−1, rt−1, st), a POMDP policy π(at|τt) maps the trajectory to actions. In particular, the optimal policy in a POMDP is a history-dependent policy that uses τt to construct a belief state bt, which describes the uncertainty about the task at hand, and then maps it to the action that maximizes the expected sum of rewards (e.g. Kaelbling et al., 1998). In this case, maximizing the rewards may require taking explorative actions that improve the belief state enough so that future actions are more effective in collecting reward. The task is sampled at the beginning of an episode from a distribution q(µ). After training, the agent returns a policy π(at|τt) that aims at maximizing the average performance across tasks generated from q, i.e.,

$\mathbb{E}_{\mu \sim q(\mu)}\Big[\sum_{t=1}^{|\tau|} \gamma^{t-1} r_t^{\mu} \,\Big|\, \pi\Big]$, (1)

where the expectation is taken over a full-episode trajectory τ and the task distribution q, and |τ| is the length of the trajectory. The objective is then to find an architecture for π that is able to express strategies that perform the best according to Eq. 1 and, at the same time, can be efficiently learned even for moderately short training phases. At training time, we assume the agent has unrestricted access to the task descriptor µ. Access to such a task descriptor during training is a common assumption in the multi-task literature and captures a large variety of concrete problems. It can be of two types: i) a vector of features corresponding to (physical) parameters of the environment/agent (for instance, such features may be available in robotics, or when learning on a simulator) (Yu et al., 2018; Mehta et al., 2019; Tobin et al., 2017); ii) a single task identifier (i.e., an integer), which is a less restrictive assumption (Choi et al., 2001; Humplik et al., 2019) and corresponds to different concrete problems: learning in a set of M training levels in a video game, learning to control M different robots, or learning to interact with M different users. 3 RELATED WORK AND CONTRIBUTIONS In this section, we review how the online adaptation setting has been tackled in the literature. The main approaches are depicted in Fig. 2. We first compare the different methods in terms of expressiveness, and whether they leverage the efficient learning of informed policies. We then discuss learning task embeddings and how the various methods deal with unknown or irrelevant task descriptors. The last subsection summarizes our contributions. Evaluation of RL agents in Meta-Reinforcement Learning.
The online adaptation evaluation setting is standard in the meta-RL literature (Yu et al., 2017; Humplik et al., 2019), but it is not the only way to evaluate agents on unseen tasks. Indeed, several works have considered that, given a new task, an agent is granted a number of "free" interaction episodes or steps to perform system identification, and is then evaluated on the cumulative reward over one (Bharadhwaj et al., 2019; Rakelly et al., 2019) or several execution episodes (Liu et al., 2020). This is different from what we study here, where the agent has to identify the task and solve it within one episode, with its reward accounted for during all of these steps. Online Adaptation with Deep RL. In the previous section we mentioned that the best strategy corresponds to the optimal policy of the associated POMDP. Since the belief state bt is a sufficient statistic of the history τt, POMDP policies take the form π(at|τt) = π(at|st, bt). While it is impractical to compute the exact belief state even for toy discrete problems, approximations can be learnt using Recurrent Neural Networks (RNNs) (Bakker, 2001; Heess et al., 2015). RNN-based policies are trained to maximize the cumulative reward and do not leverage task descriptors at train time. While this class of policies can represent rich exploratory strategies, their large training complexity makes them impractical. In order to reduce the training complexity of RNN policies, existing strategies have constrained the set of possible exploratory behaviors by leveraging privileged information about the task. Probe-Then-Exploit (PTE) (e.g. Zhou et al., 2019) works in two phases. First, it executes a pure exploratory policy with the objective of identifying the underlying task µ, i.e. maximizing the likelihood of the task; then it runs the optimal policy associated with the estimated task. Both the probing and the informed policies are learned using task descriptors, leading to a much more efficient training process. PTE has two main limitations. First, similarly to explore-then-commit approaches in bandits (e.g. Garivier et al., 2016), the exploration can be suboptimal because it is not reward-driven: valuable time is wasted estimating unnecessary information. Second, the switch between probing and exploiting is hard to tune and problem-dependent. Thompson Sampling (TS) (Thompson, 1933) leverages randomization to mix exploration and exploitation. Similarly to the belief state of an RNN policy, TS maintains a distribution over task descriptors that represents the uncertainty on the current task given τt. The policy samples a task from the posterior and executes the corresponding informed policy for several steps. Training is limited to learning informed policies together with a maximum likelihood estimator that maps trajectories to distributions over tasks. This strategy proved successful in a variety of problems (e.g. Chapelle & Li, 2011; Osband & Roy, 2017). However, as shown in Fig. 1, TS cannot represent certain probing policies because it is constrained to executing informed policies. Another drawback of TS approaches is that the re-sampling frequency needs to be carefully tuned. The Task Inference (TI) approach (Humplik et al., 2019) is an RNN trained to simultaneously learn a good policy and predict the task descriptor µ.
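Before TI is detailed below, here is a rough, hypothetical sketch of the TS acting scheme just described; the `posterior` and `informed_policy` objects, the gym-style `env` interface, and the re-sampling period are all assumptions made for illustration, not the paper's implementation.

```python
# Sketch of a Thompson-Sampling-style acting loop: a posterior over task descriptors
# is maintained from the observed history, a candidate task mu_hat is re-sampled every
# `resample_every` steps, and the informed policy acts as if mu_hat were the true task.
def thompson_sampling_episode(env, posterior, informed_policy, horizon=100, resample_every=5):
    history, total_reward = [], 0.0
    obs = env.reset()
    mu_hat = posterior.sample(history)            # initial guess about the task
    for t in range(horizon):
        if t > 0 and t % resample_every == 0:     # re-sampling schedule (needs tuning)
            mu_hat = posterior.sample(history)
        action = informed_policy(obs, mu_hat)     # informed policy conditioned on the sampled task
        next_obs, reward, done, info = env.step(action)
        history.append((obs, action, reward))
        total_reward += reward
        obs = next_obs
        if done:
            break
    return total_reward
```

This makes the limitation discussed above explicit: every action is chosen by an informed policy for some sampled task, so probing behaviours that no informed policy would produce cannot be represented.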
Denoting by m : H → Z the mapping from histories to a latent representation of the belief state (Z ⊆ R^d), the policy π(at|zt) selects the action based on the representation zt = m(τt) constructed by the RNN. During training, zt is also used to predict the task descriptor µ, using the task-identification module g : Z → M. The overall objective is:

$\mathbb{E}\Big[\sum_{t=1}^{|\tau|} \gamma^{t-1} r_t^{\mu} \,\Big|\, \pi\Big] + \beta\, \mathbb{E}\Big[\sum_{t=1}^{|\tau|} \ell(\mu, g(z_t)) \,\Big|\, \pi\Big]$ (2)

where ℓ(µ, g(zt)) is the log-likelihood of µ under the distribution g(zt). The auxiliary loss is meant to structure the memory of the RNN m rather than be an additional reward for the policy, so training is done by ignoring the effect of m on π when computing the gradient of the auxiliary loss with respect to m. Humplik et al. (2019) proposed two variants, AuxTask and TI, described in Fig. 2 (b) and (c). In TI, the gradient of the policy sub-network is not backpropagated through the RNN (the dashed green arrow in Fig. 2c), and the policy sub-network receives the original state features as additional input. For both AuxTask and TI, the training of π is purely reward-driven, so they do not suffer from the suboptimality of PTE/TS. However, in contrast to PTE/TS, they do not leverage the smaller sample complexity of training informed policies, and the auxiliary loss is defined over the whole value of µ while only some dimensions may be relevant to solve the task. Learning Task Embeddings. While in principle the minimal requirement for the approaches above is access to task identifiers, i.e. one-hot encodings of the task, these approaches are sensitive to the encoding of task descriptors and to prior knowledge about them. In particular, irrelevant variables have a significant impact on PTE approaches since the probing policy aims at identifying the task. For instance, an agent might waste time reconstructing the full µ when only part of µ is needed to act optimally w.r.t. the reward. Moreover, TS, TI and AuxTask are guided by a prior distribution over µ that has to be chosen by hand to fit the ground-truth distribution of tasks. Rakelly et al. (2019) proposed to use a factored Gaussian distribution over transitions as a task embedding architecture rather than an RNN. Several approaches have been proposed to learn task embeddings (Gupta et al., 2018; Rakelly et al., 2019; Zintgraf et al., 2019; Hausman et al., 2018). The usual approach is to train embeddings of task identifiers jointly with the policies. Humplik et al. (2019) mention using TI with task embeddings, but the embeddings are pre-trained separately, which requires either additional interactions with the environment or expert traces. Nonetheless, we show in our experiments that TI can be used with task descriptors, considering task prediction as a multiclass classification problem. Summary of the contributions. As for RNN/TI, IMPORT learns an RNN policy to maximize cumulative reward, with no decoupling between probing and exploitation. As such, our approach does not suffer from the scheduling difficulties intrinsic to PTE/TS approaches. On the other hand, similarly to PTE/TS and contrary to RNN/TI, IMPORT leverages the fast training of informed policies through the joint training of an RNN and an informed policy. In addition, IMPORT does not rely on probabilistic models of task descriptors.
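For concreteness, a hedged PyTorch-style sketch of the TI auxiliary term in Eq. 2; the Gaussian task head, variable names, and the way it is combined with the policy loss are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class TaskInferenceHead(nn.Module):
    """Gaussian head g(z) predicting the task descriptor mu from the RNN state z."""
    def __init__(self, hidden_size, mu_dim):
        super().__init__()
        self.mean = nn.Linear(hidden_size, mu_dim)
        self.log_std = nn.Linear(hidden_size, mu_dim)

    def nll(self, z, mu):
        # negative log-likelihood of the true descriptor mu under g(z)
        dist = torch.distributions.Normal(self.mean(z), self.log_std(z).exp())
        return -dist.log_prob(mu).sum(-1)

# inside a training step, with z_t produced by the history RNN and beta > 0:
#   aux_loss = ti_head.nll(z_t, mu_true).mean()   # structures the RNN memory, not an extra reward
#   total_loss = policy_loss + beta * aux_loss
```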
Learning task embeddings makes the approach robust to irrelevant task descriptors (contrary to TI), makes IMPORT applicable when only task identifiers are available, and allows it to generalize better when few training tasks are available.

Algorithm 1 IMPORT Training
Initialize σ, ω, θ randomly
for k = 1, . . . , K do
  if k is odd then
    Collect M transitions following πH
    Update ω, θ and the parameters of the value function of (A) based on objective (A) + (C)
  else
    Collect M transitions following πµ
    Update σ, θ and the parameters of the value function of (B) based on objective (B) + (C)
  end if
end for

4 METHOD In this section, we describe the main components of the IMPORT model (described in Fig. 2), as well as the online optimization procedure and an additional auxiliary loss to further speed up learning. Our approach leverages the knowledge of the task descriptor µ and informed policies to construct a latent representation of the task that is purely reward-driven. Since µ is unknown at test time, we use this informed representation to train a predictor based on a recurrent neural network. To leverage the efficiency of informed policies even in this phase, we propose an architecture sharing parameters between the informed policy and the final policy, such that the final policy benefits from parameters learned with privileged information. The idea is to constrain the final policy to stay close to the informed policy while allowing it to perform probing actions when needed to effectively reduce the uncertainty about the task. We call this approach InforMed POlicy RegularizaTion (IMPORT). Formally, we denote by πµ(at|st, µ) and πH(at|τt) the informed policy and the history-dependent (RNN) policy that is used at test time. The informed policy πµ = φ ◦ fµ is the functional composition of fµ and φ, where fµ : M → Z projects µ into a latent space Z ⊆ R^k and φ : S × Z → A selects the action based on the latent representation. The idea is that fµ(µ) captures the relevant information contained in µ while ignoring dimensions that are not relevant for learning the optimal policy. This behavior is obtained by training πµ directly to maximize the task reward rµ. While πµ leverages the knowledge of µ at training time, πH acts based on the sole history. To encourage πH to behave like the informed policy while preserving the ability to probe, πH and πµ share φ, the mapping from latent representations to actions. We thus define πH = φ ◦ fH, where fH : H → Z encodes the history into the latent space. By sharing the policy head φ, the approximate belief state constructed by the RNN is mapped to the same latent space as µ. When the uncertainty about the task is small, πH then benefits from the joint training with πµ. More precisely, let θ, ω, σ be the parameters of φ, fH and fµ respectively, so that π^{σ,θ}_µ(at|st, µ) = φθ ◦ f^σ_µ = φθ(at|st, f^σ_µ(µ)) and π^{ω,θ}_H(at|τt) = φθ ◦ f^ω_H = φθ(at|st, f^ω_H(τt)). The goal of IMPORT is to maximize over θ, ω, σ the objective function defined in Eq. 3:

$\underbrace{\mathbb{E}\Big[\sum_{t=1}^{|\tau|} \gamma^{t-1} r_t^{\mu} \,\Big|\, \pi_H^{\omega,\theta}\Big]}_{(A)} + \underbrace{\mathbb{E}\Big[\sum_{t=1}^{|\tau|} \gamma^{t-1} r_t^{\mu} \,\Big|\, \pi_{\mu}^{\sigma,\theta}\Big]}_{(B)} - \beta\,\underbrace{\mathbb{E}\Big[\sum_{t=1}^{|\tau|} D\big(f_{\mu}(\mu), f_H(\tau_t)\big)\Big]}_{(C)}$ (3)

Speeding Up the Learning. The optimization of (B) in Eq. 3 produces a reward-driven latent representation of the task through fµ. In order to encourage the history-based policy to predict a task embedding close to the one predicted by the informed policy, we augment the objective with an auxiliary loss (C) weighted by β > 0. D is the squared 2-norm in our experiments.
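As a rough illustration of the architecture and objective just defined (Eq. 3 and Algorithm 1), here is a minimal PyTorch-style sketch; module names, sizes, and interfaces are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ImportPolicy(nn.Module):
    def __init__(self, state_dim, mu_dim, n_actions, z_dim=16, hidden=64):
        super().__init__()
        self.f_mu = nn.Sequential(nn.Linear(mu_dim, z_dim), nn.Tanh())     # informed task embedding
        self.f_H = nn.GRU(state_dim + n_actions, z_dim, batch_first=True)  # history embedding
        self.phi = nn.Sequential(                                           # shared policy head
            nn.Linear(state_dim + z_dim, hidden), nn.Tanh(), nn.Linear(hidden, n_actions))

    def informed_logits(self, state, mu):            # pi_mu = phi o f_mu, term (B)
        return self.phi(torch.cat([state, self.f_mu(mu)], dim=-1))

    def history_logits(self, state, traj):           # pi_H = phi o f_H, term (A)
        z, _ = self.f_H(traj)                        # traj: (batch, T, state_dim + n_actions)
        z_last = z[:, -1]
        return self.phi(torch.cat([state, z_last], dim=-1)), z_last

    def aux_distance(self, z_hist, mu):              # term (C): squared L2 between embeddings
        # f_mu(mu) is detached so that (C) only shapes the history encoder f_H
        return ((self.f_mu(mu).detach() - z_hist) ** 2).sum(-1)
```

In training, batches collected with πH would feed terms (A) and (C), and batches collected with πµ would feed terms (B) and (C), following the alternation of Algorithm 1.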
Note that because we treat the objective (C) as an auxiliary loss, only the average gradient of D with respect to fH is backpropagated, ignoring the effect of fH on πH. The expectation of (C) is optimized over trajectories generated using π^{ω,θ}_H and π^{σ,θ}_µ, respectively used to compute (A) and (B). Optimization. IMPORT is trained using Advantage Actor Critic (A2C) (Mnih et al., 2016) with generalized advantage estimation (GAE) (Schulman et al., 2015). There are two value functions, one for each objective (A) and (B) (in our implementation, the value network is shared and takes as input either fµ(µ) or fH(τt)). The algorithm is summarized in Alg. 1. Each iteration collects a batch of M transitions using either πH or πµ (in practice, data collection is multithreaded; we collect 20 transitions per thread, with 24 to 64 threads depending on the environment, based on available GPU memory). If the batch is sampled according to πH, we update with A2C-GAE the parameters of the policy ω and θ according to both objectives (A) and (C), as well as the parameters of the value function associated with objective (A). If the batch is sampled according to πµ, we update with A2C-GAE the parameters of the policy σ and θ according to both objectives (B) and (C), as well as the parameters of the value function associated with objective (B). 5 EXPERIMENTS We performed experiments on five environments. The CartPole and Acrobot environments from OpenAI Gym (Brockman et al., 2016), where the task descriptor µ represents parameters of the physical system, e.g., the weight of the cart, the size of the pole, etc. The dimension of µ is 5 for CartPole and 7 for Acrobot. The entries of µ are normalized in [−1, 1] and sampled uniformly. These environments provide basic comparison points where the optimal exploration/exploitation policy is relatively straightforward, since the dynamics can be inferred from a few actions. The Bandit environment is a standard Bernoulli multi-armed bandit problem with K arms. The vector µ ∈ R^K denotes the probabilities of success of the independent Bernoulli distributions. Each dimension of µ is sampled uniformly between 0 and 0.5, and the best arm is randomly selected and associated with a probability of 0.9. An episode is 100 arm pulls. At every timestep the agent is allowed to pull an arm in [1, K] and observes the resulting reward. Although relatively simple, this environment assesses the ability of algorithms to learn nontrivial probing/exploitation strategies. The Tabular MDP environment is a finite MDP with S states and A actions such that the transition matrix is sampled from a flat Dirichlet distribution, and the reward function is sampled from a uniform distribution in [0, 1], as in Duan et al. (2016). In that case, µ is the concatenation of the transition and reward functions, resulting in a vector of size S^2A + SA. This environment is much more challenging as µ is high-dimensional, there is nearly complete uncertainty on the task at hand, and each task is a reinforcement learning problem. Finally, the Maze 3D environment is a 3D version of the toy problem depicted in Fig. 1, implemented using gym-miniworld (Chevalier-Boisvert, 2018). It has three discrete actions (forward, left, right) and the objective is to reach one of the two possible goals (see Figure 15 in appendix), resulting in a reward of +1 (resp. −1) when the correct (resp. wrong) goal is reached. The episode terminates when the agent touches a box or after 100 steps.
The agent always starts at a random position, with a random orientation. The information about which goal to reach at each episode is encoded by the use of two different textures on the wall located at the opposite side of the maze w.r.t. the goals. This domain allows to evaluate the models when observations are high dimensional (3 × 60 × 60 RGB images). The maximum episode length is 100 on CartPole, Bandit, Tabular-MDP and Maze3D, and 500 on Acrobot. To evaluate the ability of IMPORT and the baselines to deal with different types of task descriptors µ, we also perform experiments on CartPole and Tabular-MDP in the setting where µ is only a task identifier (i.e., a one-hot vector representing the index of the training task) which is a very weak supervision available at train time. We compare to previously discussed baselines. First, a vanilla RNN policy (Heess et al., 2015) using GRUs that never uses µ. Second, we compare to TS, TI and AuxTask, with µ only observed at train time, similarly to IMPORT. For TS, at train time, the policy conditions on the true µ, whereas at test time, the policy conditions on an estimated µ̂ resampled from the posterior every k steps where k ∈ {1, 5, 10, 20}. On bandits, UCB (Auer, 2002) with tuned exploration parameters is our topline. Implementation details Contrarily to IMPORT, TS, TI and AuxTask are based on maximizing the log-likelihood of µ. When using informative task descriptors (i.e. a vector of real values), the log-likelihood uses a Gaussian distribution with learnt mean and diagonal covariance matrix. For the bandit setting, we have also performed experiments using a beta distribution which may be more relevant for this type of problem. When using task identifiers, a multinomial distribution is used. All approaches are trained using A2C with Generalized Advantage Estimation (Mnih et al., 2016; Schulman et al., 2015). The precise values of the hyper-parameters and architectures are given in Appendix B.2. All approaches use similar network architectures with the same number of hidden layers and units. Evaluation The meta-learning scenario is implemented by sampling N training tasks, N validation tasks and 10, 000 test tasks with no overlap between task sets (except in Maze3D where there is only two possible tasks). Each sampled training task is given a unique identifier. Each model is trained on the training tasks, and the best model is selected on the validation tasks. We report the performance on the test tasks, averaged over three trials with different random seeds, corresponding to different sets of train/validation/test tasks. Training uses a discount factor, but for validation and test, we compute the undiscounted cumulative reward on the validation/test tasks. The learning curves show test reward as a function of the environment steps. They are the average of the three curves associated to the best validation model of each of the three seeds used to generate different tasks sets. Overall performances. IMPORT performs better than its competitors in almost all the settings. For instance, on CartPole with 10 tasks (see Table 1), our model reaches 94.4 reward while TI reaches only 91.5. Qualitatively similar results are found on Acrobot (Table 5 in Appendix), as well as on Bandit with 20 arms (Table 3), even though AuxTask performs best with only 10 arms. IMPORT particularly shines when µ encodes complex information, as on Tabular-MDP (see Table 2) where it outperforms all baselines in all settings. 
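A compact sketch of the evaluation protocol described above; `sample_task`, `train_agent`, and `run_episode` are hypothetical placeholders, and the numbers simply mirror the ones stated in the text.

```python
import numpy as np

def evaluate_protocol(sample_task, train_agent, run_episode,
                      n_train=100, n_test=10_000, seeds=(0, 1, 2)):
    scores = []
    for seed in seeds:                                       # three trials with different task sets
        rng = np.random.default_rng(seed)
        train_tasks = [sample_task(rng) for _ in range(n_train)]
        valid_tasks = [sample_task(rng) for _ in range(n_train)]
        test_tasks = [sample_task(rng) for _ in range(n_test)]
        best_agent = train_agent(train_tasks, valid_tasks)   # model selection on validation tasks
        scores.append(np.mean([run_episode(best_agent, mu)   # undiscounted cumulative test reward
                               for mu in test_tasks]))
    return float(np.mean(scores))
```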
By varying the number of training tasks on CartPole and Acrobot, we also show that IMPORT’s advantage over the baselines is larger with fewer training tasks. In all our experiments, as expected, the vanilla RNN performs worse than the other algorithms. Sample Efficiency. Figure 5 shows the convergence curves on CartPole with 10 and 100 training tasks, which are representative of what we obtain on other environments (see Appendix). IMPORT tends to converge faster than the baselines. We also observe a positive effect of using the auxiliary loss (β > 0) on sample efficiency, in particular with few training tasks. Note that using the auxiliary loss is particularly efficient in environments where the final policy tends to behave like the informed one. Influence of µ. The experiments with uninformative µ (i.e., task identifiers) reported in Tables 1 and 2 for CartPole and Tabular-MDP respectively show that the methods are effective even when the task descriptors do not include any prior knowledge. In both cases, IMPORT can use these task descriptors to generalize well. Moreover, experimental results on CartPole (Fig. 11) and Tabular MDP (Fig. 17) suggest that when µ is a vector of features (and not only a task identifier), it improves sample efficiency but does not change the final performance. This can be explained by the fact that informed policies are faster to learn with features in µ since, in that case, µ captures similarities between tasks. Equivalent performance of IMPORT on both types of task descriptors is observed and shows that our method can deal with different (rich and weak) task descriptors. We further analyze the impact of the encoding of µ on the models, by using non-linear projections of the informative µ to change the shape of the prior knowledge. Figure 5c shows the learning curves of TI and IMPORT on CartPole with task identifiers, the original µ, and polynomial expansions of µ of order 2 and 3, resulting in 21 and 56 features. IMPORT’s task embedding approach is robust to the encoding of µ, while TI’s log-likelihood approach underperforms with the polynomial transformation. Task embeddings. To have a qualitative assessment of the task embedding learnt by IMPORT, we consider a bandit problem with 10 arms and embedding dimension 16. Figure 6 shows the clusters of task embeddings obtained with t-SNE (Maaten & Hinton, 2008). Each cluster maps to an optimal arm, showing that IMPORT structures the embedding space based on the relevant information.

Table 3: Bandits performance for K = 10 and K = 20 arms, with N = 100 training tasks.
Method               K = 10        K = 20
IMPORT               77.5 (0.2)    56.6 (0.1)
AuxTask (Gaussian)   78.7 (0.4)    50.5 (1.6)
AuxTask (Beta)       78.2 (0.7)    37.1 (0.6)
RNN                  73.6 (0.7)    32.1 (1.2)
TI (Gaussian)        73.7 (1.6)    41.4 (2.4)
TI (Beta)            79.5 (0.1)    53.3 (2.4)
TS (Gaussian)        50.4 (0.4)    38.8 (2.0)
TS (Beta)            41.3 (1.5)    36.3 (1.1)
UCB                  78.5 (0.3)    68.2 (0.4)

Figure 6: Task embeddings learnt on Bandit (10 arms). Colors indicate the best arm.

In addition, we have studied the influence of the β hyperparameter from Eq. 3 (in Fig. 4 and Section D). It shows that the auxiliary loss helps to speed up the learning process, but is not necessary to achieve great performance. High dimensional input space. We show the learning curves on the Maze3D environment in Figure 5d. IMPORT succeeds in 90% of cases (reward ≈ 0.8), while TI succeeds only in 70% of cases. This shows that IMPORT is even more effective with high-dimensional observations (here, pixels).
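Relating to the task-embedding analysis of Figure 6 above, a plausible sketch of such a visualization (assumed, not the authors' script): the learned embeddings fµ(µ) of bandit tasks are projected to 2D with t-SNE and colored by the index of the optimal arm.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_task_embeddings(embeddings, best_arms):
    # embeddings: (n_tasks, 16) array of f_mu(mu); best_arms: (n_tasks,) optimal-arm indices
    points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)
    plt.scatter(points[:, 0], points[:, 1], c=best_arms, cmap="tab10", s=10)
    plt.title("Task embeddings on Bandit (colored by best arm)")
    plt.show()
```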
IMPORT and TI benefit from knowing µ at train time, which allows them to rapidly identify that the wall texture behind the agent is informative, while the vanilla RNN struggles and reaches random goals. TS is not reported since this environment is a typical failure case, as discussed in Fig. 1. Additional results. In Appendix C.1, we show that IMPORT outperforms TI by a larger margin when the task embedding dimension is small. We also show that IMPORT outperforms its competitors in dynamic environments, i.e., when the task changes during the episode. 6 CONCLUSION We proposed a new policy architecture for meta-reinforcement learning. The IMPORT model is trained only on the reward objective, and leverages the informed policy to discover effective trade-offs between exploration and exploitation. It is thus able to learn better strategies than Thompson Sampling approaches, and faster than recurrent neural network policies and Task Inference approaches. A THE IMPORT ALGORITHM The algorithm is described in detail in Algorithm 2. In our implementation, the value function network used for (A) and (B) is the same, i.e. shared. We specialize the input: for (A) the input is (st, fH(τt)) and for (B) it is (st, fµ(µt)).

Algorithm 2 Details of IMPORT Training
Initialize σ, ω, θ, ν arbitrarily
Hyperparameters: number of iterations K, number of transitions per update step M, discount factor γ, GAE parameter γ_GAE, Adam learning rate η, weight of the (C) objective β, weight of the entropy objective λ_h, weight of the critic objective λ_c
Optim = Adam(η)
for k = 1, . . . , K do
  if k is odd then
    Collect M transitions according to πH in buffer B_H
  else
    Collect M transitions according to πµ in buffer B_µ
  end if
  δ_σ, δ_ω, δ_θ = 0, 0, 0
  R^µ ← compute_gae_returns(B_µ, γ_GAE)
  R^H ← compute_gae_returns(B_H, γ_GAE)
  δ_{θ,ω} += (1/|B_H|) Σ_{b∈B_H} Σ_{t=1..T} [R^{H,b}_t − V_ν(s^b_t, z^b_t)] ∇_{θ,ω} log πH(a^b_t | s^b_t, z^b_t)
  δ_{θ,ω} += (λ_h/|B_H|) Σ_{b∈B_H} Σ_{t=1..T} ∇_{θ,ω} H(πH(a^b_t | s^b_t, z^b_t))
  δ_ω −= (2β/|B_H|) Σ_{b∈B_H} Σ_{t=1..T} [f^ω_H(s^b_t, z^b_t) − fµ(s^b_t, µ^b_t)] ∇_ω f^ω_H(s^b_t, z^b_t)
  δ_ν −= (2λ_c/|B_H|) Σ_{b∈B_H} Σ_{t=1..T} [R^{H,b}_t − V_ν(s^b_t, z^b_t)] ∇_ν V_ν(s^b_t, z^b_t)
  δ_{θ,σ} += (1/|B_µ|) Σ_{b∈B_µ} Σ_{t=1..T} [R^{µ,b}_t − V_ν(s^b_t, µ^b_t)] ∇_{θ,σ} log πµ(a^b_t | s^b_t, µ^b_t)
  δ_{θ,σ} += (λ_h/|B_µ|) Σ_{b∈B_µ} Σ_{t=1..T} ∇_{θ,σ} H(πµ(a^b_t | s^b_t, µ^b_t))
  δ_ν −= (2λ_c/|B_µ|) Σ_{b∈B_µ} Σ_{t=1..T} [R^{µ,b}_t − V_ν(s^b_t, µ^b_t)] ∇_ν V_ν(s^b_t, µ^b_t)
  θ ← Optim(θ, δ_θ)
  ω ← Optim(ω, δ_ω)
  σ ← Optim(σ, δ_σ)
  ν ← Optim(ν, δ_ν)
end for

B IMPLEMENTATION DETAILS B.1 DATA COLLECTION AND OPTIMIZATION We focus on on-policy training, for which we use the actor-critic A2C algorithm (Mnih et al., 2016) with generalized advantage estimation. We use a distributed execution to accelerate experience collection. Several worker processes independently collect trajectories. As workers progress, a shared replay buffer is filled with trajectories, and an optimization step happens when the buffer’s capacity bs is reached. After model updates, the replay buffer is emptied and the parameters of all workers are updated to guarantee synchronisation. B.2 NETWORK ARCHITECTURES The architecture of the different methods remains the same in all our experiments, except that the number of hidden units changes across the considered environments and we use convolutional neural networks for the Maze3D environment. A description of the architectures of each method is given in Fig. 2. Unless otherwise specified, MLP blocks represent single linear layers activated with a tanh function and their output size is hs.
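Referring back to the data-collection scheme of Appendix B.1 above, a rough sketch of the collect/update cycle; the worker, buffer, and learner interfaces are hypothetical placeholders, not the paper's code.

```python
def collect_and_update(workers, buffer, learner, bs, n_updates):
    for _ in range(n_updates):
        while len(buffer) < bs:                              # workers fill the shared buffer
            for w in workers:
                buffer.extend(w.collect_trajectory())
        learner.update(buffer.data())                        # one optimization step per full buffer
        buffer.clear()                                       # buffer is emptied after the update
        for w in workers:
            w.sync_parameters(learner.parameters())          # keep workers synchronized with the learner
```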
All methods aggregate the trajectory into an embedding zt using a GRU with hidden size hs. Its input is the concatenation of representations of the last action at−1 and current state st obtained separately. Actions are encoded as one-hot vectors. When episodes begin, we initialize the last action with a vector of zeros. For bandits environments, the current state corresponds to the previous reward. TS uses the same GRU architecture to aggregate the history into zt. All methods use a softmax activation to obtain a probability distribution over actions. The use of the hidden-state zt differs across methods. While RNNs only use zt as an input to the policy and critic, both TS and TI map zt to a belief distribution that is problem-specific, e.g. Gaussian for control problems, Beta distribution for bandits, and a multinomial distribution for Maze and CartPole-task environments. For instance, zt is mapped to a Gaussian distribution by using two MLPs whose outputs of size |µ| correspond to the mean and variance. The variance values are mapped to [0, 1] using a sigmoid activation. IMPORT maps zt to an embedding fH , whereas the task embedding fµ is obtained by using a tanh-activated linear mapping of µt. Both embeddings have size hsµ, tuned by cross-validation onto a set of validation tasks. The input of the shared policy head φ is the embedding associated with the policy to use, i.e. either fH when using πH or fµ when using fµ. For the Maze3d experiment and in all methods, we pre-process the pixel input st with three convolutional layers (with output channels 32, stride is 2 and respective kernel sizes are 5, 5 and 4) and LeakyReLU activation. We also use a batch-norm after each convolutional layer. The output is flattened, linearly mapped to a vector of size hs and tanh-activated. C EXPERIMENTS In this section, we explain in deeper details the environments and the set of hyper-parameters we considered. We add learning curves of all experiments to supplement results from Table 1, 2, 3 and 5 in order to study sample efficiency. Task descriptor. Note that for CartPole and Acrobot µ is normalized to be in [−1, 1]D where D is the task descriptor dimension. The task distribution q is always uniform, see the description of the environments for details. For experiments with task identifiers, we associate to each sampled task an integer value corresponding to the order of generation, and encode it usong a one-hot vector. Hyperparameters. Hyperparameter ranges are specified in Table 4. For TS, we consider sampling µ from the posterior dynamics distribution every k steps with k ∈ {1, 5, 10, 20}. C.1 CARTPOLE. We consider the classic CartPole control environment where the environment dynamics change within a set M (|µ| = 5) described by the following physical variables: gravity, cart mass, pole mass, pole length, magnetic force. Their respective pre-normalized domains are [4.8, 14.8], [0.5, 1.5], [0.01, 0.19], [0.2, 0.8], and [−10, 10]. The value of µ are uniformly sampled. Knowing some components of µ might not be required to behave optimally. The discrete action space is {−1, 1}. Episode length is T = 100. Final performance and sample efficiency. Table 1 shows IMPORT’s performance is marginally superior to other methods in most settings. Learning curves in Figure 7 allow analyzing the sample efficiency of the different methods. Overall, IMPORT is more sample efficient than other methods in the privileged information µ setting. 
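For the Maze3D pixel pre-processing just described, a plausible PyTorch sketch; the 3×60×60 input size and hs = 64 are assumptions, and the ordering of batch-norm and activation is one possible reading of the text.

```python
import torch
import torch.nn as nn

def maze3d_encoder(hs=64):
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.BatchNorm2d(32), nn.LeakyReLU(),
        nn.Conv2d(32, 32, kernel_size=5, stride=2), nn.BatchNorm2d(32), nn.LeakyReLU(),
        nn.Conv2d(32, 32, kernel_size=4, stride=2), nn.BatchNorm2d(32), nn.LeakyReLU(),
        nn.Flatten(),
        nn.Linear(32 * 5 * 5, hs),   # 60 -> 28 -> 12 -> 5 spatial size with these kernels/strides
        nn.Tanh(),
    )

# e.g. maze3d_encoder()(torch.zeros(1, 3, 60, 60)).shape == torch.Size([1, 64])
```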
Moreover, the use of the auxiliary loss (β > 0) usually speed-up the learning convergence by enforcing the RNN to quickly produce a coherent embedding. We can see that only sharing parameters (β = 0) already helps improving over RNNs. Non-stationary environments. We consider the non-stationary version of CarPole environment where at each timestep, there is a probability ρ = 0.05 to sample a new dynamic µ. Table 8 shows that the performance of IMPORT, AuxTask and TI are comparable in these settings. Size of built embeddings. We now study the impact of the task embedding representation size. As can be seen from Figure 10, IMPORT’s performance remains stable for different representation sizes in {2, 4, 8, 16} whereas TI’s sample efficiency decreases with this dimension. Trajectory and task embeddings. In Figure 11, we plot both the evolution of fH(τt) during an episode of the final model obtained training IMPORT with two-dimensional task embeddings on CartPole with task identifiers (left) and task embedding fµ(µ) learnt by the informed policy (right). As expected, the history embedding gets close to the task embedding after just a few timesteps (left). Interestingly, task embeddings fµ(µ) are able to capture relevant information from the task. For instance, they are highly correlated with the magnetic force which is a very strong factor to “understand” from each new environment to control the system correctly. At the opposite, gravity is less correlated since it does not influence the optimal policy – whatever the gravity is, if the pole is on the left, then you have to go right and vice-versa. Acrobot consists of two joints and two links, where the joint between the two links is actuated. Initially, the links are hanging downwards, and the goal is to swing the end of the lower link up to a given height. Environment dynamics are determined by the length of the two links, their masses, their maximum velocity. Their respective pre-normalized domains are [0.5, 1.5], [0.5, 1.5], [0.5, 1.5], [0.5, 1.5], [3π, 5π] and [7π, 11π]. Unlike CartPole, the environment is stochastic because the simulator applies noise to the applied force. The action space is {−1, 0, 1}. We also add an extra dynamics parameter which controls whether the action order is inverted, i.e. {1, 0,−1}, thus |µ| = 7. Episode length is 500. IMPORT outperforms all baselines in settings with small training task sets (Figure 12 and Table 5) and perform similarly to TI on larger training task sets. C.3 BANDITS The Bandit environment is a standard Bernoulli multi-armed bandit problem with K arms. The vector µ ∈ RK denotes the probability of success of the independent Bernoulli distributions. Each dimension of µ is sampled uniformly between 0 and 0.5, the best arm is randomly selected and associated to a probability of 0.9. Although relatively simple, this environment assesses the ability of algorithms to learn nontrivial exploration/exploitation strategies. Note that it is not surprising that UCB outperforms the other algorithms in this setting. UCB is an optimal algorithm for MAB and we have optimized it for achieving the best empirical performance. Moreover, IMPORT cannot leverage correlations between tasks since, due to the generation process, tasks are independent. We visualize the task embeddings learnt by the informed policy in 13. 
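As a concrete illustration of the Bernoulli bandit tasks of Appendix C.3 above, a minimal sketch; the `policy` interface is a placeholder.

```python
import numpy as np

def sample_bandit_task(n_arms, rng):
    mu = rng.uniform(0.0, 0.5, size=n_arms)   # per-arm success probabilities (the task descriptor)
    mu[rng.integers(n_arms)] = 0.9            # a randomly chosen best arm
    return mu

def run_bandit_episode(mu, policy, rng, horizon=100):
    total, prev_reward, last_action = 0.0, 0.0, None
    for _ in range(horizon):
        action = policy(prev_reward, last_action)       # the agent only observes the previous reward
        prev_reward = float(rng.random() < mu[action])  # Bernoulli draw for the pulled arm
        total += prev_reward
        last_action = action
    return total
```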
C.4 MAZE3D ENVIRONMENT The Maze 3D environment (Figure 15) is a continuous maze problem implemented using gym-miniworld (Chevalier-Boisvert, 2018), with 3 discrete actions (forward, left, right), where the objective is to reach one of the two possible goals, resulting in a reward of +1 (resp. −1) when the correct (resp. wrong) goal is reached. If a box is touched, the episode ends. The maze’s axes range from -40 to 40, the two turn actions (left, right) modify the angle by 45 degrees, and the forward action is a move of length 5. The agent starts in a random position with a random orientation. The information about which goal to reach in each episode is encoded by the use of two different textures on the wall located on the opposite side of the boxes. In this way, the agent cannot simultaneously observe both boxes and the “informative” wall. This environment allows evaluating the models in a setting where the observation is high dimensional (3×60×60 RGB images). The mapping between the RGB image and the task target in {−1, 1} is challenging, and the informed policy should provide better auxiliary task targets than TI thanks to the “easy” training of the informed policy. IMPORT outperforms TI on this environment (Figure 16) in both final performance and sample efficiency. C.5 TABULAR MDPS Tabular MDP (Duan et al., 2016) is an MDP with S discrete states and A actions such that the transition matrix is sampled from a flat Dirichlet distribution, and the reward function is sampled from a uniform distribution in [0, 1]. The task identifier µ is a concatenation of the transition and reward functions, resulting in a vector of size S^2A + SA, which allows testing the models with a high-dimensional µ. IMPORT outperforms all baselines in all settings (Figure 17 and Table 2). D IMPACT OF THE β HYPERPARAMETER We study the sensitivity of IMPORT to the β parameter. Figure 18 clearly shows the benefits of using the auxiliary objective. On all but the Tabular-MDP environment, the recurrent policy successfully leverages the auxiliary objective, improving both sample efficiency and final performance on Acrobot.
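Finally, a hedged sketch of how a Tabular-MDP task and its descriptor µ could be generated as described in Appendix C.5 (illustration only).

```python
import numpy as np

def sample_tabular_mdp(n_states, n_actions, rng):
    transitions = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s'] ~ flat Dirichlet
    rewards = rng.uniform(0.0, 1.0, size=(n_states, n_actions))                 # R[s, a] ~ U[0, 1]
    mu = np.concatenate([transitions.ravel(), rewards.ravel()])                 # descriptor of size S^2*A + S*A
    return transitions, rewards, mu
```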
1. What is the focus of the paper regarding explore/exploit tradeoffs in RL environments?
2. What are the strengths of the proposed alternative architecture, particularly in comparison to previous methods?
3. Do you have any concerns about the significance of the paper's contribution to the community?
4. How might the method be adapted to work in combination with other approaches in dynamic environments?
5. Could the method be applied to a broader range of domains requiring adaptation or exploration/exploitation strategies?
Review
Review The authors propose an alternative architecture to handle explore/exploit tradeoffs in RL environments where each task instance may change in such a way that the policy needs to change in order to be optimal. Rather than using an explicit task inference process, and rather than relying on an RNN to slowly learn the distribution implicitly, the task id is observed during training instances and both an embedding and an RNN are trained, such that the two are interchangeable. Thus, during testing, the task id is not needed and only the RNN state is used to condition the policy. It is a straightforward way to include privileged information during training without imposing the burden of reconstruction. The paper is clearly written and Fig 1 is very helpful to understanding the details of the architecture. The experiments are clearly explained. The main question as a reviewer is whether the paper has significance to the community. Although it is only a small architectural contribution, the method works impressively well. It is faster to learn than Task Inference and achieves higher scores than Thompson Sampling. It would be nice to know if the method could work in combination with other methods to quickly adapt in dynamic environments, given some labels for different features of the environments. In general, using an interchangeable embedding and RNN state is a good way to avoid the challenges of conditional architectures. The paper could be stronger if the method was framed more generally and it was shown that it could be useful on a broader range of domains that require adaptation or exploration/exploitation strategies.
ICLR
Title Meta-Reinforcement Learning With Informed Policy Regularization Abstract Meta-reinforcement learning aims at finding a policy able to generalize to new environments. When facing a new environment, this policy must explore to identify its particular characteristics and then exploit this information for collecting reward. We consider the online adaptation setting where the agent needs to trade-off between the two types of behaviour within the same episode. Even though policies based on recurrent neural networks can be used in this setting by training them on multiple environments, they often fail to model this trade-off, or solve it at a very high computational cost. In this paper, we propose a new algorithm that uses privileged information in the form of a task descriptor at train time to improve the learning of recurrent policies. Our method learns an informed policy (i.e., a policy receiving as input the description of the current task) that is used to both construct task embeddings from the descriptors, and to regularize the training of the recurrent policy through parameters sharing and an auxiliary objective. This approach significantly reduces the learning sample complexity without altering the representational power of RNNs, by focusing on the relevant characteristics of the task, and by exploiting them efficiently. We evaluate our algorithm in a variety of environments that require sophisticated exploration/exploitation strategies and show that it outperforms vanilla RNNs, Thompson sampling and the task-inference approaches to meta-reinforcement learning. 1 INTRODUCTION Deep Reinforcement Learning has been used to successfully train agents on a range of challenging environments such as Atari games (Mnih et al., 2013; Bellemare et al., 2013; Hessel et al., 2017) or continuous control (Peng et al., 2017; Schulman et al., 2017). Nonetheless, in these problems, RL agents perform exploration strategies to discover the environment and implement algorithms to learn a policy that is tailored to solving a single task. Whenever the task changes, RL agents generalize poorly and the whole process of exploration and learning restarts from scratch. On the other hand, we expect an intelligent agent to fully master a problem when it is able to generalize from a few instances (tasks) and achieve the objective of the problem under many variations of the environment. For instance, children know how to ride a bike (i.e., the problem) when they can reach their destination irrespective of the specific bike they are riding, which requires to adapt to the weight of the bike, the friction of the brakes and tires, and the road conditions (i.e., the tasks). How to enable agents to generalize across tasks has been studied in Multi-task Reinforcement Learning (e.g. Wilson et al., 2007; Teh et al., 2017), Transfer Learning (e.g. Taylor & Stone, 2011; Lazaric, 2012) and Meta-Reinforcement Learning (Finn et al., 2017; Hausman et al., 2018; Rakelly et al., 2019; Humplik et al., 2019). These works fall into two categories. Learning to learn approaches aim at speeding up learning on new tasks, by pre-training feature extractors or learning good initializations of policy weights (Raghu et al., 2019). In contrast, we study in this paper the online adaptation setting where a single policy is trained for a fixed family of tasks. 
When facing a new task, the policy must then balance exploration (or probing), to reduce the uncertainty about the current task, and exploitation to maximize the cumulative reward of the task. Agents are evaluated on their ability to manage this trade-off within a single episode of the same task. The online adaptation setting is a special case of a partially observed markov decision problem, where the unobserved variables are the descriptors of the current task. It is thus G1 G2 sign start possible to rely on recurrent neural networks (RNNs) (Bakker, 2001; Heess et al., 2015), since they can theoretically represent optimal policies in POMDPs if given enough capacity. Unfortunately, the training of RNN policies has often prohibitive sample complexity and it may converge to suboptimal local minima. To overcome this drawback, efficient online adaptation methods leverage the knowledge of the task at training time. The main approach is to pair an exploration strategy with the training of informed policies, i.e. policies taking the description of the current task as input. Probe-then-Exploit (PTE) algorithms (e.g. Zhou et al., 2019) operate in two stages. They first rely on an exploration policy to identify the task. Then, they commit to the identified task by playing the associated informed policy. Thompson Sampling (TS) approaches (Thompson, 1933; Osband et al., 2016; 2019) maintain a distribution over plausible tasks and play the informed policy of a task sampled from the posterior following a predefined schedule. PTE and TS are expected to be sample-efficient relatively to RNNs as learning informed policies is a fully observable problem. However, as we discuss in Section 3, PTE and TS cannot represent effective exploration/exploitation policies in many environments. Humplik et al. (2019) proposed an alternative approach, Task Inference (TI), which trains a full RNN policy with the current task prediction as an auxiliary loss. TI avoids the suboptimality of PTE/TS by not constraining the structure of the exploration/exploitation policy. However, in TI, the task descriptors are used as targets and not as inputs, so TI focuses on reconstructing even irrelevant features of the task descriptor and it does not leverage the faster learning of informed policies. In this paper, we introduce IMPORT (InforMed POlicy RegularizaTion), a novel policy architecture for efficient online adaptation that combines the rich expressivity of RNNs with the efficient learning of informed policies. At train time, a shared policy head receives as input the current observation, together with either a (learned) embedding of the current task, or the hidden state of an RNN such that the informed policy and the RNN policy are learned simultaneously. At test time, the hidden state of the RNN replaces the task embedding, and the agent acts without having access to the current task. This leads to several advantages: 1) IMPORT benefits from informed policy to speed up learning; 2) it avoids to reconstruct features of the task descriptor that are irrelevant for learning; and as a consequence, 3) it adapts faster to unknown environments, showing better generalization capabilities. We evaluate IMPORT against the main approaches to online adaptation on environments that require sophisticated exploration/exploitation strategies. We confirm that TS suffers from its limited expressivity, and show that the policy regularization of IMPORT significantly speeds up learning compared to TI. 
Moreover, the learnt task embeddings of IMPORT make it robust to irrelevant or minimally informative task descriptors, and able to generalize when learning on few training tasks. 2 SETTING LetM be the space of possible tasks. Each µ ∈ M is associated to an episodic µ-MDP Mµ = (S,A, pµ, rµ, γ) whose dynamics pµ and rewards rµ are task dependent, while state and action spaces are shared across tasks and γ is the discount factor. The descriptor µ can be a simple id (µ ∈ N) or a set of parameters (µ ∈ Rd). When the reward function and the transition probabilities are unknown, RL agents need to devise a strategy that balances exploration to gather information about the system and exploitation to maximize the cumulative reward. Such a strategy can be defined as the solution of a partially observable MDP (POMDP), where the hidden variable is the descriptor µ of the MDP. Given a trajectory τt = (s1, a1, r1, . . . , st−1, at−1, rt−1, st), a POMDP policy π(at|τt) maps the trajectory to actions. In particular, the optimal policy in a POMDP is a history-dependent policy that uses τt to construct a belief state bt, which describes the uncertainty about the task at hand, and then maps it to the action that maximizes the expected sum of rewards (e.g. Kaelbling et al., 1998). In this case, maximizing the rewards may require taking explorative actions that improve the belief state enough so that future actions are more effective in collecting reward. The task is sampled at the beginning of an episode from a distribution q(µ). After training, the agent returns a policy π(at|τt) that aims at maximizing the average performance across tasks generated from q, i.e., Eµ∼q(µ) [ |τ |∑ t=1 γt−1rµt ∣∣∣∣π]. (1) where the expectation is taken over a full-episode trajectory τ and task distribution q, and |τ | is the length of the trajectory. The objective is then to find an architecture for π that is able to express strategies that perform the best according to Eq. 1 and, at the same time, can be efficiently learned even for moderately short training phases. At training time, we assume the agent has unrestricted access to the task descriptor µ. Access to such a task descriptor during training is a common assumption in the multi-task literature and captures a large variety of concrete problems. It can be of two types: i) a vector of features corresponding to (physical) parameters of the environment/agent (for instance, such features maybe available in robotics, or when learning on a simulator) (Yu et al., 2018; Mehta et al., 2019; Tobin et al., 2017). ii) It can be a single task identifier (i.e an integer) which is a less restrictive assumption (Choi et al., 2001; Humplik et al., 2019) and corresponds to different concrete problems: learning in a set of M training levels in a video game, learning to control M different robots or learning to interact with M different users. 3 RELATED WORK AND CONTRIBUTIONS In this section, we review how the online adaptation setting has been tackled in the literature. The main approaches are depicted in Fig. 2. We first compare the different methods in terms of expressiveness, and whether they leverage the efficient learning of informed policies. We then discuss learning task embeddings and how the various methods deal with unknown or irrelevant task descriptors. The last subsection summarizes our contributions. Evaluation of RL agent in Meta-Reinforcement Learning. 
The online adaptation evaluation setting is standard in the Meta-RL literature (Yu et al., 2017; Humplik et al., 2019) but is not the only way to evaluate agents on unseen tasks in the meta-RL literature. Indeed, several works have considered that given a new task, an agent is given an amount of ”free” interactions episodes or steps to perform system identification, then is evaluated on the cumulative reward on one (Bharadhwaj et al., 2019; Rakelly et al., 2019) or several execution episodes (Liu et al., 2020). This is different to what we study here where the agent has to identify the task to solve and solved it within one episode, the reward of the agent being considered during all these steps. Online Adaptation with Deep RL. In the previous section we mentioned that the best strategy corresponds to the optimal policy of the associated POMDP. Since the belief state bt is a sufficient statistic of the history τt, POMDP policies takes the form π(at|τt) = π(at|st, bt). While it is impractical to compute the exact belief state even for toy discrete problems, approximations can be learnt using Recurrent Neural Networks (RNNs) (Bakker, 2001; Heess et al., 2015). RNN-based policies are trained to maximize the cumulative reward and do not leverage task descriptors at train time. While this class of policies can represent rich exploratory strategies, their large training complexity makes them impractical. In order to reduce the training complexity of RNN policies, existing strategies have constrained the set of possible exploratory behaviors by leveraging privileged information about the task. ProbeThen-Exploit (PTE) (e.g. Zhou et al., 2019) works in two phases. First, it executes a pure exploratory policy with the objective of identifying the underlying task µ, i.e. maximizing the likelihood of the task, then runs the optimal policy associated to the estimated task. Both the probing and the informed policies are learned using task descriptors, leading to a much more efficient training process. PTE has two main limitations. First, similarly to explore-then-commit approaches in bandits (e.g. Garivier et al., 2016), the exploration can be suboptimal because it is not reward-driven: valuable time is wasted to estimate unnecessary information. Second, the switch between probing and exploiting is hard to tune and problem-dependent. Thompson Sampling (TS) (Thompson, 1933) leverages randomization to mix exploration and exploitation. Similarly to the belief state of an RNN policy, TS maintains a distribution over task descriptors that represents the uncertainty on the current task given τt. The policy samples a task from the posterior and executes the corresponding informed policy for several steps. Training is limited to learning informed policies together with a maximum likelihood estimator to map trajectories to distributions over tasks. This strategy proved successful in a variety of problems (e.g. Chapelle & Li, 2011; Osband & Roy, 2017). However, as shown in Fig. 1, TS cannot represent certain probing policies because it is constrained to executing informed policies. Another drawback of TS approaches is that the re-sampling frequency needs to be carefully tuned. The Task Inference (TI) approach (Humplik et al., 2019) is a RNN trained to simultaneously learn a good policy and predict the task descriptor µ. 
Denoting by m : H → Z the mapping from histories to a latent representation of the belief state (Z ⊆ Rd), the policy π(at|zt) selects the action based on the representation zt = m(τt) constructed by the RNN. During training, zt is also used to predict the task descriptor µ, using the task-identification module g : Z →M. The overall objective is: E [ |τ |∑ t=1 γt−1rµt ∣∣∣π]+ βE[ |τ |∑ t=1 `(µ, g(zt)) ∣∣∣π] (2) where `(µ, g(zt)) is the log-likelihood of µ under distribution g(zt). The auxiliary loss is meant to structure the memory of the RNN m rather than be an additional reward for the policy, so training is done by ignoring the effect of m on π when computing the gradient of the auxiliary loss with respect to m. Humplik et al. (2019) proposed two variants, AuxTask and TI, described in Fig. 2 (b) and (c). In TI, the gradient of the policy sub-network is not backpropagated through the RNN (the dashed green arrow in Fig. 2c, and the policy subnetwork receives the original state features as additional input. For both AuxTask and TI, the training of π in TI is purely reward-driven, so they do not suffer from the suboptimality of PTE/TS. However, in contrast to PTE/TS, they do not leverage the smaller sample complexity of training informed policies, and the auxiliary loss is defined over the whole value of µ while only some dimensions may be relevant to solve the task. Learning Task Embeddings While in principle the minimal requirement for the approaches above is access to task identifiers, i.e. one-hot encodings of the task, these approaches are sensitive to the encoding on task descriptions, and prior knowledge on them. In particular, irrelevant variables have a significant impact on PTE approaches since the probing policy aims at identifying the task. For instance, an agent might waste time reconstructing the full µ when only part of µ is needed to act optimally w.r.t the reward. Moreover, TS, TI and AuxTask are guided by a prior distribution over µ that has to be chosen by hand to fit the ground-truth distribution of tasks. Rakelly et al. (2019) proposed to use a factored Gaussian distribution over transitions as a task embedding architecture rather than a RNN. Several approaches have been proposed to learn task embeddings (Gupta et al., 2018; Rakelly et al., 2019; Zintgraf et al., 2019; Hausman et al., 2018). The usual approach is to train embeddings of task identifiers jointly with the policies. Humplik et al. (2019) mentions using TI with task embeddings, but the embeddings are pre-trained separately, which requires either additional interactions with the environment or expert traces. Nonetheless, we show in our experiments that TI can be used with task descriptors, considering task prediction as a multiclass classification problem. Summary of the contributions As for RNN/TI, IMPORT learns an RNN policy to maximize cumulative reward, with no decoupling between probing and exploitation. As such, our approach does not suffer from scheduling difficulties instrinsic to PTE/TS approaches. On the other hand, similarly to PTE/TS and contrarily to RNN/TI, IMPORT leverages the fast training of informed policies through a joint training of an RNN and an informed policy. In addition, IMPORT does not rely on probabilistic models of task descriptors. 
Learning task embeddings makes the approach robust to irrelevant task descriptors, contrary to TI; it also makes IMPORT applicable when only task identifiers are available and able to generalize better when few training tasks are available.

Algorithm 1 IMPORT Training
Initialize σ, ω, θ randomly
for k = 1, . . . , K do
  if k is odd then
    Collect M transitions following πH
    Update ω, θ and the parameters of the value function of (A) based on objective (A) + (C)
  else
    Collect M transitions following πµ
    Update σ, θ and the parameters of the value function of (B) based on objective (B) + (C)
  end if
end for

4 METHOD In this section, we describe the main components of the IMPORT model (described in Fig. 2), as well as the online optimization procedure and an additional auxiliary loss to further speed up learning. Our approach leverages the knowledge of the task descriptor µ and informed policies to construct a latent representation of the task that is purely reward-driven. Since µ is unknown at testing time, we use this informed representation to train a predictor based on a recurrent neural network. To leverage the efficiency of informed policies even in this phase, we propose an architecture sharing parameters between the informed policy and the final policy such that the final policy will benefit from parameters learned with privileged information. The idea is to constrain the final policy to stay close to the informed policy while allowing it to perform probing actions when needed to effectively reduce the uncertainty about the task. We call this approach InforMed POlicy RegularizaTion (IMPORT). Formally, we denote by πµ(at|st, µ) and πH(at|τt) the informed policy and the history-dependent (RNN) policy that is used at test time. The informed policy πµ = φ ◦ fµ is the functional composition of fµ and φ, where fµ : M → Z projects µ into a latent space Z ⊆ R^k and φ : S × Z → A selects the action based on the latent representation. The idea is that fµ(µ) captures the relevant information contained in µ while ignoring dimensions that are not relevant for learning the optimal policy. This behavior is obtained by training πµ directly to maximize the task reward rµ. While πµ leverages the knowledge of µ at training time, πH acts based on the sole history. To encourage πH to behave like the informed policy while preserving the ability to probe, πH and πµ share φ, the mapping from latent representations to actions. We thus define πH = φ ◦ fH where fH : H → Z encodes the history into the latent space. By sharing the policy head φ, the approximate belief state constructed by the RNN is mapped to the same latent space as µ. When the uncertainty about the task is small, πH then benefits from the joint training with πµ. More precisely, let θ, ω, σ be the parameters of φ, fH and fµ respectively, so that π^{σ,θ}_µ(at|st, µ) = φθ(at|st, f^σ_µ(µ)) and π^{ω,θ}_H(at|τt) = φθ(at|st, f^ω_H(τt)). The goal of IMPORT is to maximize over θ, ω, σ the objective function defined in Eq. 3:

\underbrace{E\Big[\sum_{t=1}^{|\tau|} \gamma^{t-1} r^{\mu}_{t} \,\Big|\, \pi^{\omega,\theta}_{H}\Big]}_{(A)} + \underbrace{E\Big[\sum_{t=1}^{|\tau|} \gamma^{t-1} r^{\mu}_{t} \,\Big|\, \pi^{\sigma,\theta}_{\mu}\Big]}_{(B)} - \beta\, \underbrace{E\Big[\sum_{t=1}^{|\tau|} D\big(f_{\mu}(\mu), f_{H}(\tau_t)\big)\Big]}_{(C)} \qquad (3)

Speeding Up the Learning. The optimization of (B) in Eq. 3 produces a reward-driven latent representation of the task through fµ. In order to encourage the history-based policy to predict a task embedding close to the one predicted by the informed policy, we augment the objective with an auxiliary loss (C) weighted by β > 0. D is the squared 2-norm in our experiments.
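To make this parameter sharing concrete, the sketch below gives a minimal PyTorch-style implementation of the two policies and of the auxiliary term (C). It is an illustration under our own assumptions (a GRU history encoder, a categorical action head, arbitrary layer sizes); the architecture actually used by the authors is described in Appendix B.2.

import torch
import torch.nn as nn

class IMPORTPolicy(nn.Module):
    # f_mu : task descriptor mu -> latent z (informed branch)
    # f_H  : history of (s_t, a_{t-1}) -> latent z via a GRU (history branch)
    # phi  : shared head mapping (s_t, z) -> action distribution
    def __init__(self, state_dim, mu_dim, n_actions, hidden=64, z_dim=16):
        super().__init__()
        self.f_mu = nn.Sequential(nn.Linear(mu_dim, z_dim), nn.Tanh())
        self.gru = nn.GRU(state_dim + n_actions, hidden, batch_first=True)
        self.f_H = nn.Sequential(nn.Linear(hidden, z_dim), nn.Tanh())
        self.phi = nn.Sequential(nn.Linear(state_dim + z_dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, n_actions))   # shared policy head

    def informed(self, state, mu):
        z_mu = self.f_mu(mu)                                     # f_mu(mu)
        logits = self.phi(torch.cat([state, z_mu], dim=-1))
        return torch.distributions.Categorical(logits=logits), z_mu

    def history(self, states, prev_actions_onehot):
        h, _ = self.gru(torch.cat([states, prev_actions_onehot], dim=-1))
        z_H = self.f_H(h[:, -1])                                 # f_H(tau_t)
        logits = self.phi(torch.cat([states[:, -1], z_H], dim=-1))
        return torch.distributions.Categorical(logits=logits), z_H

def auxiliary_loss(z_H, z_mu, beta):
    # Term (C): squared 2-norm between the two embeddings. f_mu(mu) is detached
    # so that, as in Alg. 2, the gradient of (C) only updates the history encoder.
    return beta * ((z_H - z_mu.detach()) ** 2).sum(dim=-1).mean()

Because φ is shared, improving the informed branch with privileged information directly shapes the head used by the history-based policy at test time.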
Note that because we treat the objective (C) as an auxiliary loss, only the average gradient of D with respect to fH is backpropagated, ignoring the effect of fH on πH. The expectation of (C) is optimized over trajectories generated using π^{ω,θ}_H and π^{σ,θ}_µ, respectively used to compute (A) and (B). Optimization. IMPORT is trained using Advantage Actor Critic (A2C) (Mnih et al., 2016) with generalized advantage estimation (GAE) (Schulman et al., 2015). There are two value functions, one for each objective (A) and (B) (in our implementation, the value network is shared and takes as input either fµ(µ) or fH(τt)). The algorithm is summarized in Alg. 1. Each iteration collects a batch of M transitions using either πH or πµ (in practice, data collection is multithreaded: we collect 20 transitions per thread with 24 to 64 threads depending on the environment, based on available GPU memory). If the batch is sampled according to πH, we update with A2C-GAE the parameters of the policy ω and θ according to both objectives (A) and (C), as well as the parameters of the value function associated with objective (A). If the batch is sampled according to πµ, we update with A2C-GAE the parameters of the policy σ and θ according to both objectives (B) and (C), as well as the parameters of the value function associated with objective (B). 5 EXPERIMENTS We performed experiments on five environments. The CartPole and Acrobot environments from OpenAI Gym (Brockman et al., 2016), where the task descriptor µ represents parameters of the physical system, e.g., the weight of the cart, the size of the pole, etc. The dimension of µ is 5 for CartPole and 7 for Acrobot. The entries of µ are normalized in [−1, 1] and sampled uniformly. These environments provide basic comparison points where the optimal exploration/exploitation policy is relatively straightforward, since the dynamics can be inferred from a few actions. The Bandit environment is a standard Bernoulli multi-armed bandit problem with K arms. The vector µ ∈ R^K denotes the probability of success of the independent Bernoulli distributions. Each dimension of µ is sampled uniformly between 0 and 0.5, the best arm is randomly selected and associated with a probability of 0.9. An episode is 100 arm pulls. At every timestep the agent is allowed to pull an arm in [1, K] and observes the resulting reward. Although relatively simple, this environment assesses the ability of algorithms to learn nontrivial probing/exploitation strategies. The Tabular MDP environment is a finite MDP with S states and A actions such that the transition matrix is sampled from a flat Dirichlet distribution, and the reward function is sampled from a uniform distribution in [0, 1] as in Duan et al. (2016). In that case, µ is the concatenation of the transition and the reward functions, resulting in a vector of size S^2A + SA. This environment is much more challenging as µ is high-dimensional, there is nearly complete uncertainty about the task at hand and each task is a reinforcement learning problem. Finally, the Maze 3D environment is a 3D version of the toy problem depicted in Fig. 1, implemented using gym-miniworld (Chevalier-Boisvert, 2018). It has three discrete actions (forward, left, right) and the objective is to reach one of the two possible goals (see Figure 15 in appendix), resulting in a reward of +1 (resp. −1) when the correct (resp. wrong) goal is reached. The episode terminates when the agent touches a box or after 100 steps.
The agent always starts at a random position, with a random orientation. The information about which goal to reach at each episode is encoded by the use of two different textures on the wall located at the opposite side of the maze w.r.t. the goals. This domain allows to evaluate the models when observations are high dimensional (3 × 60 × 60 RGB images). The maximum episode length is 100 on CartPole, Bandit, Tabular-MDP and Maze3D, and 500 on Acrobot. To evaluate the ability of IMPORT and the baselines to deal with different types of task descriptors µ, we also perform experiments on CartPole and Tabular-MDP in the setting where µ is only a task identifier (i.e., a one-hot vector representing the index of the training task) which is a very weak supervision available at train time. We compare to previously discussed baselines. First, a vanilla RNN policy (Heess et al., 2015) using GRUs that never uses µ. Second, we compare to TS, TI and AuxTask, with µ only observed at train time, similarly to IMPORT. For TS, at train time, the policy conditions on the true µ, whereas at test time, the policy conditions on an estimated µ̂ resampled from the posterior every k steps where k ∈ {1, 5, 10, 20}. On bandits, UCB (Auer, 2002) with tuned exploration parameters is our topline. Implementation details Contrarily to IMPORT, TS, TI and AuxTask are based on maximizing the log-likelihood of µ. When using informative task descriptors (i.e. a vector of real values), the log-likelihood uses a Gaussian distribution with learnt mean and diagonal covariance matrix. For the bandit setting, we have also performed experiments using a beta distribution which may be more relevant for this type of problem. When using task identifiers, a multinomial distribution is used. All approaches are trained using A2C with Generalized Advantage Estimation (Mnih et al., 2016; Schulman et al., 2015). The precise values of the hyper-parameters and architectures are given in Appendix B.2. All approaches use similar network architectures with the same number of hidden layers and units. Evaluation The meta-learning scenario is implemented by sampling N training tasks, N validation tasks and 10, 000 test tasks with no overlap between task sets (except in Maze3D where there is only two possible tasks). Each sampled training task is given a unique identifier. Each model is trained on the training tasks, and the best model is selected on the validation tasks. We report the performance on the test tasks, averaged over three trials with different random seeds, corresponding to different sets of train/validation/test tasks. Training uses a discount factor, but for validation and test, we compute the undiscounted cumulative reward on the validation/test tasks. The learning curves show test reward as a function of the environment steps. They are the average of the three curves associated to the best validation model of each of the three seeds used to generate different tasks sets. Overall performances. IMPORT performs better than its competitors in almost all the settings. For instance, on CartPole with 10 tasks (see Table 1), our model reaches 94.4 reward while TI reaches only 91.5. Qualitatively similar results are found on Acrobot (Table 5 in Appendix), as well as on Bandit with 20 arms (Table 3), even though AuxTask performs best with only 10 arms. IMPORT particularly shines when µ encodes complex information, as on Tabular-MDP (see Table 2) where it outperforms all baselines in all settings. 
By varying the number of training tasks on CartPole and Acrobot, we also show that IMPORT's advantage over the baselines is larger with fewer training tasks. In all our experiments, as expected, the vanilla RNN performs worse than the other algorithms. Sample Efficiency. Figure 5 shows the convergence curves on CartPole with 10 and 100 training tasks, which are representative of what we obtain on other environments (see Appendix). IMPORT tends to converge faster than the baselines. We also observe a positive effect of using the auxiliary loss (β > 0) on sample efficiency, in particular with few training tasks. Note that using the auxiliary loss is particularly efficient in environments where the final policy tends to behave like the informed one. Influence of µ. The experiments with uninformative µ (i.e., task identifiers) reported in Table 1 and 2 for CartPole and Tabular-MDP respectively show that the methods are effective even when the task descriptors do not include any prior knowledge. In the two cases, IMPORT can use these task descriptors to generalize well. Moreover, experimental results on CartPole (Fig. 11) and Tabular MDP (Fig. 17) suggest that when µ is a vector of features (and not a task identifier only), it improves sample efficiency but does not change the final performance. This can be explained by the fact that informed policies are faster to learn with features in µ since, in that case, µ is capturing similarities between tasks. Equivalent performance of IMPORT on both types of task descriptors is observed and shows that our method can deal with different (rich and weak) task descriptors. We further analyze the impact of the encoding of µ on the models, by using non-linear projections of the informative µ to change the shape of the prior knowledge. Figure 5c shows the learning curves of TI and IMPORT on CartPole with task identifiers, the original µ and polynomial expansions of µ of order 2 and 3, resulting in 21 and 56 features. IMPORT's task embedding approach is robust to the encoding of µ, while TI's log-likelihood approach underperforms with the polynomial transformation. Task embeddings. To have a qualitative assessment of the task embedding learnt by IMPORT, we consider a bandit problem with 10 arms and embedding dimension 16. Figure 6 shows the clusters of task embeddings obtained with t-SNE (Maaten & Hinton, 2008). Each cluster maps to an optimal arm, showing that IMPORT structures the embedding space based on the relevant information.

Table 3: Bandits performance for K = 10 and K = 20 arms, with N = 100 training tasks.
                    K = 10       K = 20
IMPORT              77.5 (0.2)   56.6 (0.1)
AuxTask (Gaussian)  78.7 (0.4)   50.5 (1.6)
AuxTask (Beta)      78.2 (0.7)   37.1 (0.6)
RNN                 73.6 (0.7)   32.1 (1.2)
TI (Gaussian)       73.7 (1.6)   41.4 (2.4)
TI (Beta)           79.5 (0.1)   53.3 (2.4)
TS (Gaussian)       50.4 (0.4)   38.8 (2.0)
TS (Beta)           41.3 (1.5)   36.3 (1.1)
UCB                 78.5 (0.3)   68.2 (0.4)

Figure 6: Task embeddings learnt on Bandit (10 arms). Colors indicate the best arm.

In addition, we have studied the influence of the β hyperparameter from Eq. 3 (in Fig. 4 and Section D). It shows that the auxiliary loss helps to speed up the learning process, but is not necessary to achieve good performance. High dimensional input space. We show the learning curves on the Maze3D environment in Figure 5d. IMPORT succeeds in 90% of cases (reward ≈ 0.8), while TI succeeds in only 70% of cases. This shows that IMPORT is even more effective with high-dimensional observations (here, pixels).
IMPORT and TI benefit from knowing µ at train time, which allows them to rapidly identify that the wall texture behind the agent is informative, while the vanilla RNN struggles and reaches random goals. TS is not reported since this environment is a typical failure case, as discussed in Fig. 1. Additional results. In Appendix C.1, we show that IMPORT outperforms TI by a larger margin when the task embedding dimension is small. We also show that IMPORT outperforms its competitors in dynamic environments, i.e., when the task changes during the episode. 6 CONCLUSION We proposed a new policy architecture for meta reinforcement learning. The IMPORT model is trained only on the reward objective, and leverages the informed policy to discover effective trade-offs between exploration and exploitation. It is thus able to learn better strategies than Thompson Sampling approaches, and faster than recurrent neural network policies and Task Inference approaches. A THE IMPORT ALGORITHM The algorithm is described in detail in Algorithm 2. In our implementation, the value function network used for (A) and (B) is the same, i.e. shared. We specialize the input, i.e. for (A) the input will be (st, fH(τt)) and (st, fµ(µt)) for (B).

Algorithm 2 Details of IMPORT Training
Initialize σ, ω, θ, ν arbitrarily
Hyperparameters: number of iterations K, number of transitions per update step M, discount factor γ, GAE parameter γ_GAE, Adam learning rate η, weight β of the (C) objective, weight λ_h of the entropy objective, weight λ_c of the critic objective
Optim = Adam(η)
for k = 1, . . . , K do
  if k is odd then
    Collect M transitions according to πH in buffer B_H.
  else
    Collect M transitions according to πµ in buffer B_µ.
  end if
  δσ, δω, δθ, δν = 0, 0, 0, 0
  R^H ← compute_gae_returns(B_H, γ_GAE);  R^µ ← compute_gae_returns(B_µ, γ_GAE)
  δ_{θ,ω} += (1/|B_H|) Σ_{b∈B_H} Σ_{t=1}^{T} [R^{H,b}_t − V_ν(s^b_t, z^b_t)] ∇_{θ,ω} log πH(a^b_t | s^b_t, z^b_t)
  δ_{θ,ω} += (λ_h/|B_H|) Σ_{b∈B_H} Σ_{t=1}^{T} ∇_{θ,ω} H(πH(· | s^b_t, z^b_t))
  δ_ω −= (2β/|B_H|) Σ_{b∈B_H} Σ_{t=1}^{T} [f^ω_H(τ^b_t) − f_µ(µ^b)] ∇_ω f^ω_H(τ^b_t)
  δ_ν −= (2λ_c/|B_H|) Σ_{b∈B_H} Σ_{t=1}^{T} [R^{H,b}_t − V_ν(s^b_t, z^b_t)] ∇_ν V_ν(s^b_t, z^b_t)
  δ_{θ,σ} += (1/|B_µ|) Σ_{b∈B_µ} Σ_{t=1}^{T} [R^{µ,b}_t − V_ν(s^b_t, µ^b)] ∇_{θ,σ} log πµ(a^b_t | s^b_t, µ^b)
  δ_{θ,σ} += (λ_h/|B_µ|) Σ_{b∈B_µ} Σ_{t=1}^{T} ∇_{θ,σ} H(πµ(· | s^b_t, µ^b))
  δ_ν −= (2λ_c/|B_µ|) Σ_{b∈B_µ} Σ_{t=1}^{T} [R^{µ,b}_t − V_ν(s^b_t, µ^b)] ∇_ν V_ν(s^b_t, µ^b)
  θ ← Optim(θ, δθ);  ω ← Optim(ω, δω);  σ ← Optim(σ, δσ);  ν ← Optim(ν, δν)
end for

B IMPLEMENTATION DETAILS B.1 DATA COLLECTION AND OPTIMIZATION We focus on on-policy training, for which we use the actor-critic A2C algorithm (Mnih et al., 2016) with generalized advantage estimation. We use a distributed execution to accelerate experience collection. Several worker processes independently collect trajectories. As workers progress, a shared replay buffer is filled with trajectories and an optimization step happens when the buffer's capacity bs is reached. After model updates, the replay buffer is emptied and the parameters of all workers are updated to guarantee synchronisation. B.2 NETWORK ARCHITECTURES The architecture of the different methods remains the same in all our experiments, except that the number of hidden units changes across the considered environments and we use convolutional neural networks for the Maze3D environment. A description of the architectures of each method is given in Fig. 2. Unless otherwise specified, MLP blocks represent single linear layers activated with a tanh function and their output size is hs.
All methods aggregate the trajectory into an embedding zt using a GRU with hidden size hs. Its input is the concatenation of representations of the last action at−1 and current state st obtained separately. Actions are encoded as one-hot vectors. When episodes begin, we initialize the last action with a vector of zeros. For bandit environments, the current state corresponds to the previous reward. TS uses the same GRU architecture to aggregate the history into zt. All methods use a softmax activation to obtain a probability distribution over actions. The use of the hidden state zt differs across methods. While RNNs only use zt as an input to the policy and critic, both TS and TI map zt to a belief distribution that is problem-specific, e.g. Gaussian for control problems, Beta distribution for bandits, and a multinomial distribution for Maze and CartPole-task environments. For instance, zt is mapped to a Gaussian distribution by using two MLPs whose outputs of size |µ| correspond to the mean and variance. The variance values are mapped to [0, 1] using a sigmoid activation. IMPORT maps zt to an embedding fH, whereas the task embedding fµ is obtained by using a tanh-activated linear mapping of µt. Both embeddings have size hsµ, tuned by cross-validation on a set of validation tasks. The input of the shared policy head φ is the embedding associated with the policy in use, i.e. either fH when using πH or fµ when using πµ. For the Maze3D experiment and in all methods, we pre-process the pixel input st with three convolutional layers (with 32 output channels, stride 2 and respective kernel sizes 5, 5 and 4) and LeakyReLU activations. We also use a batch-norm after each convolutional layer. The output is flattened, linearly mapped to a vector of size hs and tanh-activated. C EXPERIMENTS In this section, we explain in more detail the environments and the set of hyper-parameters we considered. We add learning curves of all experiments to supplement the results from Tables 1, 2, 3 and 5 in order to study sample efficiency. Task descriptor. Note that for CartPole and Acrobot µ is normalized to be in [−1, 1]^D where D is the task descriptor dimension. The task distribution q is always uniform; see the description of the environments for details. For experiments with task identifiers, we associate to each sampled task an integer value corresponding to the order of generation, and encode it using a one-hot vector. Hyperparameters. Hyperparameter ranges are specified in Table 4. For TS, we consider sampling µ from the posterior dynamics distribution every k steps with k ∈ {1, 5, 10, 20}. C.1 CARTPOLE. We consider the classic CartPole control environment where the environment dynamics change within a set M (|µ| = 5) described by the following physical variables: gravity, cart mass, pole mass, pole length, magnetic force. Their respective pre-normalized domains are [4.8, 14.8], [0.5, 1.5], [0.01, 0.19], [0.2, 0.8], and [−10, 10]. The values of µ are sampled uniformly. Knowing some components of µ might not be required to behave optimally. The discrete action space is {−1, 1}. Episode length is T = 100. Final performance and sample efficiency. Table 1 shows IMPORT's performance is marginally superior to other methods in most settings. Learning curves in Figure 7 allow analyzing the sample efficiency of the different methods. Overall, IMPORT is more sample efficient than other methods in the privileged information µ setting.
Moreover, the use of the auxiliary loss (β > 0) usually speed-up the learning convergence by enforcing the RNN to quickly produce a coherent embedding. We can see that only sharing parameters (β = 0) already helps improving over RNNs. Non-stationary environments. We consider the non-stationary version of CarPole environment where at each timestep, there is a probability ρ = 0.05 to sample a new dynamic µ. Table 8 shows that the performance of IMPORT, AuxTask and TI are comparable in these settings. Size of built embeddings. We now study the impact of the task embedding representation size. As can be seen from Figure 10, IMPORT’s performance remains stable for different representation sizes in {2, 4, 8, 16} whereas TI’s sample efficiency decreases with this dimension. Trajectory and task embeddings. In Figure 11, we plot both the evolution of fH(τt) during an episode of the final model obtained training IMPORT with two-dimensional task embeddings on CartPole with task identifiers (left) and task embedding fµ(µ) learnt by the informed policy (right). As expected, the history embedding gets close to the task embedding after just a few timesteps (left). Interestingly, task embeddings fµ(µ) are able to capture relevant information from the task. For instance, they are highly correlated with the magnetic force which is a very strong factor to “understand” from each new environment to control the system correctly. At the opposite, gravity is less correlated since it does not influence the optimal policy – whatever the gravity is, if the pole is on the left, then you have to go right and vice-versa. Acrobot consists of two joints and two links, where the joint between the two links is actuated. Initially, the links are hanging downwards, and the goal is to swing the end of the lower link up to a given height. Environment dynamics are determined by the length of the two links, their masses, their maximum velocity. Their respective pre-normalized domains are [0.5, 1.5], [0.5, 1.5], [0.5, 1.5], [0.5, 1.5], [3π, 5π] and [7π, 11π]. Unlike CartPole, the environment is stochastic because the simulator applies noise to the applied force. The action space is {−1, 0, 1}. We also add an extra dynamics parameter which controls whether the action order is inverted, i.e. {1, 0,−1}, thus |µ| = 7. Episode length is 500. IMPORT outperforms all baselines in settings with small training task sets (Figure 12 and Table 5) and perform similarly to TI on larger training task sets. C.3 BANDITS The Bandit environment is a standard Bernoulli multi-armed bandit problem with K arms. The vector µ ∈ RK denotes the probability of success of the independent Bernoulli distributions. Each dimension of µ is sampled uniformly between 0 and 0.5, the best arm is randomly selected and associated to a probability of 0.9. Although relatively simple, this environment assesses the ability of algorithms to learn nontrivial exploration/exploitation strategies. Note that it is not surprising that UCB outperforms the other algorithms in this setting. UCB is an optimal algorithm for MAB and we have optimized it for achieving the best empirical performance. Moreover, IMPORT cannot leverage correlations between tasks since, due to the generation process, tasks are independent. We visualize the task embeddings learnt by the informed policy in 13. 
C.4 MAZE3D ENVIRONMENT The Maze 3D environment (Figure 15) is a continuous maze problem implemented using gym-miniworld (Chevalier-Boisvert, 2018), with 3 discrete actions (forward, left, right) where the objective is to reach one of the two possible goals, resulting in a reward of +1 (resp. −1) when the correct (resp. wrong) goal is reached. If a box is touched, the episode ends. The maze's axes range from −40 to 40, the two turn actions (left, right) modify the angle by 45 degrees, and the forward action is a move of length 5. The agent starts in a random position with a random orientation. The information about which goal to reach at each episode is encoded by the use of two different textures on the wall located on the opposite side of the boxes. In this way, the agent cannot simultaneously observe both boxes and the "informative" wall. This environment allows evaluating the models in a setting where the observation is high-dimensional (a 3 × 60 × 60 RGB image). The mapping between the RGB image and the task target in {−1, 1} is challenging, and the informed policy should provide better auxiliary task targets than TI thanks to the "easy" training of the informed policy. IMPORT outperforms TI on this environment (Figure 16) in both final performance and sample efficiency. C.5 TABULAR MDPS Tabular MDP (Duan et al., 2016) is an MDP with S discrete states and A actions such that the transition matrix is sampled from a flat Dirichlet distribution, and the reward function is sampled from a uniform distribution in [0, 1]. The task identifier µ is a concatenation of the transition and reward functions, resulting in a vector of size S^2A + SA, allowing us to test the models with high-dimensional µ. IMPORT outperforms all baselines in all settings (Figure 17 and Table 2). D IMPACT OF THE β HYPERPARAMETER We study the sensitivity of IMPORT to the β parameter. Figure 18 clearly shows the benefits of using the auxiliary objective. On all environments but Tabular-MDP, the recurrent policy successfully leverages the auxiliary objective to improve sample efficiency, as well as final performance on Acrobot.
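As a compact reference for the A2C-GAE machinery used in Algorithms 1 and 2, the function below computes generalized advantage estimates for a single trajectory. It is a generic implementation of GAE (Schulman et al., 2015), not the authors' code; the paper writes the GAE parameter as γ_GAE, called lam below.

import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """Generalized advantage estimation for one trajectory.

    rewards: length-T array of rewards r_t.
    values: length-T array of value estimates V(s_t).
    last_value: bootstrap value for the state after the last step (0 if terminal).
    Returns (advantages, returns) with returns = advantages + values.
    """
    T = len(rewards)
    values_ext = np.append(values, last_value)
    advantages = np.zeros(T)
    gae = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values_ext[t + 1] - values_ext[t]
        gae = delta + gamma * lam * gae
        advantages[t] = gae
    return advantages, advantages + values

# Toy usage on a 4-step trajectory that only rewards the last step.
adv, ret = gae_advantages(np.array([0.0, 0.0, 0.0, 1.0]),
                          np.array([0.1, 0.2, 0.3, 0.5]), last_value=0.0)
print(adv, ret)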
1. What is the main contribution of the paper regarding multi-task learning? 2. What are the strengths of the proposed method, particularly in its novelty and experimental results? 3. Do you have any questions or concerns about the paper's content, such as the experiment procedure, figure 4, page 7, equation 3, table 1, and sampling actions at the initial step?
Review
Review This paper presents a method that leverage task descriptors for multi-task learning. In the proposed method, an informed policy that takes in the task descriptor and the state is trained to maximize the expected return. At the same time, a RNN policy based on the history of states and actions is trained such that the RNN layers imitates the behavior of the feature extraction layers of the informed policy. In this way, the RNN policy is trained as if the task description is available. The experimental results show that the proposed method outperforms baseline methods that leverages the tasks descriptor. The proposed method seems novel and the experimental results show its benefits. However, there are some unclear points. Especially, “online adaptation” described in the introduction is not clear. I would like to ask the authors to clarify the following points: The experiment procedure is not clear to me. I understand that the informed policy and the RNN policy are trained on training tasks, but I’m not sure how the policy is adapted for test tasks. Both informed and RNN policies are further trained on test tasks? Does Figure 4 shows the learning curve during the training on the training tasks? If so, I recommend to add the learning curve on the test tasks to show the performance of adaptation. In page 7, I do not clearly understand this sentence: “Each model is trained on the training tasks, and the best model is selected on the validation tasks.” Were several models trained on training tasks? If so, how many models were trained? If I understand correctly, the policy is trained to maximized the expected return across the training tasks. If so, for clarify, I recommend to describe the expectation explicitly in Eq. (3), e.g, E_{\mu \sim p(\mu)} [ E_{s \sim p(s’|s, a), a \sim \pi(a|s)} [ … ] ] Caption of table 1 “Note that RNN does not \mu at train time.” <- something is wrong? How to sample action at the initial step in the test tasks? The RNN policy seems to require the previous action a_{t-1} to generate actions, but a_{t-1} is not available in the first time step.
ICLR
Title On Pre-training Language Model for Antibody Abstract Antibodies are vital proteins offering robust protection for the human body from pathogens. The development of general protein and antibody-specific pre-trained language models both facilitate antibody prediction tasks. However, there have been limited studies that comprehensively explore the representation capability of distinct pre-trained language models on different antibody tasks. To investigate the problem, we aim to answer several key questions in this paper, such as how pre-trained language models perform in antibody tasks with different specificity and how introducing specific biological mechanisms to the pre-training process can benefit the model. Additionally, we evaluate if the learned antibody pre-trained representations can be applied to real-world antibody problems, like drug discovery and immune process understanding. Previously, the lack of an available benchmark largely hindered the study of these questions. To aid in our investigation, we provide an AnTibody Understanding Evaluation (ATUE) benchmark. We comprehensively evaluate the performance of protein pre-trained language models by empirical study along with conclusions and new insights. Our ATUE and code are released at https://github.com/dqwang122/EATLM. 1 INTRODUCTION Antibodies are a type of protein that is useful for diagnosing and treating a variety of diseases, including SARS-CoV-2 (Zhu et al., 2022). It is crucial to understand the information contained in antibody sequences to develop effective therapeutic antibodies and advance our understanding of the immune system (Greiff et al., 2020; Lu et al., 2018; Yermanos et al., 2018). Recent advances in general Pre-trained Protein Language Models (PPLM) and specific Pre-trained Antibody Language Models (PALM) offer new possibilities for antibody-related tasks. For example, PPLMs have shown promising results in transferring learned representations to antibody tasks (Kim et al., 2021; Zaslavsky et al., 2022) and PALMs have been found to improve model performance in antibody paratope predictions (Leem et al., 2022). Despite these successes, few studies have thoroughly examined the capability of different pre-trained language models (e.g. general PPLMs and specific PALMs) on various antibody tasks, which hinders the development of better architectures for antibody discovery and modification. To investigate this problem, we compared the performance of the pre-trained protein language model ESM (Rives et al., 2021), the pre-trained antibody language model AntiBERT (Leem et al., 2021), a pre-trained antibody language model EATLM obtained by introducing antibody-specific mechanisms, and a model trained from scratch (No Pretrain) on three antibody tasks with varying levels of specificity. The result is illustrated in Figure 1. (∗Work was done when Danqing Wang was in Bytedance Research.) Here, specificity refers to the antibody's unique evolution processes, distinct from those of proteins, to obtain functionality such as the ability to bind antigen (the definition is discussed in detail in §3.1). We can see that while ESM performs well in tasks that are less antibody-specific, its performance decreases significantly in tasks that are more specific. Additionally, AntiBERT does not demonstrate a clear advantage over the non-pre-trained model in the high-specificity task. These results highlight the limitations of current pre-trained language models for antibody-related studies.
Using general PPLM representations directly may harm performance, and current pre-training strategies for PALMs may not fit the specific biological functions of antibodies. This emphasizes the need for a comprehensive model design guideline for various antibody tasks. Our main focus is to address the following questions: (I) How well will pre-trained language models perform on antibody tasks with varying specificity? Addressing of the question is mainly hindered by two challenges: the lack of a reliable antibodyspecific benchmark for performance evaluation and comprehensive studies of current PPLMs and PALMs. (II) Can incorporating biological mechanisms, specifically antibody-specific evolution, into the pre-training process provide additional benefits for antibody representation learning? This idea has been explored in several computational biology studies, which have demonstrated promising results in antibody-related tasks such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019). Then, it is interesting to know whether antibody representation learning can benefit from the incorporation of antibody-specific evolution information. (III) Are the pre-trained antibody representations useful in practical applications, such as drug discovery and immune process understanding? Antibodies are critical in drug development, and it is essential to determine whether pre-training representations can be beneficial for biologists to comprehend antibody functions or develop drugs. To investigate these questions, we first propose antibody study benchmark AnTibody Understanding Evaluation (ATUE). This is the first antibody benchmark with four real-world supervised tasks related to therapeutic antibody engineering, B cell analysis, and antibody discovery. These tasks cover a range of specificity levels to evaluate models on different aspects of antibody biological functions. Based on ATUE, we conduct empirical studies to investigate the representation ability of distinct pre-trained language models. To explore the impact of incorporating specific biological mechanisms in antibody pre-training, two objectives are introduced to tailor masked language modeling for evolution: (1) Ancestor germline prediction guides the model to discriminate the evolutionary relationship between antibody and ancestral sequences. (2) Mutation position prediction mimics hypermutation during the evolution. These methods are used to investigate the representation ability of antibody evolution-tailored language model. Finally, we take a close look at the SARS-CoV-2 antibody discovery to investigate the pre-trained representation under a real-world scenario. We have three main contributions in this study: • We created the first comprehensive antibody benchmark called ATUE to help with antibody application studies, which includes four real-world supervised tasks ranging from low to high specificity. We also introduce two new objectives for antibody pretraining that incorporate antibody-specific evolutionary information. • We made key observations for providing guidelines for better antibody representation. Firstly, PPLMs perform well on antibody tasks that have a high relationship with structure, but they perform poorly on tasks with high antibody specificity. Secondly, in most cases, PALMs perform as well as or even better than PPLMs with less pre-training data. 
Thirdly, PALMs can be improved by incorporating the evolution process, but the evolution information from MSAs does not always benefit antibody tasks. • We identified 11 potential SARS-CoV-2 binders that have highly identical sequences to existing therapeutic antibodies that bind to the virus, which could accelerate real-world antibody discovery. 2 RELATED WORK Our work focuses on researching the effectiveness of protein and pre-trained antibody language models for antibody-specific tasks. Below we review the representative existing methods. We list the details in Table 1. Pretrained Protein Language Models (PPLMs) There is an increasing interest in exploring largescale language models using protein sequences (Rao et al., 2019; Madani et al., 2020; Meier et al., 2021; Chen et al., 2022). These models have been shown to achieve state-of-art capacity in predicting protein structure and function. ProtTrans (Elnaggar et al., 2021) and ESM-1b (Rives et al., 2021) take individual protein sequences as input and adopt Transformer language models for pre-training, demonstrating that self-supervision is a promising paradigm for protein secondary structure, contact, homology predictions, and function prediction. To extract evolutionary information from protein sequences, Rao et al. (2021) proposed the MSA-transformer/MSA-1b model utilizing multiple sequence alignment (MSA) instead of a single query sequence as input. This model is superior to ESM1b for structure prediction, demonstrating evolution information can benefit protein representation learning. Despite the progress in the field, few studies reported their results on antibody tasks. Pretrained Antibody Language Models (PALMs) Encouraged by the success of PLMs in protein representation learning, series work seeks to learn antibody representations based on sequences of antibodies. AntiBERTy (Ruffolo et al., 2021) proposed the first antibody-specific language model, exploring a Transformer trained on 558M natural antibody sequences in the OAS database. Olsen et al. (2022b) train two language models for antibodies: A heavy chain version Ablang-H and a light chain version Ablang-L. The study reported transfer learning results on restoring missing residues of antibody sequences, which is a task similar to pre-training objectives. AntiBERTa (Leem et al., 2021) train the antibody language model on OAS and finetuning AntiBERTa for paratope position prediction, achieving state-of-the-art performance. Recently, Li et al. (2022) proposed an antibodyspecific language model and explored its performance in SARS-CoV-2 antigen binding, showing context-dependent representations of antibody sequences benefit binding prediction. 3 FRAMEWORK In this section, we first give a brief introduction to the antibody and its specific evolution. Then we propose the first antibody-specific benchmark (ATUE) composed of four tasks with different specificities. Finally, we implement several PPLMs and PALMs baselines and design an evolutionaware PALM to incorporate the biological mechanism into the pre-training process. 3.1 BACKGROUND Antibody Antibodies are vital proteins generated by the immune system to remove harmful foreign pathogens in the human body. they can specifically bind to antigens on the pathogen and recognize it. Antibodies are composed of two identical heavy chains and two identical light chains and form a large Y-shaped structure. Two tips on it contain highly variable loops, called Complementarity Determining Regions (CDR), which function for antigen binding. 
Antibody Specific Evolution Notably, the antibody evolution process is significantly different from that of proteins, providing a good opportunity for us to investigate the impact of general PPLMs on specific subdomains. To perform its protective function, the antibody sequence undergoes evolutionary selection to search for optimal patterns that can specifically recognize pathogens (Honjo & Habu, 1985). Deciphering the information stored in antibody sequences may benefit our understanding of disease and accelerate therapeutic antibody development (Greiff et al., 2020; Lu et al., 2018; Yermanos et al., 2018). During evolution, the random recombination of V/D/J-gene segments provides the initial diversity for the ancestor sequence (germline). Upon exposure to a pathogen, this sequence undergoes frequent sequence mutations to search for progeny antibody sequences with optimal binding specificity. In other words, gene recombination provides millions of germlines in the human body, and the germlines further mutate into a huge number of progeny antibodies. Thus, the ancestor relationship between an antibody and its corresponding germline, as well as the mutations it undergoes, together determine its unique biological functions. In brief, the evolutionary relationships between antibodies arise to gain new functions such as antigen binding. This is significantly different from protein evolution, which maintains certain functions across different organisms. We further illustrate this process in Figure 7 in §A.1. Unsupervised Antibody Corpus To obtain the evolutionary information of antibody sequences, we utilize the Observed Antibody Space (OAS), a database containing more than 1.5 billion natural antibody sequences (Kovaltsuk et al., 2018; Olsen et al., 2022a). The antibody sequences in the database have been precisely annotated with evolutionary and structural information, including the paired germline and CDR3 for each antibody. To pair the antibody with its germline used in the pretraining task, we used the annotated sequences provided in the OAS database. Further information on data processing can be found in §A.2. 3.2 ANTIBODY UNDERSTANDING EVALUATION (ATUE) We provide four biologically relevant downstream prediction tasks to serve as antibody benchmarks, covering four major application aspects for antibodies in the real world: therapeutic antibody engineering, disease diagnostics, antibody discovery, and B cell maturation analysis. The antibody specificity of these tasks ranges from low to high, offering scaled tasks with subdomain specificity for pre-trained language model evaluation. Detailed information is listed in Figure 2. All data are publicly available and used under the appropriate licenses. For each task, we focus on the following aspects and leave the details to the Appendix (§A.3 and §A.4): [Definition] The formal definition of the task and the understanding ability required. [Impact] The importance of the task in the biological area. [Dataset] The data source and size. [Specificity] The extent to which the task involves antibody-specific evolution characteristics distinct from general proteins. We use several classification metrics to evaluate the performance. Accuracy (ACC) calculates the ratio of correct predictions. Matthews Correlation Coefficient (MCC) is the correlation coefficient between true and predicted labels. F1 is the harmonic mean of precision and recall. AUC is the area under the ROC curve, which shows the performance at all classification thresholds.
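For reference, these metrics can be computed with scikit-learn as in the minimal sketch below; the dummy labels and the 0.5 decision threshold are our own illustration, not part of ATUE.

import numpy as np
from sklearn.metrics import accuracy_score, matthews_corrcoef, f1_score, roc_auc_score

# Dummy binary labels and predicted probabilities, for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.6, 0.8, 0.4, 0.3, 0.55, 0.1])
y_pred = (y_prob >= 0.5).astype(int)               # hard predictions at a 0.5 threshold

acc = accuracy_score(y_true, y_pred)               # ratio of correct predictions
mcc = matthews_corrcoef(y_true, y_pred)            # correlation between true and predicted labels
f1 = f1_score(y_true, y_pred)                      # harmonic mean of precision and recall
auc = roc_auc_score(y_true, y_prob)                # threshold-free, area under the ROC curve
print(acc, mcc, f1, auc)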
Antigen Binding Prediction is a binary sequence classification task to determine whether the CDR region of the antibody can bind to the specific antigen. [Impact] A better understanding of the binding affinity between antibodies and antigens can accelerate the affinity optimization of therapeutic antibodies. [Dataset] We collect the antigen binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. [Specificity] Low. All the antibody sequences in the dataset are derived from a single germline sequence, indicating the task is not related to antibody-specific evolution. Paratope Prediction This is a sequence labeling task to identify binding positions on the antibody sequence, predicting a 0/1 label for each residue of the CDR fragments. [Impact] The exploration of the paratope (binding positions between antibody and antigen) can help to understand the binding mechanisms of therapeutic antibodies. [Dataset] The paratope data is collected from Liberis et al. (2018) with 1,662 CDR segments on 277 antibodies. [Specificity] Medium. Only part of the antibodies in the database are derived from evolution. B Cell Maturation Analysis This is a 6-category classification task to distinguish the maturation stage of B cell antibody sequences. Each sequence belongs to one of {immature, transitional, mature, plasmacytes, memory IgD+, memory IgD-}. It requires the model to learn a representation sensitive to different maturation states. [Impact] It benefits the understanding of the mechanism of immune evolution, which is a critical biological process in the immune system affecting the function and antigen specificity of antibodies (Ghraichy et al., 2021; Meffre et al., 2000). [Dataset] We collect 88,094 sequences from Mroczek et al. (2014) with 6 maturation stages. [Specificity] High. Antibody evolution is highly coupled with B cell maturation (Meffre et al., 2000). Antibody Discovery This is a binary sequence classification task to distinguish which antibody is directly responsible for SARS-CoV-2 binding. The task is highly challenging in two respects: (1) Less than 1% of antibodies from SARS-CoV-2 patients are directly responsible for virus binding. (2) It is hard to get a reliable sequence-level classifier using unreliable and noisy individual-level labels. [Impact] Antibody discovery from the B cell repertoire has been widely recognized as an important approach to accelerate antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021), and has achieved great success for SARS-CoV-2 antibody discovery (Kovaltsuk et al., 2018; Cao et al., 2020; Shiakolas et al., 2022). [Dataset] We collected antibody sequences from 133 SARS-CoV-2 patients and 87 healthy individuals from OAS and followed the processing pipeline of Kim et al. (2021). Inspired by Zaslavsky et al. (2022), we match the high-ranked sequences with the sequences in the CoV-AbDab (Raybould et al., 2021) database, which have been proven to bind SARS-CoV-2 in wet-lab experiments. [Specificity] High. It is widely reported that antibodies derived from the same disease, such as SARS-CoV-2, share strong convergent germline signals (Galson et al., 2020). 3.3 EXPERIMENT SETUP Based on the antibody benchmark ATUE, we evaluate the performance of current pretrained language models on tasks with different specificity.
Furthermore, to investigate the benefit of introducing the biological mechanism, we incorporate evolution information as extra pretraining objectives for PALMs and propose EATLM. The detailed description of the objectives and the implementation can be found in §A.5. Current Pre-trained Language Models Existing antibody and protein language models are summarized in Table 1. Since the code and pre-training data of AntiBERTa are not released, we train a BERT model named AntiBERT on the full OAS database following the same setting as the original study. MSA-1b (Rao et al., 2021) takes protein-specific evolutionary sequences (Multiple Sequence Alignment, MSA) as the input. Because it is hard to align sequences between antibodies due to the diversity of CDR3, we take the germline and create pseudo-MSAs with depth 2. We add a linear layer on top of the language models and finetune the whole model on the downstream tasks. Evolution-aware Antibody Pretraining Method To incorporate the biological mechanism into the pre-training, we propose a model with evolution information: the Antibody EvoluTion-aware pretraining Language Model (EATLM). The antibody can be represented as A and the germline of the individual antibody can be represented as G. Typically, PALMs are trained with basic masked language modeling (MLM). Based on it, we design another two pre-training objectives to simulate the biological mechanism of antibody evolution. The evolutionary relationship between the antibody and its germline is twofold: (i) whether the antibody and the germline have an evolutionary relationship; (ii) how residues mutate from the germline to yield the specific antibody. Two evolution-related objectives are introduced to address these questions: Ancestor Germline Prediction (AGP) and Mutation Position Prediction (MPP). For ancestor germline prediction, we substitute the paired germline G with a random germline G′ from the batch with probability p. The model is made to distinguish the ancestor germline of the antibody by capturing the shared features. For mutation position prediction, the objective is to predict a 0/1 label for each token in the germline G, indicating whether this token has been mutated. For the antibody sequence S, we mask the mutation positions and predict these tokens. Hyper-parameters We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and a hidden size of 768. For each task in ATUE, we finetune the model with supervised data. We follow the standard split of Antigen Binding Prediction. For other tasks that do not provide a standard split, we use 10-fold cross-validation. Since our pre-training model learns the representation of the antibody sequence, we expand the CDR fragment to the full antibody by searching the biological database for therapeutic antibody engineering tasks. We also use the same Transformer architecture trained from scratch for each downstream task. This model is denoted as non-pretrain since it is not pre-trained on a protein/antibody database. Reproduction We conduct 10-fold validation on paratope prediction, B cell maturation analysis, and antibody discovery. For antigen binding prediction, we conduct three repeated runs with different random seeds. We report the average results and the standard deviation.
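To make the two evolution-related objectives concrete, the snippet below sketches how training examples could be constructed for AGP and MPP from an (antibody, germline) pair. It is a simplified illustration under our own assumptions (sequences already aligned to equal length, single-character amino-acid tokens, made-up example sequences); the actual EATLM implementation is described in §A.5.

import random

MASK = "<mask>"

def make_evolution_example(antibody, germline, germline_pool, swap_prob=0.5):
    """Build AGP/MPP targets for one aligned (antibody, germline) pair."""
    assert len(antibody) == len(germline), "assumes pre-aligned sequences"

    # MPP part 1: 0/1 label per germline token marking whether it mutated.
    mutation_labels = [int(a != g) for a, g in zip(antibody, germline)]

    # MPP part 2: mask the mutated positions of the antibody; the model must
    # recover the mutated residues without copying them from the germline.
    masked_antibody = [MASK if m else a for a, m in zip(antibody, mutation_labels)]

    # AGP: with probability swap_prob, replace the paired germline with a random
    # germline from the batch; the model predicts whether the pair is ancestral.
    if random.random() < swap_prob:
        shown_germline, is_ancestor = random.choice(germline_pool), 0
    else:
        shown_germline, is_ancestor = germline, 1

    return masked_antibody, shown_germline, mutation_labels, is_ancestor

# Toy usage with made-up sequences (not real antibodies).
ab, gl = "QVQLVQSGAEVKK", "QVQLVQSGTEVKK"
pool = ["EVQLVESGGGLVQ", "QVQLQESGPGLVK"]
print(make_evolution_example(ab, gl, pool))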
4 RESULTS AND ANALYSIS In this section, we present the experimental results and analysis of the representation capability of existing PPLMs, PALMs, and the EATLM method with evolutionary incorporation, using the ATUE benchmark. Additionally, we summarize our observations aiming to address the problems highlighted in the introduction. 4.1 MAIN RESULTS Antigen binding We evaluate the performance of the pre-trained language models on antibody binding and paratope prediction, which are less antibody-specific. The results in Table 2 indicate that PPLMs and PALMs perform similarly on these tasks, suggesting that PALMs can learn general protein representations comparable to PPLMs. Among different PALMs, Ablang-H outperforms Ablang-L and AntiBERT. This indicates that separate training for heavy and light chain sequences is beneficial for these tasks. Moreover, the introduction of AGP and MPP provides improvements in AUC and F1. Paratope prediction The results presented in Table 2 demonstrate that for paratope prediction, both PPLMs and PALMs can significantly boost the prediction accuracy over the non-pretrained model. However, PALMs do not exhibit a significant advantage over PPLMs. EATLM outperforms other models, particularly in terms of F1 and MCC, while other models exhibit high recall and low precision, indicating that they tend to predict more residues as binding sites. With the incorporation of mutation residue prediction, EATLM can focus on the specific mutated positions adapted to bind with the antigen. Among the two PPLMs, MSA-1b outperforms ESM-1 on F1 and MCC, which benefits from the structural information learned from MSAs. B Cell Analysis In this task, we investigate the ability of different pre-trained language models to distinguish between various B cell maturation states during evolution. The findings, as demonstrated in Table 2, indicate that PPLMs are not effective in discerning minor differences between B cell sequences, resulting in mediocre results. Both ESM-1 and MSA-1b perform significantly worse than randomly initialized models. MSA-1b, in particular, performs poorly among all pre-trained language models, implying that representations that excel in protein structure prediction may be detrimental to antibody-specific tasks. Conversely, all PALMs show promising results for the task. This may be due to the fact that general protein sequences have little correlation with the antibody-specific maturation process, so this feature cannot be captured during protein pretraining. Our EATLM significantly outperforms the other PALMs. This is because our model can effectively capture the evolution feature and better distinguish between B cells at different stages of maturation by explicitly modeling the biological mechanism. We conduct further analysis to figure out whether our EATLM successfully captures sequence characteristics during the evolutionary process. We examine the probability of predicting an antibody of class i as class j. The results shown in Figure 3 reveal that EATLM can easily classify immature B cells with an accuracy of 0.9, consistent with the biological finding that the CDR3 sequence length in immature B cells is significantly shorter than that of the other, more mature B cells (Ghraichy et al., 2021). From Figure 3, we can also see that our model tends to confuse B cell sequences with their previous or subsequent maturation stage, consistent with the biological process.
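The class-to-class probabilities behind this analysis can be obtained from a row-normalized confusion matrix, as sketched below; the dummy labels are our own illustration, not the paper's plotting code.

import numpy as np
from sklearn.metrics import confusion_matrix

stages = ["immature", "transitional", "mature", "plasmacytes", "memory IgD+", "memory IgD-"]

# Dummy ground-truth and predicted stage indices, for illustration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, len(stages), size=200)
y_pred = np.where(rng.random(200) < 0.7, y_true, rng.integers(0, len(stages), size=200))

cm = confusion_matrix(y_true, y_pred, labels=range(len(stages)))
row_sums = np.maximum(cm.sum(axis=1, keepdims=True), 1)
row_norm = cm / row_sums          # entry (i, j) = P(predicted class j | true class i)
print(np.round(row_norm, 2))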
Antibody Discovery We investigated the potential of PPLMs and PALMs in aiding the discovery of antigen-specific antibodies for real-world problems. To achieve this, we followed a two-step process similar to Zaslavsky et al. (2022). Firstly, we created a sequence classifier to differentiate SARS-CoV-2 antibodies using noisy individual-level labels. Secondly, we compared the highly-ranked sequences with true binding sequences in the CoV-AbDab (Raybould et al., 2021) database to determine if there are similarities. We used a 90% sequence identity threshold to determine the likelihood of biological functionality similar to the existing binders. The experimental design for this is outlined in §A.7. Figure 4 shows the cumulative sum of matched sequences in the order of the probabilities predicted by different pre-trained language models for the SARS-CoV-2-specific antibody discovery task. We can observe that PALMs outperform PPLMs in identifying potential binders, as the sequences predicted with high probability by PALMs match better with the existing binders. Moreover, among PALMs, EATLM significantly outperforms other models, with the red line indicating its performance. Initially, EATLM finds potential binders the fastest; it is briefly overtaken by Ablang-H, but eventually overtakes it again and converges. This suggests that EATLM is the most effective method for identifying all potential binders in this dataset. Furthermore, we list 11 potential binder sequences discovered by EATLM in Table 3. Without supervised labels, EATLM gives a high probability to 2 existing SARS-CoV-2 binding antibodies. Besides, EATLM suggests 9 potential sequences with high CDR-H3 sequence identity, indicating the potential for diverse-epitope antibody discovery and selection. These results demonstrate the potential of EATLM in therapeutic antibody discovery. To validate whether the antibody sequences with 90% sequence identity can indeed bind the same target, we investigate the 3D structure of the true binding antibody. Table 4 (predicted binder vs. existing binder, with epitope SARS-CoV-2 NTD and sequence identity) shows only a single residue difference between the predicted binder and the existing binder, suggesting the predicted binders are highly likely to interact with SARS-CoV-2. 4.2 HOW DOES THE EVOLUTION PRETRAINING TASK INFLUENCE THE REPRESENTATION? To comprehend the reasons for the better performance of EATLM on antibody-related tasks, we analyze the pre-trained representations. The objective of this analysis is to evaluate the effectiveness of the evolution-aware pre-training strategies from two perspectives: (1) Does the pre-trained representation of antibodies reflect their ancestor relationship? (2) Does the specificity of antibodies get captured by the evolution objective? Ancestor Germline Visualization We perform UMAP visualization analyses in Figure 5. First, we observe that antibodies evolved from the same germline are nicely clustered together (Figures 5a and 5b), indicating the learned embedding encodes germline information. Besides, sequences with similar scales of evolutionary distance tend to cluster together, and a clear gradation of evolutionary distance can be observed in Figures 5c and 5d. The visualization provides a sanity check for the ability of EATLM to extract the sequence information of antibodies. Accuracy of Mutation Position Based on the specific evolution process described in §3.1, we can see that the mutations introduced during evolution bring specificity to the antibody.
Thus, we explore the model's ability to predict mutated residues from the masked tokens, which reflects the specificity features the model captures. We find that although AntiBERT can predict with an accuracy of 0.889 on all positions, it fails on mutation positions with an accuracy of 0.031. In contrast, EATLM achieves an accuracy of 0.443 on mutation positions, which indicates that the model captures the specificity information. Note that during the MPP training, we mask the mutation positions on antibody sequences, which differ from the germline. Thus, the model cannot get the mutated residues from the germline directly. The only way is to learn the underlying mutation rules. The full results are shown in Table 8 in the Appendix. 4.3 KEY OBSERVATIONS The performance of pre-trained language models is highly dependent on the specificity of the task. In tasks with low antibody-specificity, PPLMs perform similarly to PALMs, indicating that using general protein representations from PPLMs is an effective way to transfer learning in these tasks. On medium-specificity tasks such as paratope prediction, PALMs show their advantage and outperform PPLMs. However, for tasks with high specificity, PPLMs have significantly lower performance, suggesting that general pre-trained protein models are insufficient for antibody-specific representation learning. Additionally, incorporating protein evolution information does not always benefit antibody tasks, especially those that require antibody evolution information, as shown by the 20% decrease in performance observed with MSA-1b compared to the model without pre-training. This finding is consistent with the biological understanding that the mechanism of antibody evolution is significantly different from that of proteins.

Figure 6: Performance summary of various pre-trained language models (performance increase (%) vs. task specificity: low, medium, high; methods: No Pretrain, ESM-1, AntiBERT, EATLM, MSA-1b, Ablang-H, Ablang-L).

Incorporating the biological evolution mechanism into PALMs generally benefits antibody prediction tasks. The inclusion of evolution-related training objectives assists in identifying mutation positions on antibodies, which is a distinguishing feature from the germline. Notably, the performance increase of EATLM in comparison to other PALMs is linked with the level of task specificity. The ablation study shows that removing the evolution-related pretraining objectives leads to decreased performance, confirming their contribution to the prediction task. Further research in this direction is promising and could offer more in-depth insights. Antibody pre-trained representations are helpful for real-world drug discovery. By utilizing the language model, we predict the likelihood of each antibody binding with SARS-CoV-2. Despite lacking precise sequence-level labels, we successfully identify 11 promising antibody binders. 5 CONCLUSIONS AND LIMITATIONS In this paper, we conduct a detailed investigation into the effects of pre-trained protein and antibody language models on various antibody tasks. To facilitate research in the antibody and machine learning fields, we provide ATUE, consisting of four important antibody tasks from four different biological categories with varying levels of antibody specificity. However, there are certain constraints to our research. Firstly, due to the scarcity of data, the diversity of tasks in our ATUE is limited.
As more data becomes available, we anticipate expanding our benchmark to include a greater range of diseases and larger datasets. Additionally, we did not examine any 3D structure information during antibody pre-training. As antibody structures offer more information than sequences alone, such as geometry, incorporating structural information in future studies may lead to improved results.

ETHICS STATEMENT
This research, involving the use of pre-existing data and computational methods, did not involve any human or animal subjects, and therefore no ethical approval was required. The authors followed all applicable ethical standards and guidelines for data analysis and reporting. All data used in this study were obtained from publicly available sources, and proper citation and attribution have been given. The authors have made efforts to ensure that the research presented in this paper does not infringe upon any existing copyrights or intellectual property rights.

ACKNOWLEDGEMENT
We thank members of ByteDance Research for discussion, and Zaixiang Zheng and Yi Zhou for useful writing suggestions. Hao Zhou is supported by the Vanke Special Fund for Public Health and Health Discipline Development, Tsinghua University (NO.20221080053), and the Guoqiang Research Institute General Project, Tsinghua University (No. 2021GQG1012).

A APPENDIX
A.1 ANTIBODY SPECIFIC EVOLUTION
Antibodies, composed of two identical heavy chains and two identical light chains, form a large Y-shaped structure, where the two tips are responsible for pathogen binding. Antibody evolution, described by sequence-sequence relationships between ancestor and progeny antibodies, reflects antibodies' key antigen-binding function (Honjo & Habu, 1985). During antibody evolution (Figure 7), the initial diversity is encoded into the ancestor sequence through random recombination of V-, D-, and J-gene segments. Upon exposure to a pathogen, the sequence undergoes frequent mutations to search for progeny sequences with optimal binding specificity. Sequence evolution analysis has been employed by many computational biology studies and shows promising results in antibody-related tasks, such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019). Importantly, antibody evolution is significantly different from that of proteins. Antibodies derive from only hundreds of thousands of ancestor sequences, the so-called germlines. To bind tens of millions of diverse antigens, antibodies need to mutate from the ancestor sequences to gain new functions (Figure 7). Therefore, the non-conserved (mutated) amino acids play important roles in structure and function. On the contrary, the conserved (non-mutated) amino acids in proteins determine structure and function. During protein evolution, evolutionary pressure to maintain protein structure and function leads to the conservation or co-evolution of residues located in the structural folding core or at the binding interface. Diverse methods have been developed to extract this co-evolution information from conserved amino acid sequences for structure and function prediction, such as AlphaFold (Jumper et al., 2021). In brief (Figure 7), the specificity of antibody evolution, distinct from that of proteins, can be defined by two main features: (i) the ancestor germlines; (ii) the mutated amino acids relative to the germlines.
(Figure 7 contrasts protein evolution with antibody evolution.)

A.2 DATA PROCESSING DETAILS
Pairing Antibody with Germline For germline annotation in the pre-training task, we used the annotated germline sequences provided in the OAS database (Kovaltsuk et al., 2018). For downstream benchmark tasks like B-cell classification, therapeutic antibody engineering, and disease diagnosis, we followed the methods described in the OAS database paper. IgBLAST, an immunoinformatic benchmarking tool for the analysis of B-cell antibody repertoires, was used for germline annotation (Ye et al., 2013). The antibody nucleotide-containing FASTA file was aligned to the germline and translated to amino acids using IgBLASTn. The antibody amino-acid sequence was aligned using IgBLASTp. The germline databases for human patients used ImMunoGeneTics (IMGT) germline sequences derived from Lefranc et al. (1999). For each antibody, multiple germline sequences can usually be obtained, and only the single sequence with the highest alignment confidence score was chosen.

Pre-training Data Processing We downloaded the OAS Oct 2021 version from its website and removed duplicate sequences. To avoid data leakage, we cluster sequences based on the CDR3 sequence and filter each cluster by 70% identity over the whole sequence using Linclust (Steinegger & Söding, 2018). Then, we shuffle the dataset and split it into 100k-size chunks. The last chunk is used as the validation set. The dataset contains 20,245,249 sequences, of which 45,249 are used for validation.

A.3 ATUE DETAILS
We summarize the tasks used in ATUE in Table 5 and discuss each task in detail in this section.

Antigen Binding Accurate antigen-binding prediction approaches could allow significantly more efficient discovery of antibodies with higher affinity. Machine learning methods have already achieved some success in antibody binding capacity optimization. We collect the antigen-binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. The original dataset only has CDR3 fragments, and we extend them to the full antibody sequences. For cross-validation, we split the dataset by antibody sequences to ensure that no antibody sequences overlap between the 90% training and 10% validation splits.

Paratope Prediction The paratope is the set of antibody residues involved in antigen binding. The ability to accurately map the paratope can provide detailed knowledge about the binding mechanism and accelerate antibody discovery. 1D sequence-based deep learning methods have been employed for paratope prediction. The paratope data is collected from Liberis et al. (2018), with 1,662 CDR segments on 277 antibodies. Each antibody contains three CDR fragments (CDR1, CDR2, and CDR3) in the heavy chain and three CDR fragments in the light chain. We also search for the full sequence of each antibody and use the whole sequence as input. For cross-validation, we split the dataset by antibody sequences to ensure that no antibody sequences overlap between the 90% training and 10% validation splits; a sketch of this grouped split is given below.
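To illustrate the grouped 90/10 split used for these tasks, the following is a minimal sketch of one reasonable implementation using scikit-learn's GroupShuffleSplit; the column names and data frame layout are illustrative assumptions, not the authors' exact code.

```python
# Minimal sketch of a 90/10 split grouped by antibody sequence, so that the
# same antibody never appears in both training and validation.
# Illustrative only; column names and use of scikit-learn are assumptions.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

def grouped_split(df: pd.DataFrame, seed: int = 0):
    """df has one row per example with a 'sequence' column (full antibody)."""
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=seed)
    train_idx, val_idx = next(splitter.split(df, groups=df["sequence"]))
    return df.iloc[train_idx], df.iloc[val_idx]

# Toy usage; repeated splits with different seeds (or a GroupKFold) give the
# folds used for cross-validation.
toy = pd.DataFrame({"sequence": ["EVQLV", "QVQLQ", "EVQLV", "DIQMT"],
                    "label": [1, 0, 1, 0]})
train_df, val_df = grouped_split(toy, seed=0)
```

Grouping by the full sequence guarantees that identical antibodies land on the same side of the split, which is the leakage the text above guards against.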
B Cell Analysis We formulate a 6-category classification task for B cell maturation analysis, with the categories {immature, transitional, mature, memory IgD+, memory IgD-, plasmacytes}. The analysis of B cell maturation plays an important role in understanding the mechanisms underlying B cell responses in the immune system (Ghraichy et al., 2021; Meffre et al., 2000). The order of B cell types follows the evolutionary process in the immune system, from an immature state to a transitional state, and finally to a memory B cell. Both memory IgD- and IgD+ belong to memory B cells with different isotypes, and they have a high affinity to foreign antigens. Among the other categories, the plasmacyte (PC) sequences also have some binding affinity. It is widely reported that changes in antibody sequence patterns correlate with B-cell maturation. Therefore, we use this task to evaluate the representation learning capacity of the language model. We collect 88,094 sequences from Mroczek et al. (2014), which were extracted from the peripheral blood of healthy adults and cover six types of B cells with different maturity and antibody sequences. The distribution of the various types of B cells in the dataset is shown in Table 6.

Antibody Discovery Antibody discovery from the B cell repertoire has been widely recognized as a novel trend to improve the efficiency of antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021). However, previous studies rely heavily on expensive wet-lab experiments (Cao et al., 2020; Shiakolas et al., 2022). Deep learning-based methods have shown the potential to help antibody discovery by reducing cost and increasing efficiency (Widrich et al., 2020; Wang et al., 2022). Here, we ask whether pre-trained models can benefit real-world problems and enable fast-track discovery of neutralizing SARS-CoV-2 antibodies. In the first step, we develop a sequence classifier to distinguish which antibody sequence among the numerous sequences is responsible for the recognition of SARS-CoV-2. This task is highly challenging since we can hardly get sequence-level disease labels indicating whether an antibody sequence is related to the disease. Thus, we follow the practice of Roskin et al. (2020) and Zaslavsky et al. (2022) to use the individual label as a rough sequence label and train a sequence-level predictor. Then, with the help of the sequence-level predictor, we can give each sequence its most likely label to help antibody discovery; the predictor's reliability has been verified by the excellent results on individual-level prediction, and it may accelerate the discovery of new antibody sequences. We follow the conditions of Kim et al. (2021) to filter SARS-CoV-2 antibody data from the OAS database. The basic condition is 'Chain = heavy; Isotype = IGHG; BSource = PBMC; Species = human; Vaccine = None'. We further add the condition 'Unique Sequences >= 10000'. For health/SARS, we set the 'Disease' field to 'None' and 'SARS-CoV-2', respectively. We then obtain 87/133 patient profiles for each type. To make a balanced dataset, we limit the number of healthy profiles and mix the healthy profiles with the SARS-CoV-2 ones. For cross-validation, we randomly split the dataset by profiles 10 times: 90% for training and 10% for validation. We further select sequences with top-100 redundancy to make the positive labels more accurate.

A.4 QUANTITATIVE ANALYSIS OF ATUE TASK SPECIFICITY
It is important to include statistical significance tests of the antibody-specific features for the antibody functional tasks we propose in the ATUE benchmark. According to the evolution process shown in Figure 7, the specificity of antibody evolution, distinct from that of proteins, can be defined by two main features: (i) the ancestor germlines; (ii) the mutated amino acids relative to the germlines.
We implemented statistical significance tests of (i) ancestor germline subtype usage and (ii) the number of mutated amino acids in antibodies against the labels of the downstream tasks in ATUE to quantitatively assess the task specificity (a code sketch of such a test is given at the end of this subsection). The analysis is summarized in Table 7. Generally, it clearly shows that the ATUE benchmark comprises antibody tasks with different scales of antibody specificity for later modeling analysis. Moreover, these features are used for the statistical analysis of task specificity and pre-training objectives in our study.

Antigen Binding In the antigen binding dataset, both antigen-binding and non-binding sequences share the same germline subtype (IGHV3.1) (Figure 8A) as well as the same number of germline mutations (Figure 8B). Therefore, neither of the two antibody-specific features shows a significant distribution difference between data with different labels, demonstrating that antigen binding is a task with low antibody specificity.

Paratope Prediction For the paratope prediction task, we first evaluate the difference in germline subtype distribution between sequences with different numbers of binding sites (Figure 9A). A Kruskal-Wallis test yields a p-value of 0.296, suggesting the difference in germline subtype usage is not statistically significant. Also, we find that binding sites map to significantly more germline mutations than non-binding sites, which is consistent with the definition of antibody specificity (Figure 9B). One out of the two antibody-specific features shows significant distribution differences between data with different labels. Therefore, we define this task as a medium-specificity task.

B Cell Analysis As shown in Figure 10, the distribution of germline usage as well as the number of germline mutations are significantly different between antibodies in B cells at different developmental stages. This observation is highly consistent with previous studies (Mroczek et al., 2014; Ghraichy et al., 2021). Since both antibody-specific features show significant distribution differences, this task is defined as a high-specificity task.

SARS Antibody Discovery Antibodies in SARS patients and healthy individuals show a significant difference in their germline subtype usage and the number of germline mutations (Figure 11). This observation is highly consistent with previous studies showing that SARS antibodies are convergent among patients (Galson et al., 2020). Since both antibody-specific features are highly significant, this task is defined as a high-specificity task.
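To make the significance testing above concrete, the following is a minimal SciPy sketch of testing whether the number of germline mutations differs across task labels with a Kruskal-Wallis test. The column names, the position-wise mutation count, and the handling of unknown 'X' residues are simplifying assumptions (the sketch assumes antibody and germline are already aligned position by position), not the authors' analysis script.

```python
# Minimal sketch: does the number of germline mutations differ across labels?
# Assumes pre-aligned antibody/germline strings; 'X' marks unknown residues.
import pandas as pd
from scipy.stats import kruskal

def mutation_count(antibody: str, germline: str) -> int:
    """Count positions where the aligned antibody differs from its germline."""
    return sum(a != g for a, g in zip(antibody, germline) if g != "X")

def specificity_test(df: pd.DataFrame) -> float:
    """df has 'antibody', 'germline', and 'label' columns; returns the p-value."""
    df = df.assign(n_mut=[mutation_count(a, g)
                          for a, g in zip(df["antibody"], df["germline"])])
    groups = [grp["n_mut"].to_numpy() for _, grp in df.groupby("label")]
    _, p_value = kruskal(*groups)
    return p_value
```

A small p-value indicates that mutation load separates the task labels, i.e., the task is more antibody-specific in the sense defined above; the same template applies to germline subtype usage with a categorical test.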
A.5 MODEL TRAINING DETAILS
An antibody can be represented as $A = \{a_1, a_2, \cdots, a_m\}$ and the germline of an individual antibody can be represented as $G = \{g_1, g_2, \cdots, g_n\}$, where $m$ and $n$ are the lengths. Each token $a_i$ or $g_j$ in the sequence is called a residue and belongs to the amino acid set $\mathcal{A}$, which includes the 20 common amino acids plus a residue 'X' indicating that the residue is unknown (mostly in the germline). Typically, antibody PLMs are trained with the basic masked language modeling objective $l_{MLM}$ on the antibody sequence $S = A = \{a_1, \cdots, a_m\}$.

A.5.1 EVOLUTION-AWARE PRETRAINING
In order to incorporate the evolutionary information into the pre-training, we pair the antibody sequence $A$ with its germline $G$ and concatenate them into a long sequence with a special token '[SEP]' as the delimiter: $S = \{s_1, \cdots, s_{m+n+1}\} = \{a_1, \cdots, a_m, \text{[SEP]}, g_1, \cdots, g_n\}$. Thus, we optimize the MLM objective on the long sequence $S$:
$$l_{MLM} = -\frac{1}{|M|} \sum_{i \in M} \log p(s_i \mid S_{\setminus M}), \quad (1)$$
where $M$ is the index set of masked tokens. This objective helps the model learn the basic residue distribution of antibody sequences. Besides, it can also capture the interaction between residues of the antibody and its germline.

Ancestor Germline Prediction The ancestor relationship between the antibody and its germline determines the shared biological functions obtained in evolution. Antibody sequences with similar residues that evolved from different germline sequences may have different biological functions. When stimulated by a foreign antigen, the common ancestor germline evolves into various antibody sequences. Similar antibody sequences may have different germline sequences, which will affect their biological functions. Thus, the aim of this task is to determine whether the antibody has an evolutionary relationship with the given germline. During training, we substitute the paired germline $G$ with a random germline $G' = \{g'_1, \cdots, g'_n\}$ from the batch with probability $p = 0.3$. The new sequence is denoted as $S' = \{a_1, \cdots, a_m, \text{[SEP]}, g'_1, \cdots, g'_n\}$ and the training loss can be described as:
$$l_a = -\log p(y \mid S'), \quad (2)$$
where $y \in \{0, 1\}$ indicates whether the noisy germline $G'$ is the ancestor of the antibody $S$. This objective helps the model distinguish the ancestor germline of the antibody by capturing the shared features.

Mutation Position Prediction The somatic hypermutations on the germline further give progeny antibodies the specificity of binding to a specific antigen. In order to model this specificity, this task focuses on predicting the mutation positions and the mutated residues. Specifically, for each token $g_j$ in the germline $G$, the target is to predict a label $y_j \in \{0, 1\}$ indicating whether this token has been mutated. For the antibody sequence $S$, we mask the mutation positions and predict these tokens. The objective can be formalized as:
$$l_m = -\frac{1}{n} \sum_{j \in \{1, \cdots, n\}} \log p(y_j \mid S_{\setminus M'}) - \frac{1}{|M'|} \sum_{i \in M'} \log p(a_i \mid S_{\setminus M'}). \quad (3)$$
Here, $M'$ is the set of ground-truth mutation positions, and we mask these tokens on the antibody sequence. This task is more difficult than MLM, which masks tokens uniformly across the whole sequence, because the tokens at the mutation positions of $A$ get less information from the germline compared with the residues shared between the antibody and the germline. By optimizing this objective, the model learns to capture the specificity obtained from the somatic hypermutation in the evolutionary process.
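To make Eqs. (1)-(3) concrete, below is a minimal PyTorch-style sketch of how the three losses could be computed from an encoder's outputs. The tensor names, head layout, and single-function packaging are our illustrative assumptions, not the released EATLM code.

```python
# Illustrative sketch of the three EATLM pre-training losses (Eqs. 1-3).
# Shapes: B = batch, L = len(antibody [SEP] germline), Lg = germline length,
# V = vocabulary size. The three heads (token, mutation, ancestor) on top of
# a Transformer encoder are assumed for illustration.
import torch
import torch.nn.functional as F

def eatlm_losses(token_logits,      # (B, L, V) predictions for masked tokens
                 mutation_logits,   # (B, Lg)   "is this germline position mutated?"
                 ancestor_logits,   # (B,)      "is the paired germline the ancestor?"
                 target_tokens,     # (B, L)    original token ids of S
                 masked_positions,  # (B, L)    bool, True where tokens were masked
                 mutation_labels,   # (B, Lg)   1 if the germline position is mutated
                 ancestor_labels):  # (B,)      1 if the germline is the true ancestor
    # Eq. (1), and the second term of Eq. (3): recover the masked residues.
    l_mlm = F.cross_entropy(token_logits[masked_positions],
                            target_tokens[masked_positions])
    # Eq. (2): ancestor germline prediction (AGP).
    l_agp = F.binary_cross_entropy_with_logits(ancestor_logits,
                                               ancestor_labels.float())
    # First term of Eq. (3): mutation position prediction (MPP).
    l_mpp = F.binary_cross_entropy_with_logits(mutation_logits,
                                               mutation_labels.float())
    return l_mlm, l_agp, l_mpp
```

Note that in the training schedule described below, the MLM objective is optimized first and the two evolution-related objectives are used for further pre-training, so the three losses would not necessarily be summed in a single step.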
A.5.2 IMPLEMENTATION DETAILS
We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. The total number of parameters is 86M. We use the Adam optimizer (Kingma & Ba, 2015) with a maximum learning rate of 2e-4 and 24,000 warm-up steps. The maximum length is set to 400 since most antibody sequences are shorter than 180. We first pre-train our model with the MLM objective. During pre-training, 15% of tokens are randomly selected, with 80% masked, 10% replaced, and 10% kept. Then we conduct further pre-training on the two antibody-related tasks with a smaller learning rate of 1e-5. For each task in ATUE, we finetune the model with supervised data. We follow the standard split for Antigen Binding Prediction. For the other tasks, which do not provide a standard split, we conduct 10-fold cross-validation and report the average results. Since our pre-training model learns the representation of the antibody sequence, we expand the CDR fragments to full antibodies by searching the biological database for the therapeutic antibody engineering tasks. For finetuning, we limit the maximum number of epochs to 30 and use the Adam optimizer with a maximum learning rate of 3e-5. We use the mean representation over the 12 layers as the sequence representation.

The model is trained for 108,000 steps and reaches a 0.9606 token accuracy on the MLM task. It is then further pre-trained on AGP and MPP. The model quickly converges on AGP and reaches a 0.99 accuracy on ancestor germline prediction, because more than 80% of residues are shared between the antibody and its germline. For MPP, it can identify the mutation positions with an accuracy of 1.000 and obtains a 0.442 accuracy on predicting the residues at the mutation positions (EATLM w/o AGP). This means the model can easily find the mutation positions through self-attention between the antibody and germline, but it is still difficult to predict which residue a position will mutate to. We assume this is because the ancestor germline can undergo different somatic hypermutations and yield various progeny antibodies, resulting in different valid mutations at the same position. We also compare this mutation accuracy with a model without MPP, which is trained only with MLM on the concatenation of the antibody and its germline. With a high prediction accuracy of 0.889 on all positions, it achieves only a 0.031 accuracy on the mutated positions. This implies that masking across all positions of the sequence yields accurate predictions of the shared residues but hardly captures the mutation information. We also apply AGP and MPP to further train the baseline model AntiBERT. The pre-training results are shown in Table 8. We find that without the concatenation of the antibody and its germline, it is difficult to predict the ancestor relationship, and AntiBERT also underperforms EATLM on MPP.

Negative sampling ratio We tried ratios of 0.1/0.3/0.5/0.75 and found that this ratio has little influence on performance and convergence speed. As discussed above, the model quickly converges on AGP and reaches an accuracy of 0.99.

Finetuned Protein Language Models and Larger Architecture We pre-train our method with a larger architecture and compare it with ESM-1b, which also has 650M parameters. We also further pre-trained the ESM models to transfer them to the antibody domain. After that, we evaluate them on the antigen binding and paratope prediction tasks. The results are shown in Table 9. The results show that the larger architecture does show an advantage in terms of performance improvement: for antigen binding, ESM-1b performs better than ESM-1. However, in paratope prediction, it performs worse. In addition, for ESM, fine-tuning on the antibody dataset may cause overfitting, leading to a decrease in performance on all three tasks.

A.6 LIMITATION ABOUT EATLM
First, EATLM does not use any 3D structure information during pre-training. As a special subgroup of proteins, antibodies have structures that provide much more information, such as geometry, than sequences alone. In the future, incorporating structure information into antibody pre-training may improve the results. However, the amount of data available for antibody structures is dramatically smaller than that for antibody sequences. The largest dataset of antibody structures contains only thousands of high-resolution 3D structures, while the number of antibody sequences is in the billions.
Using structure prediction methods like AlphaFold may help to bridge the gap between sequences and structures. Second, EATLM requires the germline as input for downstream tasks, which slows down prediction.

A.7 NEW SARS BINDER DISCOVERY
The main challenge for disease diagnosis is to distinguish the disease-related antibodies from the millions of antibody sequences in an individual profile, as stated in Section A.3. Here, with the help of a sequence-level predictor, we can give each sequence its most likely label to help antibody discovery; the predictor's reliability has been verified by the excellent results on individual-level prediction, and it may accelerate the discovery of new antibody sequences.

SARS Sequence-level Predictor We first train a sequence-level predictor for SARS-CoV-2. The results are shown in Table 10. Compared with Figure 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for antibody discovery. This is mainly due to the noisy sequence-level labels.

Figure out SARS Binders As shown in Table 3 in the main body, we find 2 true SARS binders and 9 potential binders with the help of EATLM. Specifically, we first use our sequence-level predictor to obtain a probability score for each sequence in the SARS dataset. Then we select the sequences with high-ranked scores (probability > 0.5) and compare them with the public CoV-AbDab database (Raybould et al., 2021; http://opig.stats.ox.ac.uk/webapps/covabdab/), which contains data on published/patented antibodies known to bind SARS-CoV-2. Since the CDR3 fragment in the heavy chain is the most relevant to antibody-antigen binding, we calculate the edit distance between the CDR3 fragments of the heavy chains (CDR-H3) and those of the known binders, and use a threshold of 85% similarity as the sequence identity; a code sketch of this matching step is given below. An 85% Hamming-distance similarity threshold for B cell antibody sequence clustering (identifying similar B cell antibody sequences responding to the same antigen/epitope) was previously suggested by Gupta et al. (2017). This method was then widely used for B cell antibody repertoire analysis in different studies (Montague et al., 2021; Wang et al., 2022).

SARS Binder Analysis To provide a more intuitive analysis of the similarity between our predicted antibodies and true SARS-CoV-2 binders, we investigate the 3D structure of the true binding antibodies and the mutation sites of our predicted sequences on the corresponding structure. The high-resolution structure of true binding antibody #3 in Table 3 with SARS-CoV-2 is shown in Figure 13 (PDB code: 7N62). The interaction interface between the antibodies and the SARS-CoV-2 spike/RBD is shown in Figure 3 in the main body, with the CDR-H3 shown in orange. Only a single residue, highlighted in red, differs between the predicted binder and the true binder. These differing residues do not localize to the direct binding site or the CDR-H3 folding core, suggesting the sequence difference is unlikely to affect the antibody-virus interaction. Furthermore, we found that the epitopes of the 11 identified SARS-CoV-2 antibodies cover a wide range of structures, from the traditional RBD domain to novel non-RBD epitopes like S2 and NTD, as shown in Table 3. This result shows our method enables diverse-epitope antibody discovery.
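Below is a minimal sketch of the CDR-H3 matching step described above: it computes a normalized edit-distance identity between a predicted binder's CDR-H3 and known CoV-AbDab CDR-H3s and flags hits above the chosen threshold. The pure-Python Levenshtein routine and the function names are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: match predicted CDR-H3 sequences against known
# CoV-AbDab binders by normalized edit-distance identity (threshold 0.85).
def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def identity(a: str, b: str) -> float:
    """Sequence identity as 1 - edit_distance / length of the longer sequence."""
    return 1.0 - edit_distance(a, b) / max(len(a), len(b))

def is_hit(pred_cdrh3: str, known_cdrh3s, threshold: float = 0.85) -> bool:
    """A predicted binder counts as a hit if any known binder is >= threshold identical."""
    return any(identity(pred_cdrh3, k) >= threshold for k in known_cdrh3s)
```

Counting hits while walking the candidate list in decreasing order of predicted probability yields cumulative matching curves of the kind shown in Figure 4.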
Probability Threshold Sensitivity In order to investigate the influence of the threshold used to determine the potential binders, we try different thresholds in Table 11 (columns: Threshold, Total, Hit, Hit rate (%)). Here, the probability threshold means that if the sequence predictor gives a probability higher than the threshold for a sequence, the sequence is viewed as a potential binder. If the predicted binder has a sequence similarity higher than 85% with the existing binders in CoV-AbDab, we view it as one hit. As the threshold increases, the hit rate correspondingly increases from 0.528% to 0.562%, indicating that our model may enable priority selection of SARS-CoV-2 antibodies and reduce experimental costs.

Sequence Similarity Sensitivity In previous work, two antibodies with CDR-H3 similarity over 85% can be viewed as similar and have a high probability of sharing the same functionality. Here we also check the influence of different similarity thresholds on binder matching. The results are shown in Figure 14. Here, we fix the probability threshold at 0.5. As we can see, the baselines have similar trends at all thresholds. If we relax the threshold, there will be more matching sequences. However, the predictors will have less advantage over a random ordering, which indicates that the ranking matters less if we relax the similarity threshold.

The Potential of New Binder Discovery During the training of our sequence-level predictor, we have no reliable ground-truth labels, which means that the model never knows which sequences can bind to SARS-CoV-2 in a real-world scenario. However, the model can learn from the noisy data and rank the real SARS binders with high probabilities. A sequence identity of 1 means that the CDR-H3 fragment can be found directly in the CoV-AbDab database, which implies that the sequence has been verified by wet-laboratory testing. The other sequences with an identity over 90% are thought to have binding performance similar to existing binders, indicating that they are promising SARS binders that can help the discovery of therapeutic antibodies for SARS-CoV-2.

A.8 EXTENDED STUDY FOR DISEASE DIAGNOSIS
It would be interesting to see whether our sequence classifier can be used for other applications, such as disease diagnosis. Each human is estimated to maintain about 10^8 to 10^10 distinct antibody sequences, constituting an informative encyclopedia recording past and present health and disease. Interpreting the patterns of these sequences has already proved useful in disease diagnosis and allows us to assess many infectious diseases without expensive laboratory testing. However, it is difficult to distinguish which antibody sequence among the numerous sequences is responsible for the recognition of a specific antigen, which hinders the discovery of antibodies for diseases (Zaslavsky et al., 2022; Lu et al., 2018; Greiff et al., 2020). Benefiting from recent high-throughput sequencing, we can obtain millions of antibody sequences from an individual human. At the same time, we can get a disease label that indicates whether the individual is infected by the disease. The main challenge is that we can hardly get sequence-level disease labels indicating whether an antibody sequence is related to the disease. Thus, we follow the practice of Roskin et al. (2020) to use the individual label as a rough sequence label and train a sequence-level predictor. Then we use this predictor to score the sequences of an individual profile and take the trimmed mean score as the individual score; a sketch of this aggregation is given below.
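The following is a minimal sketch of the individual-level aggregation just described: per-sequence probabilities from the already trained sequence-level predictor are combined into one profile score with a trimmed mean. The 10% trimming fraction, the 0.5 decision threshold, and the function names are illustrative assumptions, not the paper's exact settings.

```python
# Illustrative sketch: aggregate noisy per-sequence probabilities into a single
# individual-level score with a trimmed mean (robust to mislabeled sequences).
import numpy as np
from scipy.stats import trim_mean

def individual_score(sequence_probs, proportion_to_cut: float = 0.1) -> float:
    """sequence_probs: predicted probabilities for all sequences of one profile.
    proportion_to_cut is an assumed example value, not the paper's setting."""
    return float(trim_mean(np.asarray(sequence_probs), proportion_to_cut))

def diagnose(profile_probs, threshold: float = 0.5) -> bool:
    """Classify a profile as diseased if its trimmed-mean score exceeds the (assumed) threshold."""
    return individual_score(profile_probs) > threshold
```

Because the trimmed mean discards the most extreme per-sequence scores, a handful of mislabeled sequences has little effect on the individual-level decision, which is the robustness property noted in Section A.8.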
We use the same data processing as for Antibody Discovery, described in Section A.3. For health/SARS/HIV/Ebola/Allergy/SLE/MS, we set the 'Disease' field to 'None', 'SARS-CoV-2', 'HIV', 'Ebola', 'Allergy', 'SLE', and 'MS', respectively. We then obtain 87/133/51/14/12/8/8 patient profiles for each type. We also do 10-fold cross-validation and select sequences with high redundancy.

Disease Classification We use all these disease profiles to build a 7-way (Q7) classification task for disease diagnosis. Previous biological studies mainly use this multi-class task for disease diagnosis (Zaslavsky et al., 2022; Wang et al., 2022), highlighting that discriminatory power among different diseases is important for disease diagnosis. The results are shown in Table 12. We found that both PPLMs and PALMs show results comparable to the randomly initialized model, suggesting that the fine-tuning part plays a more important role and the pre-trained language model does not help on this task.

Sequence-level Predictor for Various Diseases As before, we train a sequence-level predictor for each disease. The results are shown in Table 13. Compared with Table 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for the individual-level predictor. This is mainly due to the trimmed mean we use to obtain individual-level results, which is a central estimate that is robust to noisy labels. Overall, our model has results comparable to other models for sequence prediction with noisy labels and better results for individual diagnosis.

Individual-level Predictor for Various Diseases We observe that our evolution-aware EATLM performs best as the individual-level classifier for determining whether a patient suffers from SARS-CoV-2. Besides, PALMs significantly outperform PPLMs. The results are shown in Table 14.
1. What is the focus of the paper regarding antibody understanding tasks? 2. What are the strengths of the proposed methods, particularly in incorporating biological information? 3. What are the weaknesses of the paper, especially regarding the evolution information and the effectiveness of the proposed pre-training objectives? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns regarding the created benchmark, such as data quality and task importance?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper studies different pre-training models for antibody understanding tasks, proposes new pre-training methods that incorporate biological information, and creates a new antibody understanding benchmark. Through different experiments, the authors draw several observations from different perspectives.

Strengths And Weaknesses
Strengths:
- The authors study antibody understanding tasks, where the antibody is a main and crucial element in drug discovery.
- The authors propose a new benchmark for antibody understanding tasks, which contains four specific applications.
- The authors propose new antibody pre-training methods that incorporate biological information, which improve antibody understanding.
- The authors study different pre-training models for antibodies and draw several conclusions.
- The paper is clear and easy to follow.

Weaknesses:
- The authors describe the evolution information of the antibody. In their words, antibody mutation is targeted at specific objectives, for example targeting a specific antigen. This is somewhat questionable, as it is a result-driven conclusion. Indeed, proteins are also randomly mutated, while the ones ultimately kept have specific structures, functions, and so on. The claimed difference between antibody mutation and protein mutation is hard to be convinced of.
- The authors propose two new pre-training objectives based on biological (evolution) information, which are actually straightforward. Though they are reasonable, as far as I can see, it is hard to say from the results that these two are effective enough. For this kind of pre-training, different factors may cause the performance change. I would like the authors to provide more details about the pre-training, for example how the pre-training performance is evaluated. Indeed, the current approach is like multi-task pre-training. This is a little insufficient.
- As for the created benchmark, one question is about the data. The authors mention the different specificities of these antibodies. I feel good about this, but the datasets do not seem good enough. The first two tasks are from the same dataset; also, the first affinity prediction is much like the last task, only specific to COVID. Besides, the performance on some tasks is already 0.8-0.9, which seems good enough already. That is what made me doubt the importance of these tasks.

Clarity, Quality, Novelty And Reproducibility
See above. Novelty is incremental.
ICLR
Title On Pre-training Language Model for Antibody Abstract Antibodies are vital proteins offering robust protection for the human body from pathogens. The development of general protein and antibody-specific pre-trained language models both facilitate antibody prediction tasks. However, there have been limited studies that comprehensively explore the representation capability of distinct pre-trained language models on different antibody tasks. To investigate the problem, we aim to answer several key questions in this paper, such as how pre-trained language models perform in antibody tasks with different specificity and how introducing specific biological mechanisms to the pre-training process can benefit the model. Additionally, we evaluate if the learned antibody pre-trained representations can be applied to real-world antibody problems, like drug discovery and immune process understanding. Previously, no benchmark available largely hindered the study to answer these questions. To aid in our investigation, we provide an AnTibody Understanding Evaluation (ATUE) benchmark. We comprehensively evaluate the performance of protein pre-trained language models by empirical study along with conclusions and new insights. Our ATUE and code are released at https://github.com/dqwang122/EATLM. 1 INTRODUCTION Antibodies are a type of protein that is useful for diagnosing and treating a variety of diseases, including SARS-CoV-2 (Zhu et al., 2022). It is crucial to understand the information contained in antibody sequences to develop effective therapeutic antibodies and advance our understanding of the immune system (Greiff et al., 2020; Lu et al., 2018; Yermanos et al., 2018). Recent advances in general Pre-trained Protein Language Models (PPLM) and specific Pre-trained Antibody Language Models (PALM) offer new possibilities for antibody-related tasks. For example, PPLMs have shown promising results in transferring learned representations to antibody tasks (Kim et al., 2021; Zaslavsky et al., 2022) and PALMs have been found to improve model performance in antibody paratope predictions (Leem et al., 2022). Despite these successes, few studies have thoroughly examined the capability of different pre-trained language models (e.g. general PPLMs and specific PALMs) on various antibody tasks, which hinders the development of better architectures for antibody discovery and modification. To investigate this problem, we compared the performance of the pre-trained protein language model ESM (Rives et al., 2021), the pre-trained antibody language model AntiBERT (Leem et al., 2021), a pre-trained antibody language model EATLM by introducing antibody specific mechanisms, and a model trained from scratch (No Pretrain) on three antibody tasks with varying levels of specificity. The result is illustrated in Figure 1. Here, ∗Work was done when Danqing Wang was in Bytedance Research. specificity refers to the antibody’s unique evolution processes distinct from that of protein to obtain functionality, such as the ability to bind antigen (The definition is discussed in detail in §3.1). We can see that while ESM performs well in tasks that are less antibody specific, its performance decreases significantly in tasks that are more specific. Additionally, AntiBERT does not demonstrate a clear advantage over the non-pre-trained model in the high-specificity task. These results highlight the limitations of current pre-training language models for antibody-related studies. 
Using general PPLM representations directly may harm performance, and current pre-training strategies for PALMs may not fit the specific biological functions of antibodies. This emphasizes the need for a comprehensive model design guideline for various antibody tasks. Our main focus is to address the following questions: (I) How well will pre-trained language models perform on antibody tasks with varying specificity? Addressing of the question is mainly hindered by two challenges: the lack of a reliable antibodyspecific benchmark for performance evaluation and comprehensive studies of current PPLMs and PALMs. (II) Can incorporating biological mechanisms, specifically antibody-specific evolution, into the pre-training process provide additional benefits for antibody representation learning? This idea has been explored in several computational biology studies, which have demonstrated promising results in antibody-related tasks such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019). Then, it is interesting to know whether antibody representation learning can benefit from the incorporation of antibody-specific evolution information. (III) Are the pre-trained antibody representations useful in practical applications, such as drug discovery and immune process understanding? Antibodies are critical in drug development, and it is essential to determine whether pre-training representations can be beneficial for biologists to comprehend antibody functions or develop drugs. To investigate these questions, we first propose antibody study benchmark AnTibody Understanding Evaluation (ATUE). This is the first antibody benchmark with four real-world supervised tasks related to therapeutic antibody engineering, B cell analysis, and antibody discovery. These tasks cover a range of specificity levels to evaluate models on different aspects of antibody biological functions. Based on ATUE, we conduct empirical studies to investigate the representation ability of distinct pre-trained language models. To explore the impact of incorporating specific biological mechanisms in antibody pre-training, two objectives are introduced to tailor masked language modeling for evolution: (1) Ancestor germline prediction guides the model to discriminate the evolutionary relationship between antibody and ancestral sequences. (2) Mutation position prediction mimics hypermutation during the evolution. These methods are used to investigate the representation ability of antibody evolution-tailored language model. Finally, we take a close look at the SARS-CoV-2 antibody discovery to investigate the pre-trained representation under a real-world scenario. We have three main contributions in this study: • We created the first comprehensive antibody benchmark called ATUE to help with antibody application studies, which includes four real-world supervised tasks ranging from low to high specificity. We also introduce two new objectives for antibody pretraining that incorporate antibody-specific evolutionary information. • We made key observations for providing guidelines for better antibody representation. Firstly, PPLMs perform well on antibody tasks that have a high relationship with structure, but they perform poorly on tasks with high antibody specificity. Secondly, in most cases, PALMs perform as well as or even better than PPLMs with less pre-training data. 
Thirdly, PALMs can be improved by incorporating the evolution process, but the evolution information from MSAs does not always benefit antibody tasks. • We identified 11 potential SARS-CoV-2 binders that have highly identical sequences to existing therapeutic antibodies that bind to the virus, which could accelerate real-world antibody discovery. 2 RELATED WORK Our work focuses on researching the effectiveness of protein and pre-trained antibody language models for antibody-specific tasks. Below we review the representative existing methods. We list the details in Table 1. Pretrained Protein Language Models (PPLMs) There is an increasing interest in exploring largescale language models using protein sequences (Rao et al., 2019; Madani et al., 2020; Meier et al., 2021; Chen et al., 2022). These models have been shown to achieve state-of-art capacity in predicting protein structure and function. ProtTrans (Elnaggar et al., 2021) and ESM-1b (Rives et al., 2021) take individual protein sequences as input and adopt Transformer language models for pre-training, demonstrating that self-supervision is a promising paradigm for protein secondary structure, contact, homology predictions, and function prediction. To extract evolutionary information from protein sequences, Rao et al. (2021) proposed the MSA-transformer/MSA-1b model utilizing multiple sequence alignment (MSA) instead of a single query sequence as input. This model is superior to ESM1b for structure prediction, demonstrating evolution information can benefit protein representation learning. Despite the progress in the field, few studies reported their results on antibody tasks. Pretrained Antibody Language Models (PALMs) Encouraged by the success of PLMs in protein representation learning, series work seeks to learn antibody representations based on sequences of antibodies. AntiBERTy (Ruffolo et al., 2021) proposed the first antibody-specific language model, exploring a Transformer trained on 558M natural antibody sequences in the OAS database. Olsen et al. (2022b) train two language models for antibodies: A heavy chain version Ablang-H and a light chain version Ablang-L. The study reported transfer learning results on restoring missing residues of antibody sequences, which is a task similar to pre-training objectives. AntiBERTa (Leem et al., 2021) train the antibody language model on OAS and finetuning AntiBERTa for paratope position prediction, achieving state-of-the-art performance. Recently, Li et al. (2022) proposed an antibodyspecific language model and explored its performance in SARS-CoV-2 antigen binding, showing context-dependent representations of antibody sequences benefit binding prediction. 3 FRAMEWORK In this section, we first give a brief introduction to the antibody and its specific evolution. Then we propose the first antibody-specific benchmark (ATUE) composed of four tasks with different specificities. Finally, we implement several PPLMs and PALMs baselines and design an evolutionaware PALM to incorporate the biological mechanism into the pre-training process. 3.1 BACKGROUND Antibody Antibodies are vital proteins generated by the immune system to remove harmful foreign pathogens in the human body. they can specifically bind to antigens on the pathogen and recognize it. Antibodies are composed of two identical heavy chains and two identical light chains and form a large Y-shaped structure. Two tips on it contain highly variable loops, called Complementarity Determining Regions (CDR), which function for antigen binding. 
Antibody Specific Evolution Notably, the antibody evolution process is significantly different from that of proteins, providing a good opportunity for us to investigate the impact of general PPLMs on specific subdomains. To perform its protective function, the antibody sequence undergoes evolution selection to search for optimal patterns that can specifically recognize pathogens (Honjo & Habu, 1985). Deciphering the information stored in antibody sequences may benefit our understanding of disease and accelerate therapeutic antibody development (Greiff et al., 2020; Lu et al., 2018; Yermanos et al., 2018). During evolution, the random recombination of V/D/J-gene segments provides the Task Specificity HighLow initial diversity for the ancestor sequence (germline). Upon exposure to a pathogen, this sequence undergoes frequent sequence mutations to search for progeny antibody sequences with optimal binding specificity. In other words, gene recombination provides millions of germlines in the human body, and the germlines further mutate into a huge number of progeny antibodies. Thus, the ancestor relationship between an antibody and its corresponding germline as well as the mutation it undergoes together determine the unique biological functions. In brief, the evolutionary relationships between antibodies arise to gain new functions such as antigen binding. It is significantly different from that of proteins, which are to maintain certain functions across different organisms. We further illustrate this process in Figure 7 in §A.1. Unsupervised Antibody Corpus To obtain the evolutionary information of antibody sequences, we utilize Observed Antibody Space (OAS), a database containing more than 1.5 billion natural antibody sequences (Kovaltsuk et al., 2018; Olsen et al., 2022a) The antibody sequences in the database have been precisely annotated with evolutionary and structural information, including the paired germline and CDR3 for each antibody. To pair the antibody with its germline used in the pretraining task, we used the annotated sequences provided in the OAS database. Further information on data processing can be found in §A.2. 3.2 ANTIBODY UNDERSTANDING EVALUATION (ATUE) We provide four biologically relevant downstream prediction tasks to serve as antibody benchmarks, covering four major application aspects for antibodies in the real world: therapeutic antibody engineering, disease diagnostics, antibody discovery, and B cell maturation analysis. The antibody specificity of these tasks ranges from low to high, offering scaled tasks with subdomain specificity for pre-trained language model evaluation. Detailed information is listed in Figure 2. All data are publicly open and used under the right license. For each task, we focus on the following aspects and leave the details in Appendix (§A.3 and §A.4): [Definition] The formal definition of the task and the understanding ability required. [Impact] The importance of the task in the biological area. [Dataset] The data source and size. [Specificity] Antibody’s specific evolution characteristics are different from general proteins. We use several classification metrics to evaluate the performance. Accuracy (ACC) calculates the ratio of correct predictions. Matthews Correlation Coefficient (MCC) is the coefficient between true and predicted values. F1 is the average weighted score of precision and recall. AUC is the area under the ROC curve, which shows the performance at all classification thresholds. 
Antigen Binding Prediction is a binary sequence classification task to determine whether the CDR region of the antibody can bind to the specific antigen. [Impact] A better understanding of the binding affinity between antibodies and antigens can accelerate the affinity optimization of therapeutic antibodies. [Dataset] We collect the antigen binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. [Specificity] Low. All the antibodies sequence in the dataset are derived from a single germline sequence indicating the task is not antibody-specific evolution-related. Paratope Prediction It is to identify binding positions on the antibody sequence, which is a sequence labeling task to predict a 0/1 label for each residue of CDR fragments. [Impact] The exploration of paratope (binding positions between antibody and antigen) can help to understand the binding mechanisms of therapeutic antibodies. [Dataset] The paratope data is collected from Liberis et al. (2018) with 1,662 CDR segments on 277 antibodies. [Specificity] This task is medium specificity related because only partial antibodies from the database are derived from evolution. B Cell Maturation Analysis It is a 6-category classification task to distinguish the maturation stage of B cell antibody sequences. Each sequence belongs to one of {immature, transitional, mature, plasmacytes, memory IgD+, memory IgD-}. It requires the model to learn a representation sensitive to different maturation states. [Impact] It benefits the understanding of the mechanism during immune evolution, which is a critical biological process in the immune system affecting the function and antigen specificity of antibodies (Ghraichy et al., 2021; Meffre et al., 2000). [Dataset] We collect 88,094 sequences from Mroczek et al. (2014) with 6 maturation stages. [Specificity] High. Antibody evolution is highly coupled with B cell maturation (Meffre et al., 2000). Antibody Discovery The task is a binary sequence classification task to distinguish which antibody is directly responsible for SARS-CoV-2 binding. The task is highly challenging from two aspects: (1) Less than 1% of antibodies from SARS-CoV-2 patients are directly responsible for virus binding. (2) It is hard to get a reliable sequence-level classifier using unreliable and noisy individual-level labels. [Impact] Antibody discovery from B cell repertoire has been widely recognized as a important approach to accelerate antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021), and achieved great success for SARS-CoV-2 antibody discovery (Kovaltsuk et al., 2018; Cao et al., 2020; Shiakolas et al., 2022). [Dataset] We collected antibody sequences from 133 SARS-CoV-2 patients and 87 health persons from OAS and followed the processing pipeline of Kim et al. (2021). Inspired Zaslavsky et al. (2022), we match the high-ranked sequences with the sequences in the CoVAbDab (Raybould et al., 2021) database, which have been proved to bind SARS-CoV-2 using wet-lab experiments. [Specificity] High. It is widely reported antibodies derived from the same disease such as SARS-CoV-2 share strong convergent germline signals (Galson et al., 2020). 3.3 EXPERIMENT SETUP Based on the antibody benchmark ATUE, we evaluate the performance of current pertaining language models in different specificity tasks. 
Furthermore, to investigate the benefit of introducing the biological mechanism, we incorporate evolution information as the extra pretraining objectives for PALMs and propose EATLM. The detailed description of the objective and the implementation can be found in §A.5 Current Pre-trained language models Existing antibody and protein language models are summarized in Table 1. Since the code and pre-training data of AntiBERTa are not released, we train a BERT model named AntiBERT on the full OAS database following the same setting as the original study. MSA-1b (Rao et al., 2021) takes protein-specific evolutionary sequences (Multiple Sequence Alignment, MSA) as the input. Because it is hard to align sequences between antibodies due to the diversity of CDR3, we take the germline and create pseudo-MSAs with depth 2. We add a linear layer on top of the language models and finetune the whole model on the downstream tasks. Evolution-aware antibody pretraining method To incorporate the biological mechanism into the pre-training, we propose a model with evolution information: Antibody EvoluTion-aware pretraining Language Model. The antibody can be represented as A and the germline of the individual antibody can be represented as G. Typically, PALMs are trained with basic masked language modeling (MLM). Based on it, we design another two pre-training objectives to simulate the biological mechanism of antibody evolution. The evolutionary relationship between the antibody and its germline includes two folds: (i) Whether the antibody and the germline have an evolutionary relationship. (ii) How to mutate residues from the germline to get the specific antibody. Two evolution-related objectives are introduced to solve the above questions: Ancestor Germline Prediction (AGP) and Mutation Position Prediction (MPP). For ancestor germline prediction, we substitute the paired germline G with random germline G′ in the batch via a probability p. The model is made to distinguish the ancestor germline of the antibody by capturing the shared features. To predict mutation position, for each token in the germline G, the objective is to predict a 0/1 label for each token to indicate whether this token has been mutated. For the antibody sequence S, we mask the mutation position and predict these tokens. Hyper-parameters We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. For each task in ATUE, we finetune the model with supervised data. We follow the standard split of Antigen Binding Prediction. For other tasks that do not provide a standard split, we use a 10-fold cross-validation. Since our pre-training model learns the representation of the antibody sequence, we expand the CDR fragment to the full antibody by searching the biological database for therapeutic antibody engineering tasks. We also use the same Transformer architecture to train from scratch for each downstream task. This model is indicated as non-pretrain since it is not pre-trained on a protein/antibody database. Reproduction We conduct 10-fold validation on paratope prediction, B cell maturation analysis, and antibody discovery. For antigen binding prediction, we conduct three repetitive experiments with different random seeds. We report the average results and the standard derivation. 
4 RESULTS AND ANALYSIS In this section, we present the experimental results and analysis for the representation capability of existing PPLMs, PALMs, and the EATLM method with evolutionary incorporation, using ATUE benchmark. Additionally, we summarize our observations aiming to address the problems highlighted in the introduction. 4.1 MAIN RESULTS Antigen binding We evaluate the performance PLMs models for antibody binding and paratope prediction, which are less antibody specific. The results in Table 2 indicate that PPLMs and PALMs perform similarly on these tasks, suggesting that PALMs can learn comparable general protein representations to PPLMs. Among different PALMs, Ablang-H outperforms Ablang-L and AntiBERT. It indicates that separate training for heavy and light chain sequences is beneficial for these tasks. Moreover, the introduction of AGP and MPP provides improvement over AUC and F1 metrics. Paratope prediction The results presented in Table 2 demonstrate that for paratope prediction, both PPLMs and PALMs can significantly boost the prediction accuracy over the model with pretraining. However, PALMs do not exhibit a significant advantage over PPLMs. EATLM outperforms other models, particularly in terms of F1 and MCC, while other models exhibit high recall and low precision, indicating that they tend to predict more residues as binding sites. With the incorporation of mutation residue prediction, EATLM can focus on the specific mutated positions adapted to bind with antigen. Among the two PPLMs, MSA-1b outperforms ESM-1 on F1 and MCC, which benefits from the structure information learning from MSAs. B Cell Analysis In this task, we investigate the ability of different pre-trained language models to distinguish between various B cell mature states during evolution. The findings, as demonstrated in Table 2, indicate that PPLMs are not effective in discerning minor differences between B cell sequences, resulting in mediocre results. Both ESM-1 and MSA-1b perform significantly worse than randomly initialized models. MSA-1b, in particular, performs poorly among all pre-trained language models, implying that representations that excel in protein structure prediction may be detrimental to antibody-specific tasks. Conversely, all PALMs show promising results for the task. This may be due to the fact that the general protein has little correlation with the specific antibody mature process and cannot capture this feature during protein pretraining. Our EATLM significantly outperforms the other PALMs. This is because our model can effectively capture the evolution feature and better distinguish between B cells at different stages of maturation by explicitly modeling the biological mechanism. We conduct further analysis to figure out whether our EATLM successfully captures sequence characteristics during the evolutionary process. We explore the probabilities of predicting antibodies in class i to class j. The results shown in Figure 3 reveal EATLM can easily classify the immature B cell with an accuracy of 0.9. It is consistent with the biological study that CDR3 sequence length in immature B cells is significantly shorter than that of the other mature B cells (Ghraichy et al., 2021). From the diagonal, we can figure out that our model tends to mistake the B cell sequences with their previous or post-evolutionary stage, consistent with the biological process. 
Antibody Discovery We investigated the potential of PPLMs and PALMs in aiding the discovery of antigen-specific antibodies for real-world problems. To achieve this, we followed a two-step process similar to Zaslavsky et al. (2022). Firstly, we created a sequence classifier to differentiate SARS-CoV-2 antibodies using noisy individual-level labels. Secondly, we compared the highly ranked sequences with true binding sequences in the CoV-AbDab (Raybould et al., 2021) database to determine if there are similarities. We used a 90% sequence identity threshold to determine the likelihood of biological functionality similar to the existing binders. The experimental design for this is outlined in §A.7. Figure 4 shows the cumulative sum of matched sequences in the order of predicted probabilities by different pre-trained language models for the SARS-CoV-2-specific antibody discovery task. We can observe that PALMs outperform PPLMs in identifying potential binders, as the sequences predicted with high probability by PALMs match better with the existing binders. Moreover, among PALMs, EATLM significantly outperforms other models, with the red line indicating its performance. Initially, EATLM is the quickest method to find potential binders; it is briefly overtaken by Ablang-H but eventually overtakes it again and converges. This suggests that EATLM is the most effective method for identifying all potential binders in this dataset. Furthermore, we list 11 potential binder sequences discovered by EATLM in Table 3. Without supervised labels, EATLM assigns a high probability to 2 existing SARS-CoV-2 binding antibodies. Besides, EATLM suggests 9 potential sequences with high CDR-H3 sequence identity, indicating the potential for diverse-epitope antibody discovery and selection. These results demonstrate the potential of EATLM in therapeutic antibody discovery. To validate whether the antibody sequences with 90% sequence identity can indeed bind the same target, we investigate the 3D structure of the true binding antibody. Table 4 shows only a single residue difference between the predicted binder and the existing binder, suggesting the predicted binders are highly likely to interact with SARS-CoV-2.

4.2 HOW DOES THE EVOLUTION PRETRAINING TASK INFLUENCE THE REPRESENTATION?
To comprehend the reasons for the better performance of EATLM on antibody-related tasks, we conduct an analysis of the pre-trained representations. The objective of this analysis is to evaluate the effectiveness of the evolution-aware pre-training strategies from two perspectives: (1) Does the pre-trained representation of antibodies reflect their ancestor relationship? (2) Is the specificity of antibodies captured by the evolution objective?

Ancestor Germline Visualization We perform UMAP visualization analyses in Figure 5. First, we observe that antibodies evolved from the same germline are nicely clustered together (Figures 5a and 5b), indicating the learned embedding encodes germline information. Besides, sequences with similar scales of evolutionary distance tend to cluster together, and a clear gradation of evolutionary distance can be observed in Figures 5c and 5d. The visualization provides a sanity check for the ability of EATLM to extract the sequence information of antibodies.

Accuracy of Mutation Position Based on the specific evolution process described in §3.1, the mutations introduced during evolution bring specificity to the antibody.
Thus, we explore the model's ability to predict the mutated residues from the masked tokens, which reflects the specificity features the model captures. We find that although AntiBERT predicts with an accuracy of 0.889 over all positions, it fails on mutation positions with an accuracy of 0.031. In contrast, EATLM achieves an accuracy of 0.443 on mutation positions, which indicates that the model captures the specificity information. Note that during MPP training, we mask the mutation positions on the antibody sequence, which differ from its germline. Thus, the model cannot obtain the mutated residues from the germline directly; the only way is to learn the underlying mutation rules. The full results are shown in Table 8 in the Appendix.

4.3 KEY OBSERVATIONS

The performance of pre-trained language models is highly dependent on the specificity of the task. In tasks with low antibody specificity, PPLMs perform similarly to PALMs, indicating that using general protein representations from PPLMs is an effective way to transfer learning in these tasks. On medium-specificity tasks such as paratope prediction, PALMs show their advantage and outperform PPLMs. However, for tasks with high specificity, PPLMs have significantly lower performance, suggesting that general pre-trained protein models are insufficient for antibody-specific representation learning. Additionally, incorporating protein evolution information does not always benefit antibody tasks, especially those that require antibody evolution information, as shown by the 20% decrease in performance observed with MSA-1b compared to the model without pre-training. This finding is consistent with the biological understanding that the mechanism of antibody evolution is significantly different from that of proteins. (Figure 6: Performance summary of various pre-trained language models.)

Incorporating the biological evolution mechanism into PALMs generally benefits antibody prediction tasks. The inclusion of evolution-related training objectives assists in identifying mutation positions on antibodies, which distinguish an antibody from its germline. Notably, the performance increase of EATLM in comparison to other PALMs is linked to the level of task specificity. The ablation study shows that removing the evolution-related pretraining objectives leads to decreased performance, confirming their contribution to the prediction tasks. Further research in this direction is promising and could offer more in-depth insights.

Antibody pre-trained representations are helpful for real-world drug discovery. By utilizing the language model, we predict the likelihood of each antibody binding to SARS-CoV-2. Despite lacking precise sequence-level labels, we successfully identify 11 promising antibody binders.

5 CONCLUSIONS AND LIMITATIONS

In this paper, we conduct a detailed investigation into the effects of pre-trained protein and antibody language models on various antibody tasks. To facilitate research at the intersection of the antibody and machine learning fields, we provide ATUE, consisting of four important antibody tasks from four different biological categories with varying levels of antibody specificity. However, there are certain constraints to our research. Firstly, due to the scarcity of data, the diversity of tasks in our ATUE is limited.
As more data becomes available, we anticipate expanding our benchmark to include a greater range of diseases and larger datasets. Additionally, we did not examine any 3D structural information during antibody pre-training. As antibody structures offer more information than sequences alone, such as geometry, incorporating structural information in future studies may lead to improved results.

ETHICS STATEMENT

This research, involving the use of pre-existing data and computational methods, did not involve any human or animal subjects, and therefore no ethical approval was required. The authors followed all applicable ethical standards and guidelines for data analysis and reporting. All data used in this study were obtained from publicly available sources, and proper citation and attribution have been given. The authors have made efforts to ensure that the research presented in this paper does not infringe upon any existing copyrights or intellectual property rights.

ACKNOWLEDGEMENT

We thank members of ByteDance Research for discussion, and Zaixiang Zheng and Yi Zhou for useful writing suggestions. Hao Zhou is supported by the Vanke Special Fund for Public Health and Health Discipline Development, Tsinghua University (NO.20221080053), and the Guoqiang Research Institute General Project, Tsinghua University (No. 2021GQG1012).

A APPENDIX

A.1 ANTIBODY SPECIFIC EVOLUTION

Antibodies, composed of two identical heavy chains and two identical light chains, form a large Y-shaped structure whose two tips are responsible for pathogen binding. Antibody evolution, described by sequence-sequence relationships between ancestor and progeny antibodies, reflects antibodies' key antigen-binding function (Honjo & Habu, 1985). During antibody evolution (Figure 7), the initial diversity is encoded into the ancestor sequence through random recombination of V-, D- and J-gene segments. Upon exposure to a pathogen, the sequence undergoes frequent mutations to search for progeny sequences with optimal binding specificity. Sequence evolution analysis has been employed by many computational biology studies and shows promising results in antibody-related tasks, such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019).

Importantly, antibody evolution is significantly different from that of proteins. Antibodies derive from only hundreds of thousands of ancestor sequences, the so-called germlines. To bind tens of millions of diverse antigens, antibodies need to mutate from the ancestor sequences to gain new functions (Figure 7). Therefore, the non-conserved (mutated) amino acids play important roles in structure and function. In contrast, the conserved (non-mutated) amino acids in proteins determine structure and function. During protein evolution, evolutionary pressure to maintain protein structure and function leads to the conservation or co-evolution of residues located in the structural folding core or at the binding interface. Diverse methods have been developed to extract this co-evolution information from conserved amino acid sequences for structure and function prediction, such as AlphaFold (Jumper et al., 2021). In brief (Figure 7), the specificity of antibody evolution, distinct from that of proteins, can be defined by two main features: (i) the ancestor germline; (ii) the amino acids mutated from the germline. A sketch of how these two features can be derived from a paired antibody-germline record is given below.
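The following minimal sketch illustrates how the two antibody-specific features could be extracted from an antibody paired with its annotated germline. It is an assumption-laden illustration rather than the paper's pipeline: the record fields, the gapless position-wise alignment, and the toy sequences are hypothetical, and the actual study relies on the germline annotations provided by the OAS database (§A.2).

from dataclasses import dataclass

@dataclass
class AnnotatedAntibody:
    sequence: str   # antibody amino-acid sequence
    germline: str   # aligned germline amino-acid sequence (same length; 'X' = unknown)
    v_gene: str     # annotated germline V-gene subtype, e.g. "IGHV3-1" (hypothetical field)

def mutated_positions(antibody: str, germline: str):
    """Positions where the antibody differs from its aligned germline.
    Positions where the germline residue is unknown ('X') are skipped."""
    assert len(antibody) == len(germline), "assumes a gapless position-wise alignment"
    return [i for i, (a, g) in enumerate(zip(antibody, germline)) if g != "X" and a != g]

def evolution_features(record: AnnotatedAntibody):
    """The two antibody-specific evolution features discussed above:
    (i) the ancestor germline subtype, (ii) the mutated positions / mutation count."""
    positions = mutated_positions(record.sequence, record.germline)
    return {"germline_subtype": record.v_gene,
            "mutation_positions": positions,
            "num_mutations": len(positions)}

# Toy example (sequences are illustrative, not real antibodies).
rec = AnnotatedAntibody(sequence="QVKLVQSGAEVKKPGA", germline="QVQLVQSGAEVKKPGX", v_gene="IGHV3-1")
print(evolution_features(rec))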
Figure 7: Protein evolution versus antibody evolution.

A.2 DATA PROCESSING DETAILS

Pairing Antibody with Germline For germline annotation in the pre-training task, we used the annotated germline sequences provided in the OAS database (Kovaltsuk et al., 2018). For downstream benchmark tasks like B-cell classification, therapeutic antibody engineering, and disease diagnosis, we followed the methods described in the OAS database paper. IgBLAST, an immunoinformatic benchmarking tool for the analysis of B-cell antibody repertoires, was used for germline annotation (Ye et al., 2013). The antibody nucleotide-containing FASTA file was aligned to the germline and translated to amino acids using IgBLASTn. The antibody amino-acid sequence was aligned using IgBLASTp. The germline databases for human patients used ImMunoGeneTics (IMGT) germline sequences derived from Lefranc et al. (1999). For each antibody, multiple germline sequences can usually be obtained, and only the single sequence with the highest alignment confidence score was chosen.

Pre-training Data Processing We downloaded the October 2021 version of OAS from its website and removed duplicate sequences. To avoid data leakage, we cluster sequences based on the CDR3 sequence and filter each cluster by 70% identity over the whole sequence using Linclust (Steinegger & Söding, 2018). Then, we shuffle the dataset and split it into 100k-size chunks. The last chunk is used as the validation set. The resulting dataset contains 20,245,249 sequences, of which 45,249 are used for validation.

A.3 ATUE DETAILS

We summarize the tasks used in ATUE in Table 5 and discuss each task in detail in this section.

Antigen Binding Accurate antigen-binding prediction approaches could allow significantly more efficient antibody discovery with higher affinity. Machine learning methods have already achieved some success in antibody binding capacity optimization. We collect the antigen-binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. The original dataset only contains CDR3 fragments, and we extend them to the full antibody sequences. For cross-validation, we split the dataset by antibody sequence to ensure that no antibody sequence overlaps between the 90% training and 10% validation sets.

Paratope Prediction The paratope is the set of antibody residues involved in antigen binding. The ability to accurately map the paratope can provide detailed knowledge about the binding mechanism and accelerate antibody discovery. 1D sequence-based deep learning methods have been employed for paratope prediction. The paratope data is collected from Liberis et al. (2018) with 1,662 CDR segments on 277 antibodies. Each antibody contains three CDR fragments (CDR1, CDR2 and CDR3) in the heavy chain and three CDR fragments in the light chain. We also search for the full sequence of each antibody and use the whole sequence as input. For cross-validation, we split the dataset by antibody sequence to ensure that no antibody sequence overlaps between the 90% training and 10% validation sets.

B Cell Analysis We formulate a 6-category classification task for B cell maturation analysis, which includes {immature, transitional, mature, memory IgD+, memory IgD-, plasmacytes}. The analysis of B cell maturation plays an important role in understanding the mechanisms underlying B cell responses in the immune system (Ghraichy et al., 2021; Meffre et al., 2000).
The order of B cell types follows the evolutionary process in the immune system, from an immature state through a transitional state, and finally to a memory B cell. Both memory IgD- and IgD+ cells belong to memory B cells with different isotypes, and they have a high affinity to foreign antigens. Among the other categories, the plasmacyte (PC) sequences also have some binding affinity. It is widely reported that changes in antibody sequence patterns correlate with B-cell maturation. Therefore, we use this task to evaluate the representation learning capacity of the language models. We collect 88,094 sequences from Mroczek et al. (2014), who extracted them from the peripheral blood of healthy adults and obtained six types of B cells with different maturity and antibody sequences. The distribution of the various types of B cells in the dataset is shown in Table 6.

Antibody Discovery Antibody discovery from the B cell repertoire has been widely recognized as a novel trend to improve the efficiency of antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021). However, previous studies rely heavily on expensive wet-lab experiments (Cao et al., 2020; Shiakolas et al., 2022). Deep learning-based methods have shown the potential to help antibody discovery by reducing cost and increasing efficiency (Widrich et al., 2020; Wang et al., 2022). Here, we ask whether pre-trained models can benefit real-world problems and enable fast-track discovery of neutralizing SARS-CoV-2 antibodies. In the first step, we develop a sequence classifier to distinguish which antibody sequence among the numerous sequences is responsible for the recognition of SARS-CoV-2. This task is highly challenging since we can hardly obtain sequence-level disease labels indicating whether an antibody sequence is related to the disease. Thus, we follow the practice of Roskin et al. (2020) and Zaslavsky et al. (2022) to use the individual label as a rough sequence label and train a sequence-level predictor. Then, with the help of the sequence-level predictor, we can assign each sequence a most likely label to aid antibody discovery; the predictor's reliability is supported by its strong results on individual-level prediction, and it may accelerate the discovery of new antibody sequences.

We follow the conditions of Kim et al. (2021) to filter SARS-CoV-2 antibody data from the OAS database. The basic condition is 'Chain = heavy; Isotype = IGHG; BSource = PBMC; Species = human; Vaccine = None'. We further add the condition 'Unique Sequences >= 10000'. For the healthy/SARS groups, we set the 'Disease' field to 'None' and 'SARS-CoV-2', respectively. We thereby obtain 87/133 patient profiles for the two types. To make a balanced dataset, we limit the number of healthy profiles and mix the healthy profiles with the SARS-CoV-2 ones. For cross-validation, we randomly split the dataset by profile 10 times: 90% for training and 10% for validation. We further select sequences with top-100 redundancy to make the positive labels more accurate.

A.4 QUANTITATIVE ANALYSIS OF ATUE TASK SPECIFICITY

It is important to include statistical significance tests of the antibody-specific features for the antibody functional tasks proposed in the ATUE benchmark. According to the evolution process shown in Figure 7, the specificity of antibody evolution, distinct from that of proteins, can be defined by two main features: (i) the ancestor germline; (ii) the amino acids mutated from the germline.
We implemented statistical significance tests of (i) ancestor germline subtype usage and (ii) the number of mutated amino acids in antibodies against the labels of the downstream tasks in ATUE to quantitatively assess task specificity. The analysis is summarized in Table 7. Overall, it clearly shows that the ATUE benchmark comprises antibody tasks with different scales of antibody specificity for the subsequent modeling analysis. Moreover, these features are used for the statistical analysis of task specificity and pre-training objectives in our study.

Antigen Binding In the antigen binding dataset, both antigen-binding and non-binding sequences share the same germline subtype (IGHV3.1) (Figure 8A) as well as the same number of germline mutations (Figure 8B). Therefore, neither of the two antibody-specific features shows significant distribution differences between data with different labels, demonstrating that antigen binding is a task with low antibody specificity.

Paratope Prediction For the paratope prediction task, we first evaluate the germline subtype distribution difference between sequences with different numbers of binding sites (Figure 9A). A Kruskal-Wallis test gave a p-value of 0.296, suggesting that the difference in germline subtype usage is not statistically significant. We also find that binding sites map to significantly more germline mutations than non-binding sites, which is consistent with the definition of antibody specificity (Figure 9B). One of the two antibody-specific features shows significant distribution differences between data with different labels; therefore, we define this task as a medium-specificity task.

B Cell Analysis As shown in Figure 10, the distributions of germline usage as well as the number of germline mutations are significantly different between antibodies in B cells at different developmental stages. This observation is highly consistent with previous studies (Mroczek et al., 2014; Ghraichy et al., 2021). Since both antibody-specific features show significant distribution differences, this task is defined as a high-specificity task.

SARS Antibody Discovery Antibodies from SARS-CoV-2 patients and healthy individuals show significant differences in their germline subtype usage and the number of germline mutations (Figure 11). This observation is highly consistent with previous studies showing that SARS-CoV-2 antibodies are convergent among patients (Galson et al., 2020). Since both antibody-specific features are highly significant, this task is defined as a high-specificity task.

A.5 MODEL TRAINING DETAILS

An antibody can be represented as $A = \{a_1, a_2, \cdots, a_m\}$ and the germline of an individual antibody as $G = \{g_1, g_2, \cdots, g_n\}$, where $m$ and $n$ are the lengths. Each token $a_i$ or $g_j$ in the sequence is called a residue and belongs to the amino acid vocabulary $\mathcal{A}$, which includes the 20 common amino acids together with a residue 'X' indicating that the residue is unknown (mostly in the germline). Typically, antibody PLMs are trained with the basic masked language modeling objective $\ell_{MLM}$ on the antibody sequence $S = A = \{a_1, \cdots, a_m\}$.

A.5.1 EVOLUTION-AWARE PRETRAINING

In order to incorporate the evolutionary information into the pre-training, we pair the antibody sequence $A$ with its germline $G$ and concatenate them into a long sequence with a special token '[SEP]' as the delimiter: $S = \{s_1, \cdots, s_{m+n+1}\} = \{a_1, \cdots, a_m, \text{[SEP]}, g_1, \cdots, g_n\}$.
Thus, we optimize the MLM objective on the long sequence $S$:

$\ell_{MLM} = -\frac{1}{|M|}\sum_{i\in M}\log p(s_i \mid S_{\backslash M})$,  (1)

where $M$ is the index set of masked tokens. This helps the model learn the basic residue distribution of antibody sequences. Besides, it can also capture the interactions between residues of the antibody and its germline.

Ancestor Germline Prediction The ancestor relationship between the antibody and its germline determines the shared biological functions obtained during evolution. When stimulated by a foreign antigen, a common ancestor germline evolves into various antibody sequences; conversely, antibody sequences with similar residues may have evolved from different germline sequences and therefore have different biological functions. Thus, the aim of this task is to determine whether the antibody has an evolutionary relationship with the given germline. During training, we substitute the paired germline $G$ with a random germline $G' = \{g'_1, \cdots, g'_n\}$ from the batch with probability $p = 0.3$. The new sequence is denoted as $S' = \{a_1, \cdots, a_m, \text{[SEP]}, g'_1, \cdots, g'_n\}$ and the training loss can be described as:

$\ell_a = -\log p(y \mid S')$,  (2)

where $y \in \{0, 1\}$ indicates whether the (possibly substituted) germline $G'$ is the ancestor of the antibody $A$. This helps the model distinguish the ancestor germline of the antibody by capturing their shared features.

Mutation Position Prediction The somatic hypermutations on the germline further give progeny antibodies the specificity to bind a particular antigen. In order to model this specificity, this task focuses on predicting the mutation positions and the residues they mutate to. Specifically, for each token $g_j$ in the germline $G$, the target is to predict a label $y_j \in \{0, 1\}$ indicating whether this token has been mutated. For the antibody sequence, we mask the mutated positions and predict these tokens. The objective can be formalized as:

$\ell_m = -\frac{1}{n}\sum_{j=1}^{n}\log p(y_j \mid S_{\backslash M'}) - \frac{1}{|M'|}\sum_{i\in M'}\log p(a_i \mid S_{\backslash M'})$.  (3)

Here, $M'$ is the set of ground-truth mutation positions, and we mask these tokens on the antibody sequence. This task is more difficult than MLM, which masks tokens uniformly across the sequence, because the tokens at the mutation positions of $A$ receive less information from the germline than the residues shared between the antibody and the germline. By optimizing this objective, the model learns to capture the specificity obtained from somatic hypermutation in the evolutionary process.

A.5.2 IMPLEMENTATION DETAILS

We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. The total number of parameters is 86M. We use the Adam optimizer (Kingma & Ba, 2015) with a maximum learning rate of 2e-4 and a warm-up of 24,000 steps. The maximum length is set to 400 since most antibody sequences are shorter than 180. We first pre-train our model with the MLM objective. During pre-training, 15% of tokens are randomly selected, of which 80% are masked, 10% replaced, and 10% kept. Then we conduct further pre-training on the two antibody-related tasks with a smaller learning rate of 1e-5. For each task in ATUE, we finetune the model with supervised data. We follow the standard split for Antigen Binding Prediction. For other tasks that do not provide a standard split, we conduct 10-fold cross-validation and report the average results.
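To make the two evolution-aware objectives above concrete, the sketch below shows one way the concatenated antibody-germline input, the AGP label, and the MPP targets could be constructed for a single training example. It is a simplified illustration under stated assumptions (a gapless alignment between antibody and germline, toy special tokens, and hypothetical helper names), not the authors' released pre-training code.

import random

MASK, SEP = "[MASK]", "[SEP]"
AGP_NEGATIVE_PROB = 0.3  # probability of swapping in a non-ancestor germline (as in §A.5.1)

def build_eatlm_example(antibody, germline, random_germline):
    """Build one pre-training example: the concatenated token sequence, the AGP label,
    MPP position labels for the germline, and the masked mutated residues to recover."""
    # (i) Ancestor germline prediction: occasionally pair with a random germline.
    use_negative = random.random() < AGP_NEGATIVE_PROB
    paired = random_germline if use_negative else germline
    agp_label = 0 if use_negative else 1

    # (ii) Mutation position prediction: positions where the antibody differs from its
    # true germline (assumes a gapless position-wise alignment, 'X' = unknown residue).
    mut_set = {i for i, (a, g) in enumerate(zip(antibody, germline)) if g != "X" and a != g}
    position_labels = [1 if i in mut_set else 0 for i in range(len(germline))]

    # Mask the mutated antibody positions so the model must recover them from context.
    masked_antibody = [MASK if i in mut_set else a for i, a in enumerate(antibody)]
    residue_targets = {i: antibody[i] for i in sorted(mut_set)}

    tokens = masked_antibody + [SEP] + list(paired)
    return {"tokens": tokens, "agp_label": agp_label,
            "mpp_position_labels": position_labels, "mpp_residue_targets": residue_targets}

# Toy usage with illustrative sequences.
ex = build_eatlm_example("QVKLVQSGA", "QVQLVQSGA", "EVQLLESGG")
print(ex["agp_label"], ex["mpp_position_labels"], ex["mpp_residue_targets"])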
Since our pre-training model learns the representation of the full antibody sequence, we expand the CDR fragments to the full antibody by searching the biological database for the therapeutic antibody engineering tasks. For finetuning, we limit the maximum number of epochs to 30 and use the Adam optimizer with a maximum learning rate of 3e-5. We use the mean representation of the 12 layers as the sequence representation.

The model is trained for 108,000 steps and reaches a token accuracy of 0.9606 on the MLM task; it then takes further steps for AGP and MPP. The model quickly converges on AGP and reaches an accuracy of 0.99 on ancestor germline prediction, because more than 80% of residues are shared between the antibody and its germline. For MPP, the model predicts the mutation positions with an accuracy of 1.000 and recovers the mutated residues with an accuracy of 0.442 (EATLM w/o AGP). This means the model can easily find the mutation positions through self-attention between the antibody and its germline, but it is still difficult to predict which residue a position will mutate to. We assume this is because an ancestor germline can undergo different somatic hypermutations and yield various progeny antibodies, resulting in different valid mutations at the same position. We also compare this mutation accuracy with the model without MPP, which is trained only with MLM on the concatenation of the antibody and its germline. Despite a high prediction accuracy of 0.889 over all positions, it achieves only a 0.031 accuracy on the mutated positions. This implies that masking across all positions of the sequence yields accurate predictions of the shared residues but hardly captures the mutation information. We also apply AGP and MPP to further train the baseline model AntiBERT. The pre-training results are shown in Table 8. We find that without the concatenation of the antibody and its germline, it is difficult to predict the ancestor relationship, and the model also underperforms EATLM on MPP.

Negative sampling ratio We tried ratios of 0.1/0.3/0.5/0.75 and found that this ratio has little influence on performance and convergence speed. As discussed above, the model quickly converges on AGP and reaches an accuracy of 0.99.

Finetuned Protein Language Models and Larger Architecture We pre-train our method with a larger architecture and compare it with ESM-1b, which also has 650M parameters. We also further pre-trained the ESM models to transfer them to the antibody domain. After that, we evaluate them on the antigen binding and paratope prediction tasks. The results are shown in Table 9 and indicate that the larger architecture does not show a consistent advantage in performance: for antigen binding, ESM-1b performs better than ESM-1, but for paratope prediction it performs worse. In addition, for ESM, fine-tuning on the antibody dataset may cause overfitting, leading to a performance decrease on all three tasks.

A.6 LIMITATIONS OF EATLM

First, EATLM does not use any 3D structural information during pre-training. As a special subgroup of proteins, antibody structures provide much more information, such as geometry, than sequences alone. In the future, incorporating structural information for antibody pre-training may improve the results. However, the amount of data available for antibody structures is dramatically smaller than that for antibody sequences: the largest dataset of antibody structures contains only thousands of high-resolution 3D structures, while the number of antibody sequences is in the billions.
Using structure prediction methods like AlphaFold may help to bridge the gap between sequences and structures. Second, EATLM requires the germline as input for downstream tasks, which slows down prediction.

A.7 NEW SARS BINDER DISCOVERY

The main challenge for disease diagnosis is to distinguish the disease-related antibodies from the millions of antibody sequences in an individual profile, as stated in Section A.3. Here, with the help of a sequence-level predictor, we can assign each sequence a most likely label to aid antibody discovery; the predictor's reliability is supported by its strong results on individual-level prediction, and it may accelerate the discovery of new antibody sequences.

SARS Sequence-level Predictor We first train a sequence-level predictor for SARS-CoV-2. The results are shown in Table 10. Compared with Figure 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for antibody discovery, which is mainly affected by the noisy sequence-level labels.

Identifying SARS Binders As shown in Table 3 in the main body, we find 2 true SARS binders and 9 potential binders with the help of EATLM. Specifically, we first use our sequence-level predictor to assign a probability score to each sequence in the SARS dataset. Then we select the high-ranked sequences (probability > 0.5) and compare them with the public CoV-AbDab database (Raybould et al., 2021), available at http://opig.stats.ox.ac.uk/webapps/covabdab/, which contains data on published/patented antibodies known to bind SARS-CoV-2. Since the CDR3 fragment of the heavy chain is the most relevant to the binding between antibody and antigen, we calculate the edit distance between the CDR3 fragments of the heavy chains (CDR-H3) and those of the known binders, and use a threshold of 85% similarity as the sequence identity. An 85% Hamming-distance threshold for B cell antibody sequence clustering (identifying similar B cell antibody sequences responding to the same antigen/epitope) was previously suggested by Gupta et al. (2017), and this method has since been widely used for B cell antibody repertoire analysis in different studies (Montague et al., 2021; Wang et al., 2022).

SARS Binder Analysis To provide a more intuitive analysis of the similarity between our predicted antibodies and true SARS-CoV-2 binders, we investigate the 3D structure of the true binding antibodies and the mutation sites of our predicted sequences on the corresponding structure. The high-resolution structure of true binding antibody #3 in Table 3 in complex with SARS-CoV-2 is shown in Figure 13 (PDB code: 7N62). The interaction interface between the antibodies and the SARS-CoV-2 spike/RBD is shown in Figure 3 in the main body, with CDR-H3 shown in orange. Only a single residue, highlighted in red, differs between the predicted binder and the true binder. Notably, this differing residue does not localize to the direct binding site or the CDR-H3 core, suggesting that the sequence difference is unlikely to affect the antibody-virus interaction. Furthermore, we found that the epitopes of the 11 identified SARS-CoV-2 antibodies cover a wide range of structures, from the traditional RBD domain to novel non-RBD epitopes such as S2 and NTD, as shown in Table 3. This result shows that our method enables diverse-epitope antibody discovery.

Probability Threshold Sensitivity In order to investigate the influence of the threshold used to determine the potential binders, we try different thresholds in Table 11; a sketch of the matching and hit-rate computation is given below.
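The following minimal sketch illustrates the matching procedure described above: candidates above a probability threshold are compared against known binders by a CDR-H3 identity derived from edit distance, and a match above the identity threshold counts as a hit. The function names, the normalization by the longer fragment, and the toy sequences are assumptions for illustration; the actual study matches against the CoV-AbDab database with the thresholds reported in Tables 3 and 11.

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def cdrh3_identity(query: str, reference: str) -> float:
    """Sequence identity derived from edit distance, normalized by the longer CDR-H3."""
    longest = max(len(query), len(reference))
    return 1.0 - edit_distance(query, reference) / longest if longest else 0.0

def hit_rate(predictions, known_binders, prob_threshold=0.5, identity_threshold=0.85):
    """predictions: list of (cdrh3_sequence, predicted_probability) pairs.
    A selected sequence is a hit if it matches any known binder above the identity threshold."""
    selected = [seq for seq, p in predictions if p > prob_threshold]
    hits = sum(any(cdrh3_identity(seq, ref) >= identity_threshold for ref in known_binders)
               for seq in selected)
    return hits, (hits / len(selected) if selected else 0.0)

# Toy usage with made-up CDR-H3 fragments.
preds = [("ARDLGYSSGWYFDY", 0.91), ("ARDLGYSSGWYFDV", 0.74), ("ARGGTLDY", 0.42)]
binders = ["ARDLGYSSGWYFDY"]
print(hit_rate(preds, binders))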
Here, the probability threshold means that if the sequence predictor assigns a probability higher than the threshold to a sequence, that sequence is viewed as a potential binder. If a predicted binder has a sequence similarity higher than 85% with an existing binder in CoV-AbDab, we count it as one hit. As the threshold increases, the hit rate correspondingly increases from 0.528% to 0.562%, indicating that our model may enable priority selection of SARS-CoV-2 antibodies and reduce experimental costs.

Sequence Similarity Sensitivity In previous work, two antibodies with CDR-H3 similarity over 85% are viewed as similar and have a high probability of sharing the same functionality. Here we also check the influence of different similarity thresholds on binder matching. The results are shown in Figure 14, with the probability threshold fixed at 0.5. As we can see, the baselines show similar trends across all thresholds. If we relax the threshold, there are more matching sequences; however, the predictors then have less of an advantage over a random ordering, which indicates that the ranking matters less when the similarity threshold is relaxed.

The Potential of New Binder Discovery During the training of our sequence-level predictor, we have no reliable ground-truth labels, which means that the model never knows which sequences can bind SARS-CoV-2 in a real-world scenario. However, the model can learn from the noisy data and rank the real SARS binders with high probabilities. A sequence identity of 1 means that the CDR-H3 fragment can be found directly in the CoV-AbDab database, which implies that the sequence has been verified by wet-lab testing. The other sequences with an identity over 90% are thought to have binding performance similar to existing binders, indicating that they are promising SARS binders that can help the discovery of therapeutic antibodies for SARS-CoV-2.

A.8 EXTENDED STUDY FOR DISEASE DIAGNOSIS

It would be interesting to see whether our sequence classifier can be used for other applications, such as disease diagnosis. Each human is estimated to maintain about $10^8$-$10^{10}$ distinct antibody sequences, constructing an informative encyclopedia recording past and present health and disease. Interpreting the patterns of these sequences has already proved useful in disease diagnosis and allows us to assess many infectious diseases without expensive laboratory testing. However, it is difficult to distinguish which antibody sequence among the numerous sequences is responsible for the recognition of a specific antigen, which hinders the discovery of antibodies for diseases (Zaslavsky et al., 2022; Lu et al., 2018; Greiff et al., 2020). Benefiting from recent high-throughput sequencing, we can obtain millions of antibody sequences from an individual human. At the same time, we can obtain a disease label indicating whether the individual is infected with the disease. The main challenge is that we can hardly obtain sequence-level disease labels indicating whether an antibody sequence is related to the disease. Thus, we follow the practice of Roskin et al. (2020) to use the individual label as a rough sequence label and train a sequence-level predictor. Then we use this predictor to score the sequences of an individual profile and take the trimmed mean of the scores as the individual-level score. We use the same data processing as for Antibody Discovery, as stated in Section A.3.
For health/SARS/HIV/Ebola/Allergy/SLE/MS, we set the 'Disease' field to 'None', 'SARS-CoV-2', 'HIV', 'Ebola', 'Allergy', 'SLE', and 'MS', respectively. We thereby obtain 87/133/51/14/12/8/8 patient profiles for the respective types. We also perform 10-fold cross-validation and select sequences with high redundancy.

Disease Classification We use all these disease profiles to build a 7-class classification task for disease diagnosis. Previous biological studies mainly use this multi-class classification task for disease diagnosis (Zaslavsky et al., 2022; Wang et al., 2022), highlighting that discriminatory power among different diseases is important for disease diagnosis. The results are shown in Table 12. We find that both PPLMs and PALMs show results comparable to the randomly initialized model, suggesting that the finetuning part plays the more important role and the pre-trained language models cannot help this task.

Sequence-level Predictor for Various Diseases As before, we train a sequence-level predictor for each disease. The results are shown in Table 13. Compared with Table 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for the individual-level predictor. This is mainly due to the trimmed mean we use to obtain individual-level results, a central estimate that is robust to noisy labels (a sketch of this aggregation is given below). Overall, our model achieves results comparable to other models for sequence prediction with noisy labels and better results for individual diagnosis.

Individual-level Predictor for Various Diseases We observe that our evolution-aware EATLM performs best as the individual-level classifier for determining whether a patient suffers from SARS-CoV-2. Besides, PALMs significantly outperform PPLMs. The results are shown in Table 14.
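As a concrete illustration of the sequence-to-individual aggregation described above, the sketch below scores a repertoire by the trimmed mean of its sequence-level probabilities. The trimming fraction, the toy score distributions, and the helper name are assumptions for illustration; in the actual pipeline the probabilities come from the finetuned sequence-level predictor of §A.7/§A.8.

import numpy as np
from scipy import stats

def individual_score(sequence_probs, trim_fraction=0.1):
    """Aggregate noisy sequence-level probabilities for one repertoire into an
    individual-level disease score via a trimmed mean, which is robust to label noise.
    The 10% trimming fraction is an assumption, not the paper's exact setting."""
    return stats.trim_mean(np.asarray(sequence_probs, dtype=float), trim_fraction)

# Toy usage: sequence-level probabilities for two repertoires; a higher trimmed-mean
# score suggests the repertoire is more likely disease-associated.
rng = np.random.default_rng(0)
patient = np.clip(rng.normal(0.55, 0.2, 2000), 0, 1)   # overall elevated scores
healthy = np.clip(rng.normal(0.35, 0.2, 2000), 0, 1)
print(round(individual_score(patient), 3), round(individual_score(healthy), 3))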
1. What is the focus of the paper in terms of protein language models?
2. What are the strengths of the paper regarding its clarity, quality, novelty, and reproducibility?
3. What are the weaknesses of the paper regarding its contributions and significance?
4. How does the reviewer assess the importance of the problem addressed by the paper?
5. Are there any suggestions for improving the contribution of the paper?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
This paper provides a comprehensive analysis of Pre-trained Protein Language Models (PPLMs) and specific Pre-trained Antibody Language Models (PALMs) on predictions for different antibody tasks, and introduces a new pre-training method that better utilizes antibody-specific information to obtain a pre-trained antibody language model.
Strengths And Weaknesses
Strengths: This paper is really well-written and easy to follow. The authors provide essential biological and technical backgrounds and clearly state the status, problems, methods, and empirical results. The problem it tries to solve is important, and the authors provide great insights into this problem. The provided benchmark could be helpful for future studies.
Weaknesses: Besides the analysis and insights, the contribution may not be significant enough. From the modeling perspective, this paper just introduced two new training targets besides MLM that lead to slightly better performance compared to baselines such as Ablang-H. From the benchmark perspective, providing new datasets or incorporating more existing datasets would make this contribution much more significant.
Clarity, Quality, Novelty And Reproducibility
Clarity: Great. Quality: Good. Novelty: Good. Reproducibility: Easy to reproduce.
ICLR
Title
On Pre-training Language Model for Antibody
Abstract
Antibodies are vital proteins offering robust protection for the human body from pathogens. The development of general protein and antibody-specific pre-trained language models has facilitated antibody prediction tasks. However, there have been limited studies that comprehensively explore the representation capability of distinct pre-trained language models on different antibody tasks. To investigate the problem, we aim to answer several key questions in this paper, such as how pre-trained language models perform on antibody tasks with different specificity and how introducing specific biological mechanisms into the pre-training process can benefit the model. Additionally, we evaluate whether the learned antibody pre-trained representations can be applied to real-world antibody problems, like drug discovery and immune process understanding. Previously, the lack of an available benchmark largely hindered studies aiming to answer these questions. To aid our investigation, we provide an AnTibody Understanding Evaluation (ATUE) benchmark. We comprehensively evaluate the performance of protein pre-trained language models through an empirical study, along with conclusions and new insights. Our ATUE and code are released at https://github.com/dqwang122/EATLM.

1 INTRODUCTION

Antibodies are a type of protein that is useful for diagnosing and treating a variety of diseases, including SARS-CoV-2 (Zhu et al., 2022). It is crucial to understand the information contained in antibody sequences to develop effective therapeutic antibodies and advance our understanding of the immune system (Greiff et al., 2020; Lu et al., 2018; Yermanos et al., 2018). Recent advances in general Pre-trained Protein Language Models (PPLMs) and specific Pre-trained Antibody Language Models (PALMs) offer new possibilities for antibody-related tasks. For example, PPLMs have shown promising results in transferring learned representations to antibody tasks (Kim et al., 2021; Zaslavsky et al., 2022), and PALMs have been found to improve model performance in antibody paratope prediction (Leem et al., 2022). Despite these successes, few studies have thoroughly examined the capability of different pre-trained language models (e.g., general PPLMs and specific PALMs) on various antibody tasks, which hinders the development of better architectures for antibody discovery and modification. To investigate this problem, we compare the performance of the pre-trained protein language model ESM (Rives et al., 2021), the pre-trained antibody language model AntiBERT (Leem et al., 2021), a pre-trained antibody language model EATLM that introduces antibody-specific mechanisms, and a model trained from scratch (No Pretrain) on three antibody tasks with varying levels of specificity. The result is illustrated in Figure 1. (*Work was done while Danqing Wang was at ByteDance Research.) Here, specificity refers to the antibody's unique evolution process, distinct from that of general proteins, by which it obtains functionality such as the ability to bind an antigen (the definition is discussed in detail in §3.1). We can see that while ESM performs well on tasks that are less antibody-specific, its performance decreases significantly on tasks that are more specific. Additionally, AntiBERT does not demonstrate a clear advantage over the non-pre-trained model on the high-specificity task. These results highlight the limitations of current pre-trained language models for antibody-related studies.
Using general PPLM representations directly may harm performance, and current pre-training strategies for PALMs may not fit the specific biological functions of antibodies. This emphasizes the need for comprehensive model design guidelines for various antibody tasks. Our main focus is to address the following questions:

(I) How well do pre-trained language models perform on antibody tasks with varying specificity? Addressing this question is mainly hindered by two challenges: the lack of a reliable antibody-specific benchmark for performance evaluation, and the lack of comprehensive studies of current PPLMs and PALMs.

(II) Can incorporating biological mechanisms, specifically antibody-specific evolution, into the pre-training process provide additional benefits for antibody representation learning? This idea has been explored in several computational biology studies, which have demonstrated promising results in antibody-related tasks such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019). It is therefore interesting to know whether antibody representation learning can benefit from the incorporation of antibody-specific evolution information.

(III) Are the pre-trained antibody representations useful in practical applications, such as drug discovery and immune process understanding? Antibodies are critical in drug development, and it is essential to determine whether pre-trained representations can help biologists comprehend antibody functions or develop drugs.

To investigate these questions, we first propose the antibody study benchmark AnTibody Understanding Evaluation (ATUE). This is the first antibody benchmark with four real-world supervised tasks related to therapeutic antibody engineering, B cell analysis, and antibody discovery. These tasks cover a range of specificity levels to evaluate models on different aspects of antibody biological functions. Based on ATUE, we conduct empirical studies to investigate the representation ability of distinct pre-trained language models. To explore the impact of incorporating specific biological mechanisms into antibody pre-training, two objectives are introduced to tailor masked language modeling to evolution: (1) Ancestor germline prediction guides the model to discriminate the evolutionary relationship between an antibody and its ancestral sequence. (2) Mutation position prediction mimics hypermutation during evolution. These methods are used to investigate the representation ability of an evolution-tailored antibody language model. Finally, we take a close look at SARS-CoV-2 antibody discovery to investigate the pre-trained representations in a real-world scenario.

We have three main contributions in this study:
• We created the first comprehensive antibody benchmark called ATUE to support antibody application studies, which includes four real-world supervised tasks ranging from low to high specificity. We also introduce two new objectives for antibody pretraining that incorporate antibody-specific evolutionary information.
• We made key observations that provide guidelines for better antibody representations. Firstly, PPLMs perform well on antibody tasks that are highly related to structure, but they perform poorly on tasks with high antibody specificity. Secondly, in most cases, PALMs perform as well as or even better than PPLMs with less pre-training data.
Thirdly, PALMs can be improved by incorporating the evolution process, but the evolution information from MSAs does not always benefit antibody tasks.
• We identified 11 potential SARS-CoV-2 binders whose sequences are highly identical to existing therapeutic antibodies that bind the virus, which could accelerate real-world antibody discovery.

2 RELATED WORK

Our work focuses on studying the effectiveness of pre-trained protein and antibody language models for antibody-specific tasks. Below we review the representative existing methods; details are listed in Table 1.

Pretrained Protein Language Models (PPLMs) There is increasing interest in exploring large-scale language models trained on protein sequences (Rao et al., 2019; Madani et al., 2020; Meier et al., 2021; Chen et al., 2022). These models have been shown to achieve state-of-the-art capability in predicting protein structure and function. ProtTrans (Elnaggar et al., 2021) and ESM-1b (Rives et al., 2021) take individual protein sequences as input and adopt Transformer language models for pre-training, demonstrating that self-supervision is a promising paradigm for protein secondary structure, contact, homology, and function prediction. To extract evolutionary information from protein sequences, Rao et al. (2021) proposed the MSA-transformer/MSA-1b model, which uses multiple sequence alignments (MSAs) instead of a single query sequence as input. This model is superior to ESM-1b for structure prediction, demonstrating that evolutionary information can benefit protein representation learning. Despite the progress in the field, few studies have reported results on antibody tasks.

Pretrained Antibody Language Models (PALMs) Encouraged by the success of PLMs in protein representation learning, a series of works seeks to learn antibody representations from antibody sequences. AntiBERTy (Ruffolo et al., 2021) proposed the first antibody-specific language model, exploring a Transformer trained on 558M natural antibody sequences from the OAS database. Olsen et al. (2022b) train two language models for antibodies: a heavy-chain version, Ablang-H, and a light-chain version, Ablang-L. The study reported transfer learning results on restoring missing residues of antibody sequences, a task similar to the pre-training objective. AntiBERTa (Leem et al., 2021) trains an antibody language model on OAS and finetunes it for paratope position prediction, achieving state-of-the-art performance. Recently, Li et al. (2022) proposed an antibody-specific language model and explored its performance on SARS-CoV-2 antigen binding, showing that context-dependent representations of antibody sequences benefit binding prediction.

3 FRAMEWORK

In this section, we first give a brief introduction to antibodies and their specific evolution. Then we propose the first antibody-specific benchmark (ATUE), composed of four tasks with different specificities. Finally, we implement several PPLM and PALM baselines and design an evolution-aware PALM to incorporate the biological mechanism into the pre-training process.

3.1 BACKGROUND

Antibody Antibodies are vital proteins generated by the immune system to remove harmful foreign pathogens from the human body. They can specifically bind to antigens on a pathogen and recognize it. Antibodies are composed of two identical heavy chains and two identical light chains and form a large Y-shaped structure. The two tips contain highly variable loops, called Complementarity Determining Regions (CDRs), which are responsible for antigen binding.
Antibody Specific Evolution Notably, the antibody evolution process is significantly different from that of proteins, providing a good opportunity to investigate the impact of general PPLMs on specific subdomains. To perform its protective function, the antibody sequence undergoes evolutionary selection to search for optimal patterns that can specifically recognize pathogens (Honjo & Habu, 1985). Deciphering the information stored in antibody sequences may benefit our understanding of disease and accelerate therapeutic antibody development (Greiff et al., 2020; Lu et al., 2018; Yermanos et al., 2018). During evolution, the random recombination of V/D/J-gene segments provides the initial diversity for the ancestor sequence (germline). Upon exposure to a pathogen, this sequence undergoes frequent mutations to search for progeny antibody sequences with optimal binding specificity. In other words, gene recombination provides millions of germlines in the human body, and the germlines further mutate into a huge number of progeny antibodies. Thus, the ancestor relationship between an antibody and its corresponding germline, as well as the mutations it undergoes, together determine its unique biological functions. In brief, the evolutionary relationships between antibodies arise to gain new functions such as antigen binding. This is significantly different from protein evolution, which maintains certain functions across different organisms. We further illustrate this process in Figure 7 in §A.1.

Unsupervised Antibody Corpus To obtain the evolutionary information of antibody sequences, we utilize the Observed Antibody Space (OAS), a database containing more than 1.5 billion natural antibody sequences (Kovaltsuk et al., 2018; Olsen et al., 2022a). The antibody sequences in the database have been precisely annotated with evolutionary and structural information, including the paired germline and CDR3 for each antibody. To pair each antibody with the germline used in the pretraining task, we used the annotated sequences provided in the OAS database. Further information on data processing can be found in §A.2.

3.2 ANTIBODY UNDERSTANDING EVALUATION (ATUE)

We provide four biologically relevant downstream prediction tasks to serve as antibody benchmarks, covering four major real-world application aspects for antibodies: therapeutic antibody engineering, disease diagnostics, antibody discovery, and B cell maturation analysis. The antibody specificity of these tasks ranges from low to high, offering scaled tasks with subdomain specificity for pre-trained language model evaluation. Detailed information is listed in Figure 2. All data are publicly available and used under appropriate licenses. For each task, we focus on the following aspects and leave the details to the Appendix (§A.3 and §A.4): [Definition] The formal definition of the task and the understanding ability required. [Impact] The importance of the task in the biological area. [Dataset] The data source and size. [Specificity] The antibody-specific evolution characteristics that distinguish the task from general protein tasks.

We use several classification metrics to evaluate performance. Accuracy (ACC) is the ratio of correct predictions. The Matthews Correlation Coefficient (MCC) is the correlation coefficient between true and predicted values. F1 is the weighted average of precision and recall. AUC is the area under the ROC curve, which summarizes performance across all classification thresholds.
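As a small illustration, the four metrics above can be computed with standard scikit-learn utilities as sketched below. This is an assumed, generic implementation for a binary task (the benchmark also includes multi-class and token-level tasks, which would use the appropriately averaged variants), not the authors' evaluation code.

from sklearn.metrics import accuracy_score, matthews_corrcoef, f1_score, roc_auc_score

def classification_metrics(y_true, y_score, threshold=0.5):
    """ACC, MCC, F1, and AUC for a binary task, given ground-truth labels and
    predicted probabilities; hard predictions are obtained by thresholding."""
    y_pred = [int(s > threshold) for s in y_score]
    return {
        "ACC": accuracy_score(y_true, y_pred),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "F1": f1_score(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_score),
    }

# Toy usage with made-up labels and scores.
print(classification_metrics([1, 0, 1, 1, 0, 0], [0.9, 0.2, 0.7, 0.4, 0.3, 0.6]))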
Antigen Binding Prediction is a binary sequence classification task to determine whether the CDR region of an antibody can bind to the specific antigen. [Impact] A better understanding of the binding affinity between antibodies and antigens can accelerate the affinity optimization of therapeutic antibodies. [Dataset] We collect the antigen binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. [Specificity] Low. All the antibody sequences in the dataset are derived from a single germline sequence, indicating that the task is not related to antibody-specific evolution.

Paratope Prediction This task identifies the binding positions on the antibody sequence; it is a sequence labeling task that predicts a 0/1 label for each residue of the CDR fragments. [Impact] Exploring the paratope (the binding positions between antibody and antigen) can help to understand the binding mechanisms of therapeutic antibodies. [Dataset] The paratope data is collected from Liberis et al. (2018) with 1,662 CDR segments on 277 antibodies. [Specificity] Medium, because only part of the antibodies in the database are derived from antibody-specific evolution.

B Cell Maturation Analysis This is a 6-category classification task to distinguish the maturation stage of B cell antibody sequences. Each sequence belongs to one of {immature, transitional, mature, plasmacytes, memory IgD+, memory IgD-}. It requires the model to learn a representation sensitive to different maturation states. [Impact] It benefits the understanding of the mechanisms of immune evolution, a critical biological process in the immune system affecting the function and antigen specificity of antibodies (Ghraichy et al., 2021; Meffre et al., 2000). [Dataset] We collect 88,094 sequences from Mroczek et al. (2014) with 6 maturation stages. [Specificity] High. Antibody evolution is highly coupled with B cell maturation (Meffre et al., 2000).

Antibody Discovery This is a binary sequence classification task to distinguish which antibodies are directly responsible for SARS-CoV-2 binding. The task is highly challenging for two reasons: (1) less than 1% of antibodies from SARS-CoV-2 patients are directly responsible for virus binding, and (2) it is hard to obtain a reliable sequence-level classifier using unreliable and noisy individual-level labels. [Impact] Antibody discovery from the B cell repertoire has been widely recognized as an important approach to accelerate antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021), and has achieved great success for SARS-CoV-2 antibody discovery (Kovaltsuk et al., 2018; Cao et al., 2020; Shiakolas et al., 2022). [Dataset] We collected antibody sequences from 133 SARS-CoV-2 patients and 87 healthy individuals from OAS and followed the processing pipeline of Kim et al. (2021). Inspired by Zaslavsky et al. (2022), we match the high-ranked sequences with the sequences in the CoV-AbDab database (Raybould et al., 2021), which have been proven to bind SARS-CoV-2 in wet-lab experiments. [Specificity] High. It is widely reported that antibodies derived from the same disease, such as SARS-CoV-2, share strong convergent germline signals (Galson et al., 2020).

3.3 EXPERIMENT SETUP

Based on the antibody benchmark ATUE, we evaluate the performance of current pre-trained language models on tasks with different specificity.
Furthermore, to investigate the benefit of introducing the biological mechanism, we incorporate evolutionary information as extra pretraining objectives for PALMs and propose EATLM. The detailed description of the objectives and the implementation can be found in §A.5.

Current Pre-trained language models Existing antibody and protein language models are summarized in Table 1. Since the code and pre-training data of AntiBERTa are not released, we train a BERT model named AntiBERT on the full OAS database following the same settings as the original study. MSA-1b (Rao et al., 2021) takes protein-specific evolutionary sequences (multiple sequence alignments, MSAs) as input. Because it is hard to align sequences between antibodies due to the diversity of CDR3, we take the germline and create pseudo-MSAs with depth 2. We add a linear layer on top of the language models and finetune the whole model on the downstream tasks.

Evolution-aware antibody pretraining method To incorporate the biological mechanism into the pre-training, we propose a model with evolution information: the Antibody EvoluTion-aware pretraining Language Model. An antibody can be represented as A and the germline of the individual antibody as G. Typically, PALMs are trained with basic masked language modeling (MLM). Based on this, we design two further pre-training objectives to simulate the biological mechanism of antibody evolution. The evolutionary relationship between the antibody and its germline is two-fold: (i) whether the antibody and the germline have an evolutionary relationship; (ii) how residues mutate from the germline to yield the specific antibody. Two evolution-related objectives are introduced to address these questions: Ancestor Germline Prediction (AGP) and Mutation Position Prediction (MPP). For ancestor germline prediction, we substitute the paired germline G with a random germline G' from the batch with probability p. The model is trained to distinguish the ancestor germline of the antibody by capturing their shared features. For mutation position prediction, for each token in the germline G, the objective is to predict a 0/1 label indicating whether this token has been mutated. For the antibody sequence S, we mask the mutated positions and predict these tokens.

Hyper-parameters We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. For each task in ATUE, we finetune the model with supervised data. We follow the standard split for Antigen Binding Prediction. For other tasks that do not provide a standard split, we use 10-fold cross-validation. Since our pre-training model learns the representation of the full antibody sequence, we expand the CDR fragments to the full antibody by searching the biological database for the therapeutic antibody engineering tasks. We also use the same Transformer architecture trained from scratch for each downstream task; this model is denoted as non-pretrain since it is not pre-trained on a protein/antibody database.

Reproduction We conduct 10-fold cross-validation on paratope prediction, B cell maturation analysis, and antibody discovery. For antigen binding prediction, we conduct three repeated runs with different random seeds. We report the average results and the standard deviation.
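The evaluation protocol above (10-fold cross-validation with mean and standard deviation) could look like the generic sketch below. It is an illustrative skeleton under assumptions: train_and_score stands in for finetuning and scoring the actual model, and the real splits are made by antibody sequence or patient profile as described in §A.3, not by a plain KFold over examples.

import numpy as np
from sklearn.model_selection import KFold

def cross_validate(sequences, labels, train_and_score, n_splits=10, seed=0):
    """Generic 10-fold cross-validation loop reporting mean and standard deviation.
    `train_and_score` is a placeholder callable that finetunes a model on the training
    fold and returns a metric computed on the validation fold."""
    sequences, labels = np.asarray(sequences), np.asarray(labels)
    scores = []
    for train_idx, val_idx in KFold(n_splits=n_splits, shuffle=True, random_state=seed).split(sequences):
        scores.append(train_and_score(sequences[train_idx], labels[train_idx],
                                       sequences[val_idx], labels[val_idx]))
    return float(np.mean(scores)), float(np.std(scores))

# Toy usage with a dummy scoring function standing in for model finetuning.
dummy = lambda X_tr, y_tr, X_va, y_va: float(np.mean(y_va))
mean, std = cross_validate(["seq%d" % i for i in range(100)],
                           np.random.default_rng(0).integers(0, 2, 100), dummy)
print(f"{mean:.3f} ± {std:.3f}")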
4 RESULTS AND ANALYSIS In this section, we present the experimental results and analysis for the representation capability of existing PPLMs, PALMs, and the EATLM method with evolutionary incorporation, using ATUE benchmark. Additionally, we summarize our observations aiming to address the problems highlighted in the introduction. 4.1 MAIN RESULTS Antigen binding We evaluate the performance PLMs models for antibody binding and paratope prediction, which are less antibody specific. The results in Table 2 indicate that PPLMs and PALMs perform similarly on these tasks, suggesting that PALMs can learn comparable general protein representations to PPLMs. Among different PALMs, Ablang-H outperforms Ablang-L and AntiBERT. It indicates that separate training for heavy and light chain sequences is beneficial for these tasks. Moreover, the introduction of AGP and MPP provides improvement over AUC and F1 metrics. Paratope prediction The results presented in Table 2 demonstrate that for paratope prediction, both PPLMs and PALMs can significantly boost the prediction accuracy over the model with pretraining. However, PALMs do not exhibit a significant advantage over PPLMs. EATLM outperforms other models, particularly in terms of F1 and MCC, while other models exhibit high recall and low precision, indicating that they tend to predict more residues as binding sites. With the incorporation of mutation residue prediction, EATLM can focus on the specific mutated positions adapted to bind with antigen. Among the two PPLMs, MSA-1b outperforms ESM-1 on F1 and MCC, which benefits from the structure information learning from MSAs. B Cell Analysis In this task, we investigate the ability of different pre-trained language models to distinguish between various B cell mature states during evolution. The findings, as demonstrated in Table 2, indicate that PPLMs are not effective in discerning minor differences between B cell sequences, resulting in mediocre results. Both ESM-1 and MSA-1b perform significantly worse than randomly initialized models. MSA-1b, in particular, performs poorly among all pre-trained language models, implying that representations that excel in protein structure prediction may be detrimental to antibody-specific tasks. Conversely, all PALMs show promising results for the task. This may be due to the fact that the general protein has little correlation with the specific antibody mature process and cannot capture this feature during protein pretraining. Our EATLM significantly outperforms the other PALMs. This is because our model can effectively capture the evolution feature and better distinguish between B cells at different stages of maturation by explicitly modeling the biological mechanism. We conduct further analysis to figure out whether our EATLM successfully captures sequence characteristics during the evolutionary process. We explore the probabilities of predicting antibodies in class i to class j. The results shown in Figure 3 reveal EATLM can easily classify the immature B cell with an accuracy of 0.9. It is consistent with the biological study that CDR3 sequence length in immature B cells is significantly shorter than that of the other mature B cells (Ghraichy et al., 2021). From the diagonal, we can figure out that our model tends to mistake the B cell sequences with their previous or post-evolutionary stage, consistent with the biological process. 
Antibody Discovery We investigated the potential of PPLMs and PALMs in aiding the discovery of antigen-specific antibodies for real-world problems. To achieve this, we followed a two-step process similar to Zaslavsky et al. (2022). Firstly, we created a sequence classifier to differentiate SARS-CoV-2 antibodies using noisy individual-level labels. Secondly, we compared the highly-ranked sequences with true binding sequences in the CoV-AbDab database (Raybould et al., 2021) to determine whether there are similarities. We used a 90% sequence identity threshold to determine the likelihood of biological functionality similar to the existing binders. The experimental design is outlined in §A.7.

Figure 4 shows the cumulative sum of matched sequences, in the order of the probabilities predicted by different pre-trained language models, for the SARS-CoV-2-specific antibody discovery task. We can observe that PALMs outperform PPLMs in identifying potential binders, as the sequences predicted with high probability by PALMs match better with the existing binders. Moreover, among the PALMs, EATLM (the red line) significantly outperforms the other models. EATLM is initially the quickest method to find potential binders, is briefly overtaken by Ablang-H, and then overtakes it again before converging. This suggests that EATLM is the most effective method for identifying all potential binders in this dataset.

Furthermore, we list 11 potential binder sequences discovered by EATLM in Table 3. Without supervised labels, EATLM assigns high probabilities to 2 existing SARS-CoV-2 binding antibodies. Besides, EATLM suggests 9 potential sequences with high CDR-H3 sequence identity, indicating the potential for diverse-epitope antibody discovery and selection. These results demonstrate the potential of EATLM in therapeutic antibody discovery. To validate whether the antibody sequences with 90% sequence identity can indeed bind the same target, we investigate the 3D structure of the true binding antibody. Table 4, which lists each predicted binder, the matched existing binder, its epitope (e.g., the SARS-CoV-2 NTD), and the sequence identity, shows only a single residue difference between the predicted binder and the existing binder, suggesting the predicted binders are highly likely to interact with SARS-CoV-2.

4.2 HOW DOES THE EVOLUTION PRETRAINING TASK INFLUENCE THE REPRESENTATION?

To understand the reasons for the better performance of EATLM on antibody-related tasks, we analyze the pre-trained representations. The objective of this analysis is to evaluate the effectiveness of the evolution-aware pre-training strategies from two perspectives: (1) Does the pre-trained representation of antibodies reflect their ancestor relationship? (2) Is the specificity of antibodies captured by the evolution objectives?

Ancestor Germline Visualization We perform UMAP visualization analyses in Figure 5. First, we observe that antibodies evolved from the same germline are nicely clustered together (Figures 5a and 5b), indicating that the learned embedding encodes germline information. Besides, sequences with similar scales of evolutionary distance tend to cluster together, and a clear gradation of evolutionary distance can be observed in Figures 5c and 5d. The visualization provides a sanity check for the ability of EATLM to extract the sequence information of antibodies.
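A sketch of this visualization using the umap-learn package follows; the embedding inputs and coloring are illustrative.

```python
# Project pre-trained sequence embeddings to 2D and color by the annotated
# ancestor germline (the coloring by evolutionary distance is analogous).
import umap
import matplotlib.pyplot as plt

def plot_umap(embeddings, germline_ids):
    """embeddings: [N, hidden] pre-trained sequence representations.
    germline_ids: [N] integer id of each antibody's annotated germline."""
    coords = umap.UMAP(n_components=2, random_state=0).fit_transform(embeddings)
    plt.scatter(coords[:, 0], coords[:, 1], c=germline_ids, s=3, cmap="tab20")
    plt.title("Antibody embeddings colored by ancestor germline")
    plt.show()
```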
Accuracy of Mutation Position Based on the specific evolution process described in §3.1, the mutations introduced during evolution bring specificity to the antibody. Thus, we explore the model's ability to predict the mutated residues from the masked tokens, which reflects how well the model captures this specificity. We find that although AntiBERT predicts with an accuracy of 0.889 over all positions, it fails on mutation positions with an accuracy of 0.031. In contrast, EATLM achieves an accuracy of 0.443 on mutation positions, which indicates that the model captures the specificity information. Note that during MPP training, we mask the mutation positions on the antibody sequence, i.e., the positions that differ from its germline. Thus, the model cannot obtain the mutated residues from the germline directly; the only way is to learn the underlying mutation rules. The full results are shown in Table 8 in the Appendix.
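A small sketch of how these two accuracies can be computed, with arrays standing in for model predictions:

```python
# Residue-prediction accuracy, computed once over all evaluated positions and
# once restricted to positions where the antibody differs from its germline.
import numpy as np

def position_accuracies(pred_tokens, true_tokens, is_mutation):
    """pred_tokens, true_tokens: [N] predicted / ground-truth residues.
    is_mutation: [N] boolean, True where the antibody residue differs from the
    aligned germline residue."""
    correct = np.asarray(pred_tokens) == np.asarray(true_tokens)
    mut = np.asarray(is_mutation, dtype=bool)
    acc_all = correct.mean()            # e.g., 0.889 for AntiBERT
    acc_mutation = correct[mut].mean()  # e.g., 0.031 (AntiBERT) vs. 0.443 (EATLM)
    return acc_all, acc_mutation
```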
4.3 KEY OBSERVATIONS

The performance of pre-trained language models is highly dependent on the specificity of the task. In tasks with low antibody specificity, PPLMs perform similarly to PALMs, indicating that using general protein representations from PPLMs is an effective form of transfer learning for these tasks. On medium-specificity tasks such as paratope prediction, PALMs show their advantage and outperform PPLMs. However, for tasks with high specificity, PPLMs have significantly lower performance, suggesting that general pre-trained protein models are insufficient for antibody-specific representation learning. Additionally, incorporating protein evolution information does not always benefit antibody tasks, especially those that require antibody evolution information, as shown by the 20% decrease in performance observed with MSA-1b compared to the model without pre-training. This finding is consistent with the biological understanding that the mechanism of antibody evolution is significantly different from that of proteins.

Figure 6: Performance summary of various pre-trained language models (performance increase (%) versus task specificity: low, medium, high; models shown: No Pretrain, ESM-1, AntiBERT, EATLM, MSA-1b, Ablang-H, Ablang-L).

Incorporating the biological evolution mechanism into PALMs generally benefits antibody prediction tasks. The inclusion of evolution-related training objectives assists in identifying mutation positions on antibodies, which distinguish an antibody from its germline. Notably, the performance increase of EATLM in comparison to other PALMs is linked with the level of task specificity. The ablation study shows that removing the evolution-related pretraining objectives leads to decreased performance, confirming their contribution to the prediction tasks. Further research in this direction is promising and could offer more in-depth insights.

Antibody pre-trained representations are helpful for real-world drug discovery. By utilizing the language model, we predict the likelihood of each antibody binding with SARS-CoV-2. Despite lacking precise sequence-level labels, we successfully identify 11 promising antibody binders.

5 CONCLUSIONS AND LIMITATIONS

In this paper, we conduct a detailed investigation into the effects of pre-trained protein and antibody language models on various antibody tasks. To facilitate research at the intersection of antibodies and machine learning, we provide ATUE, consisting of four important antibody tasks from four different biological categories with varying levels of antibody specificity. However, there are certain constraints to our research. Firstly, due to the scarcity of data, the diversity of tasks in our ATUE is limited. As more data becomes available, we anticipate expanding our benchmark to include a greater range of diseases and larger datasets. Additionally, we did not examine any 3D structure information during antibody pre-training. As antibody structures offer more information than sequences alone, such as geometry, incorporating structural information in future studies may lead to improved results.

ETHICS STATEMENT

This research, involving the use of pre-existing data and computational methods, did not involve any human or animal subjects, and therefore no ethical approval was required. The authors followed all applicable ethical standards and guidelines for data analysis and reporting. All data used in this study were obtained from publicly available sources, and proper citation and attribution have been given. The authors have made efforts to ensure that the research presented in this paper does not infringe upon any existing copyrights or intellectual property rights.

ACKNOWLEDGEMENT

We thank members of ByteDance Research for discussion, and Zaixiang Zheng and Yi Zhou for useful writing suggestions. Hao Zhou is supported by the Vanke Special Fund for Public Health and Health Discipline Development, Tsinghua University (No. 20221080053), and the Guoqiang Research Institute General Project, Tsinghua University (No. 2021GQG1012).

A APPENDIX

A.1 ANTIBODY SPECIFIC EVOLUTION

Antibodies, composed of two identical heavy chains and two identical light chains, form a large Y-shaped structure, where the two tips are responsible for pathogen binding. Antibody evolution, described by sequence-sequence relationships between ancestor and progeny antibodies, reflects antibodies' key antigen-binding function (Honjo & Habu, 1985). During antibody evolution (Figure 7), the initial diversity is encoded into the ancestor sequence through random recombination of V-, D- and J-gene segments. Upon exposure to a pathogen, the sequence undergoes frequent mutations to search for progeny sequences with optimal binding specificity. Sequence evolution analysis has been employed by many computational biology studies and shows promising results in antibody-related tasks, such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019).

Importantly, antibody evolution is significantly different from that of proteins. Antibodies derive from only hundreds of thousands of ancestor sequences, the so-called germlines. To bind dozens of millions of diverse antigens, antibodies need to mutate from the ancestor sequences to gain new functions (Figure 7). Therefore, the non-conserved (mutated) amino acids play important roles in structure and function. On the contrary, the conserved (non-mutated) amino acids in proteins determine structure and function. During protein evolution, evolutionary pressure to maintain protein structure and function leads to the conservation or co-evolution of residues located in the structural folding core or binding interface. Diverse methods have been developed to extract this co-evolution information from conserved amino acid sequences for structure and function prediction, such as AlphaFold (Jumper et al., 2021).

In brief (Figure 7), antibody evolution specificity distinct from that of proteins can be defined with two main features: (i) ancestor germlines; (ii) the mutated amino acids of germlines.
A.2 DATA PROCESSING DETAILS

Pairing Antibody with Germline For germline annotation in the pre-training task, we used the annotated germline sequences provided in the OAS database (Kovaltsuk et al., 2018). For downstream benchmark tasks like B-cell classification, therapeutic antibody engineering, and disease diagnosis, we completely followed the methods described in the OAS database paper. IgBLAST, an immunoinformatic benchmarking tool for the analysis of B-cell antibody repertoires, was used for germline annotation (Ye et al., 2013). The antibody nucleotide-containing FASTA file was aligned to the germline and translated to amino acids using IgBLASTn. The antibody amino-acid sequence was aligned using IgBLASTp. The germline databases for human patients used ImMunoGeneTics (IMGT) germline sequences derived from Lefranc et al. (1999). For each antibody, multiple germline sequences can usually be obtained, and only the single sequence showing the highest alignment confidence score was chosen.

Pre-training Data Processing We downloaded the Oct 2021 version of OAS from its website and removed duplicate sequences. To avoid data leakage, we cluster sequences based on the CDR3 sequence and filter each cluster by 70% identity over the whole sequence using Linclust (Steinegger & Söding, 2018). Then, we shuffle the dataset and split it into 100k-size chunks. The last chunk is used as the validation set. The dataset contains 20,245,249 sequences in total, of which 45,249 are used for validation.

A.3 ATUE DETAILS

We summarize the tasks used in ATUE in Table 5 and discuss each task in detail in this section.

Antigen Binding Accurate antigen-binding prediction approaches could allow significantly more efficient antibody discovery with higher affinity. Machine learning methods have already achieved some success in antibody binding capacity optimization. We collect the antigen-binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. The original dataset only has CDR3 fragments, and we extend them to the full antibody sequences. For cross-validation, we split the dataset by antibody sequences to ensure that no antibody sequences overlap between the 90% training and 10% validation portions.

Paratope Prediction The paratope is the set of antibody residues involved in antigen binding. The ability to accurately map the paratope can provide detailed knowledge about the binding mechanism and accelerate antibody discovery. 1D sequence-based deep learning methods have been employed for paratope prediction. The paratope data is collected from Liberis et al. (2018) with 1,662 CDR segments on 277 antibodies. Each antibody contains three CDR fragments (CDR1, CDR2 and CDR3) in the heavy chain and three CDR fragments in the light chain. We also search the full sequence for each antibody and use the whole sequence as input. For cross-validation, we split the dataset by antibody sequences to ensure that no antibody sequences overlap between the 90% training and 10% validation portions.
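A minimal sketch of this sequence-grouped split, assuming the data is in a pandas DataFrame with one row per example and a column holding the full antibody sequence (column names are illustrative):

```python
# Split 90%/10% by antibody sequence so that no sequence appears in both sides.
from sklearn.model_selection import GroupShuffleSplit

def split_by_antibody(df, seq_col="antibody_sequence", seed=0):
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=seed)
    train_idx, val_idx = next(splitter.split(df, groups=df[seq_col]))
    train, val = df.iloc[train_idx], df.iloc[val_idx]
    # Sanity check: no antibody sequence is shared across the two splits.
    assert not set(train[seq_col]) & set(val[seq_col])
    return train, val
```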
B Cell Analysis We formulate a 6-category classification task for B cell maturation analysis, covering {immature, transitional, mature, memory IgD+, memory IgD-, plasmacytes}. The analysis of B cell maturation plays an important role in understanding the mechanisms underlying B cell responses in the immune system (Ghraichy et al., 2021; Meffre et al., 2000). The order of the B cell types follows the evolutionary process in the immune system, from an immature state to a transitional state, and finally to memory B cells. Both memory IgD- and IgD+ belong to memory B cells with different isotypes, and they have a high affinity for foreign antigens. Among the other categories, the plasmacyte (PC) sequences also have some binding affinity. It is widely reported that changes in antibody sequence patterns correlate with B-cell maturation. Therefore, we use this task to evaluate the representation learning capacity of the language models. We collect 88,094 sequences from Mroczek et al. (2014), which were extracted from the peripheral blood of healthy adults and sorted into six types of B cells with different maturity, together with their antibody sequences. The distribution of the various types of B cells in the dataset is shown in Table 6.

Antibody Discovery Antibody discovery from the B cell repertoire has been widely recognized as a novel trend to improve the efficiency of antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021). However, previous studies rely heavily on expensive wet-lab experiments (Cao et al., 2020; Shiakolas et al., 2022). Deep learning-based methods have shown the potential to help antibody discovery by reducing cost and increasing efficiency (Widrich et al., 2020; Wang et al., 2022). Here, we ask whether pre-trained models can benefit real-world problems and enable fast-track discovery of neutralizing SARS-CoV-2 antibodies.

In the first step, we develop a sequence classifier to distinguish which antibody sequences, among the numerous candidates, are responsible for the recognition of SARS-CoV-2. This task is highly challenging since we can hardly obtain sequence-level disease labels indicating whether an antibody sequence is related to the disease. Thus, we follow the practice of Roskin et al. (2020) and Zaslavsky et al. (2022) and use the individual-level label as a rough sequence label to train a sequence-level predictor. With the help of this sequence-level predictor, we can then assign each sequence a most likely label to aid antibody discovery; its reliability is supported by the excellent results on individual-level prediction, and it may accelerate the discovery of new antibody sequences.

We follow the conditions of Kim et al. (2021) to filter SARS-CoV-2 antibody data from the OAS database. The basic condition is 'Chain = heavy; Isotype = IGHG; BSource = PBMC; Species = human; Vaccine = None'. We further add the condition 'Unique Sequences >= 10000'. For health/SARS we set the 'Disease' field to 'None' and 'SARS-CoV-2', respectively. We then obtain 87/133 patient profiles for the two types. To make a balanced dataset, we limit the number of healthy profiles and mix them with the SARS-CoV-2 profiles. For cross-validation, we randomly split the dataset by profiles 10 times: 90% for training and 10% for validation. We further select sequences with top-100 redundancy to make the positive labels more accurate.
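The profile filtering above can be sketched with pandas as follows; the metadata file name and exact table layout are assumptions, while the field names and values are those quoted in the text.

```python
# Filter OAS study profiles with the stated conditions; one row per profile.
import pandas as pd

meta = pd.read_csv("oas_metadata.csv")  # hypothetical export of OAS study metadata

base = (
    (meta["Chain"] == "heavy")
    & (meta["Isotype"] == "IGHG")
    & (meta["BSource"] == "PBMC")
    & (meta["Species"] == "human")
    & (meta["Vaccine"] == "None")
    & (meta["Unique Sequences"] >= 10000)
)

healthy = meta[base & (meta["Disease"] == "None")]        # 87 profiles
sars    = meta[base & (meta["Disease"] == "SARS-CoV-2")]  # 133 profiles
```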
A.4 QUANTITATIVE ANALYSIS OF ATUE TASK SPECIFICITY

It is important to include statistical significance tests of the antibody-specific features for the functional tasks we propose in the ATUE benchmark. According to the evolution process shown in Figure 7, antibody evolution specificity distinct from that of proteins can be defined with two main features: (i) ancestor germlines; (ii) the mutated amino acids of germlines. We implemented statistical significance tests of (i) ancestor germline subtype usage and (ii) the number of mutated amino acids in antibodies against the different labels of the downstream tasks in ATUE to quantitatively assess the "task specificity". The analysis is summarized in Table 7. Overall, the ATUE benchmark clearly comprises antibody tasks with different degrees of antibody specificity for the later modeling analysis, and these tests are used for the statistical analysis of task specificity and pre-training objectives in our study.

Antigen Binding In the antigen binding dataset, both antigen-binding and non-binding sequences share the same germline subtype (IGHV3.1) (Figure 8A) as well as the same number of germline mutations (Figure 8B). Therefore, neither of the two antibody-specific features shows a significant distribution difference between data with different labels, demonstrating that antigen binding is a task with low antibody specificity.

Paratope Prediction For the paratope prediction task, we first evaluate the germline subtype distribution difference between sequences with different numbers of binding sites (Figure 9A). A Kruskal-Wallis test yields a p-value of 0.296, suggesting that the difference in germline subtype usage is not statistically significant. Also, we find that binding sites map to significantly more germline mutations than non-binding sites, which is consistent with the definition of antibody specificity (Figure 9B). One of the two antibody-specific features shows a significant distribution difference between data with different labels; therefore, we define this task as a medium-specificity task.

B Cell Analysis As shown in Figure 10, the distributions of germline usage as well as the number of germline mutations are significantly different between antibodies in B cells at different developmental stages. This observation is highly consistent with previous studies (Mroczek et al., 2014; Ghraichy et al., 2021). Since both antibody-specific features show significant distribution differences, this task is defined as a high-specificity task.

SARS Antibody Discovery Antibodies from SARS patients and healthy donors show significant differences in their germline subtype usage and the number of germline mutations (Figure 11). This observation is highly consistent with previous studies showing convergent SARS antibodies among patients (Galson et al., 2020). Since both antibody-specific features are highly significant, this task is defined as a high-specificity task.
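A small sketch of the significance test, assuming the per-sequence features and task labels are collected in a DataFrame (column names are illustrative):

```python
# Group an antibody-specific feature (e.g., number of germline mutations or
# germline subtype counts) by the downstream-task label and compare the groups
# with a Kruskal-Wallis test.
from scipy.stats import kruskal

def feature_specificity_test(df, label_col, feature_col):
    groups = [g[feature_col].values for _, g in df.groupby(label_col)]
    stat, p_value = kruskal(*groups)
    # p_value above 0.05 (e.g., 0.296 for germline subtype usage in the
    # paratope task) means no significant difference between label groups.
    return stat, p_value
```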
A.5 MODEL TRAINING DETAILS

An antibody can be represented as $A = \{a_1, a_2, \cdots, a_m\}$ and the germline of an individual antibody can be represented as $G = \{g_1, g_2, \cdots, g_n\}$, where $m$ and $n$ are the lengths. Each token $a_i$ or $g_j$ in the sequence is called a residue and belongs to the amino acid vocabulary, which includes 20 common amino acids plus a residue 'X' indicating that the residue is unknown (mostly in the germline). Typically, antibody PLMs are trained with the basic masked language modeling objective $\ell_{\mathrm{MLM}}$ on the antibody sequence $S = A = \{a_1, \cdots, a_m\}$.

A.5.1 EVOLUTION-AWARE PRETRAINING

In order to incorporate the evolutionary information into the pre-training, we pair the antibody sequence $A$ with its germline $G$ and concatenate them into a long sequence with a special token '[SEP]' as the delimiter: $S = \{s_1, \cdots, s_{m+n+1}\} = \{a_1, \cdots, a_m, \mathrm{[SEP]}, g_1, \cdots, g_n\}$. Thus, we optimize the MLM objective on the long sequence $S$:

$$\ell_{\mathrm{MLM}} = -\frac{1}{|M|}\sum_{i \in M} \log p(s_i \mid S_{\backslash M}), \qquad (1)$$

where $M$ is the index set of masked tokens. This helps the model learn the basic residue distribution of antibody sequences. Besides, it can also capture the interaction between residues of the antibody and its germline.

Ancestor Germline Prediction The ancestor relationship between the antibody and its germline determines the shared biological functions obtained during evolution. When stimulated by a foreign antigen, a common ancestor germline evolves into various antibody sequences; conversely, antibody sequences with similar residues may have evolved from different germline sequences, which affects their biological functions. Thus, the aim of this task is to determine whether the antibody has an evolutionary relationship with the given germline. During training, we substitute the paired germline $G$ with a random germline $G' = \{g'_1, \cdots, g'_n\}$ from the batch with probability $p = 0.3$. The new sequence is denoted as $S' = \{a_1, \cdots, a_m, \mathrm{[SEP]}, g'_1, \cdots, g'_n\}$ and the training loss can be described as:

$$\ell_{a} = -\log p(y \mid S'), \qquad (2)$$

where $y \in \{0, 1\}$ indicates whether the (possibly substituted) germline $G'$ is the ancestor of the antibody $S$. This helps the model distinguish the ancestor germline of the antibody by capturing their shared features.

Mutation Position Prediction The somatic hypermutations on the germline further give progeny antibodies the specificity to bind a specific antigen. In order to model this specificity, this task focuses on predicting the mutation positions and the mutated residues. Specifically, for each token $g_j$ in the germline $G$, the target is to predict a label $y_j \in \{0, 1\}$ indicating whether this token has been mutated. For the antibody sequence $S$, we mask the mutation positions and predict these tokens. The objective can be formalized as:

$$\ell_{m} = -\frac{1}{n}\sum_{j \in \{1,\cdots,n\}} \log p(y_j \mid S_{\backslash M'}) - \frac{1}{|M'|}\sum_{i \in M'} \log p(a_i \mid S_{\backslash M'}). \qquad (3)$$

Here, $M'$ is the set of ground-truth mutation positions, and we mask these tokens on the antibody sequence. This task is more difficult than MLM, which masks tokens uniformly over the sequence, because the tokens at the mutation positions of $A$ get less information from the germline compared with the other residues shared between the antibody and the germline. By optimizing this objective, the model learns to capture the specificity obtained from somatic hypermutation in the evolutionary process.
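For illustration, the construction of a single evolution-aware training example and its AGP/MPP losses could look like the following sketch. It assumes the antibody and germline are already position-aligned, combines the objectives on one example, and uses an illustrative model interface and unweighted loss sum, none of which is guaranteed to match the released code.

```python
# Assemble one AGP/MPP training example and score it, following Eqs. (2)-(3).
import random
import torch
import torch.nn.functional as F

MASK_ID, SEP_ID, IGNORE = 1, 2, -100   # placeholder ids; -100 is ignored by cross_entropy

def build_example(antibody, germline, other_germlines, p=0.3):
    """antibody, germline: aligned lists of token ids of equal length."""
    mutated = {i for i, (a, g) in enumerate(zip(antibody, germline)) if a != g}
    mut_labels = [int(i in mutated) for i in range(len(germline))]         # 0/1 per germline token
    masked_ab = [MASK_ID if i in mutated else t for i, t in enumerate(antibody)]
    mpp_targets = [t if i in mutated else IGNORE for i, t in enumerate(antibody)]
    agp_label = 1
    if random.random() < p:                      # AGP: swap in a random germline
        germline, agp_label = random.choice(other_germlines), 0
    tokens = masked_ab + [SEP_ID] + germline     # S = {a_1..a_m, [SEP], g_1..g_n}
    return tokens, mut_labels, mpp_targets, agp_label

def evolution_losses(residue_logits, mutation_logits, ancestor_logit,
                     mpp_targets, mut_labels, agp_label):
    """residue_logits: [m, vocab]; mutation_logits: [n]; ancestor_logit: scalar.
    Targets are tensors built from the lists above (batching omitted)."""
    l_residue = F.cross_entropy(residue_logits, mpp_targets, ignore_index=IGNORE)
    l_position = F.binary_cross_entropy_with_logits(mutation_logits, mut_labels.float())
    l_ancestor = F.binary_cross_entropy_with_logits(ancestor_logit, agp_label.float())
    return l_residue + l_position + l_ancestor   # unweighted sum, for illustration
```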
A.5.2 IMPLEMENTATION DETAILS

We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. The total number of parameters is 86M. We use the Adam optimizer (Kingma & Ba, 2015) with a maximum learning rate of 2e-4 and 24,000 warm-up steps. The maximum length is set to 400 since most antibody sequences are shorter than 180. We first pre-train our model with the MLM objective. During pre-training, 15% of tokens are randomly selected, with 80% masked, 10% replaced, and 10% kept. Then we conduct further pre-training on the two antibody-related tasks with a smaller learning rate of 1e-5. For each task in ATUE, we finetune the model with supervised data. We follow the standard split of Antigen Binding Prediction. For other tasks that do not provide a standard split, we conduct 10-fold cross-validation and report the average results. Since our pre-training model learns the representation of the antibody sequence, we expand the CDR fragments to the full antibody by searching the biological database for the therapeutic antibody engineering tasks. For finetuning, we limit the maximum number of epochs to 30 and use the Adam optimizer with a maximum learning rate of 3e-5. We use the mean representation of the 12 layers as the sequence representation.
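One plausible reading of this pooling is sketched below; the exact scheme in the released code may differ.

```python
# Per-token hidden states are averaged over the 12 Transformer layers and then
# mean-pooled over the valid (non-padding) tokens.
import torch

def sequence_representation(layer_hidden_states, attention_mask):
    """layer_hidden_states: list of 12 tensors, each [batch, length, hidden].
    attention_mask: [batch, length], 1 for real tokens and 0 for padding."""
    layer_mean = torch.stack(layer_hidden_states, dim=0).mean(dim=0)   # [B, L, H]
    mask = attention_mask.unsqueeze(-1).float()                        # [B, L, 1]
    return (layer_mean * mask).sum(dim=1) / mask.sum(dim=1)            # [B, H]
```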
The model is trained for 108,000 steps and reaches a 0.9606 token accuracy on the MLM task. It is then further pre-trained with AGP and MPP. The model quickly converges for AGP and reaches a 0.99 accuracy on ancestor germline prediction, because more than 80% of residues are shared between the antibody and its germline. For MPP, it can identify the mutation positions with an accuracy of 1.000 and obtains a 0.442 accuracy for predicting the mutated residues (EATLM w/o AGP). This means that the model can easily find the mutation positions via the self-attention between the antibody and the germline, but it is still difficult to predict which residue a position will mutate to. We assume this is because the ancestor germline can undergo different somatic hypermutations and yield various progeny antibodies, resulting in different valid mutations at the same position. We also compare this mutation accuracy with the model without MPP, which is only trained with MLM on the concatenation of the antibody and its germline. With a high prediction accuracy of 0.889 over all positions, it achieves only a 0.031 accuracy on the mutations. This implies that masking over all positions of the sequence yields accurate predictions of the shared residues but hardly captures the mutation information. We also apply AGP and MPP to continue training the baseline model AntiBERT. The pre-training results are shown in Table 8. We find that without the concatenation of the antibody and its germline, it is difficult to predict the ancestor relationship, and AntiBERT also underperforms EATLM on MPP.

Negative sampling ratio We tried ratios of 0.1/0.3/0.5/0.75 and found that this ratio has little influence on performance and convergence speed. As discussed above, the model quickly converges for AGP and reaches an accuracy of 0.99.

Finetuned Protein Language Models and Larger Architecture We pre-train our method with a larger architecture and compare it with ESM-1b, which also has 650M parameters. We also further pre-trained the ESM models to transfer them to the antibody domain. After that, we evaluate them on the antigen binding and paratope prediction tasks. The results are shown in Table 9. They show that the larger architecture can bring some performance advantage: for antigen binding, ESM-1b performs better than ESM-1; however, for paratope prediction, it performs worse. In addition, for ESM, fine-tuning on the antibody dataset may cause overfitting, leading to a decrease in performance on all three tasks.

A.6 LIMITATIONS OF EATLM

First, EATLM does not use any 3D structure information during pre-training. As a special subgroup of proteins, antibody structures provide much more information, such as geometry, than sequences alone. In the future, recruiting structure information for antibody pre-training may be able to improve the results. However, the amount of data available for antibody structures is dramatically smaller than for antibody sequences: the largest dataset of antibody structures contains only thousands of high-resolution 3D structures, while the number of antibody sequences is in the billions. Using structure prediction methods like AlphaFold may help to bridge the gap between sequences and structures. Second, EATLM requires the germline as input for downstream tasks, which slows down prediction.

A.7 NEW SARS BINDER DISCOVERY

The main challenge for disease diagnosis is to distinguish the disease-related antibodies from the millions of antibody sequences in an individual's profile, as stated in Section A.3. Here, with the help of a sequence-level predictor, we can assign each sequence a most likely label to aid antibody discovery; its reliability is supported by the excellent results on individual-level prediction, and it may accelerate the discovery of new antibody sequences.

SARS Sequence-level Predictor We first train a sequence-level predictor for SARS-CoV-2. The results are shown in Table 10. Compared with Figure 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for antibody discovery, which can be mainly attributed to the noisy sequence-level labels.

Figure out SARS Binders As shown in Table 3 in the main body, we find 2 true SARS binders and 9 potential binders with the help of EATLM. Specifically, we first use our sequence-level predictor to obtain a probability score for each sequence in the SARS dataset. We then select the high-ranked sequences (probability > 0.5) and compare them with the public CoV-AbDab database (Raybould et al., 2021; http://opig.stats.ox.ac.uk/webapps/covabdab/), which contains data on published/patented antibodies known to bind SARS-CoV-2. Since the CDR3 fragment in the heavy chain is the most relevant to the binding between antibody and antigen, we calculate the edit distance between the CDR3 fragments of the heavy chains (CDR-H3) and those of the known binders, and use a threshold of 85% similarity as the sequence identity. An 85% Hamming distance threshold for B cell antibody sequence clustering (identifying similar B cell antibody sequences responding to the same antigen/epitope) was previously suggested by Gupta et al. (2017). This method was then widely used for B cell antibody repertoire analysis in different studies (Montague et al., 2021; Wang et al., 2022).
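A sketch of this matching step, using a plain Levenshtein edit distance normalized by the longer fragment; the exact identity definition used in the study may differ.

```python
# Compare predicted CDR-H3 fragments against CoV-AbDab CDR-H3s and keep pairs
# whose normalized similarity exceeds the threshold (85% here).
def edit_distance(a, b):
    # Standard dynamic-programming Levenshtein distance between two strings.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def cdrh3_similarity(x, y):
    return 1.0 - edit_distance(x, y) / max(len(x), len(y))

def match_known_binders(predicted_cdrh3s, covabdab_cdrh3s, threshold=0.85):
    matches = []
    for query in predicted_cdrh3s:                 # high-probability sequences
        best = max(covabdab_cdrh3s, key=lambda ref: cdrh3_similarity(query, ref))
        score = cdrh3_similarity(query, best)
        if score >= threshold:
            matches.append((query, best, score))
    return matches
```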
SARS Binder Analysis To provide a more intuitive analysis of the similarity between our predicted antibodies and true SARS-CoV-2 binders, we investigate the 3D structure of the true binding antibodies and the mutation sites of our predicted sequences on the corresponding structures. The high-resolution structure of true binding antibody #3 in Table 3 with SARS-CoV-2 is shown in Figure 13 (PDB code: 7N62). The interaction interface between the antibodies and the SARS-CoV-2 spike/RBD is shown in Figure 3 in the main body, with the CDR-H3 shown in orange. Only a single residue, highlighted in red, differs between the predicted binder and the true binder. This differing residue does not localize to the direct binding site or the CDR-H3 folding core, suggesting the sequence difference will likely not affect the antibody-virus interaction. Furthermore, we find that the epitopes of the 11 identified SARS-CoV-2 antibodies cover a wide range of structures, from the traditional RBD domain to novel non-RBD epitopes such as S2 and NTD, as shown in Table 3. This result shows that our method enables diverse-epitope antibody discovery.

Probability Threshold Sensitivity In order to investigate the influence of the threshold used to determine the potential binders, we try different thresholds in Table 11 (which reports, for each threshold, the total number of predicted binders, the number of hits, and the hit rate in %). Here, the probability threshold means that if the sequence predictor assigns a probability higher than the threshold to a sequence, the sequence is viewed as a potential binder. If a predicted binder has a sequence similarity higher than 85% with an existing binder in CoV-AbDab, we view it as one hit. As the threshold increases, the hit rate correspondingly increases from 0.528% to 0.562%, indicating that our model may enable priority selection of SARS-CoV-2 antibodies and reduce experimental costs.

Sequence Similarity Sensitivity In previous work, two antibodies with CDR-H3 similarity over 85% can be viewed as similar and have a high probability of sharing the same functionality. Here we also check the influence of different similarity thresholds on binder matching. The results are shown in Figure 14, where we fix the probability threshold at 0.5. The baselines show similar trends at all thresholds. If we relax the threshold, there are more matching sequences; however, the predictors then have less of an advantage over a random ordering, which indicates that the ranking matters less when the similarity threshold is relaxed.

The Potential of New Binder Discovery During the training of our sequence-level predictor, we have no reliable ground-truth labels, which means that the model has never been told which sequences can bind to SARS in a real-world scenario. However, the model can learn from the noisy data and rank the real SARS binders with high probabilities. A sequence identity of 1 means that the CDR-H3 fragment can be found directly in the CoV-AbDab database, which implies that the sequence has been verified by wet-lab experiments. The other sequences, with an identity over 90%, are thought to have binding behavior similar to the existing binders, indicating that they are promising SARS binders that can help the discovery of therapeutic antibodies for SARS-CoV-2.

A.8 EXTENDED STUDY FOR DISEASE DIAGNOSIS

It would be interesting to see whether our sequence classifier can be used for other applications, such as disease diagnosis. Each human is estimated to maintain about $10^8$-$10^{10}$ distinct antibody sequences, constructing an informative encyclopedia recording past and present health and disease. Interpreting the patterns of these sequences has already proved useful in disease diagnosis and allows us to assess many infectious diseases without expensive laboratory testing. However, it is difficult to distinguish which antibody sequences among the numerous candidates are responsible for the recognition of a specific antigen, which hinders the discovery of antibodies for diseases (Zaslavsky et al., 2022; Lu et al., 2018; Greiff et al., 2020). Benefiting from recent high-throughput sequencing, we can obtain millions of antibody sequences from an individual human, together with a disease label indicating whether the individual is infected by the disease. The main challenge is that we can hardly obtain sequence-level disease labels indicating whether an antibody sequence is related to the disease. Thus, we follow the practice of Roskin et al. (2020) and use the individual-level label as a rough sequence label to train a sequence-level predictor. We then use this predictor to score the sequences of each individual's profile and take the trimmed mean score as the individual score.
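A minimal sketch of this aggregation; the trim fraction is an assumption, since it is not specified here.

```python
# Aggregate per-sequence probabilities into one individual-level score with a
# trimmed mean, which is robust to the noisy sequence-level labels.
import numpy as np
from scipy.stats import trim_mean

def individual_score(sequence_probs, trim_fraction=0.1):
    """sequence_probs: predicted disease probabilities for all antibody
    sequences of one individual."""
    return trim_mean(np.asarray(sequence_probs), proportiontocut=trim_fraction)
```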
We use the same data processing as for Antibody Discovery, stated in Section A.3. For health/SARS/HIV/Ebola/Allergy/SLE/MS, we set the 'Disease' field to 'None', 'SARS-CoV-2', 'HIV', 'Ebola', 'Allergy', 'SLE', and 'MS', respectively. We then obtain 87/133/51/14/12/8/8 patient profiles for these types. We also conduct 10-fold cross-validation and select sequences with high redundancy.

Disease Classification We use all these disease profiles to build a 7-way (Q7) classification task for disease diagnosis. Previous biological studies have mainly used this multi-class classification task for disease diagnosis (Zaslavsky et al., 2022; Wang et al., 2022), highlighting that the discriminatory power among different diseases is important for disease diagnosis. The results are shown in Table 12. We find that both PPLMs and PALMs show results comparable to the randomly initialized model, suggesting that the finetuning part plays a more important role and the pre-trained language models cannot help this task.

Sequence-level Predictor for Various Diseases As before, we train a sequence-level predictor for each disease. The results are shown in Table 13. Compared with Table 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for the individual-level predictor. This is mainly due to the trimmed mean we use to obtain individual-level results, which is a central estimate that is robust to noisy labels. Overall, our model obtains results comparable to the other models for sequence prediction with noisy labels and better results for individual-level diagnosis.

Individual-level Predictor for Various Diseases We observe that our evolution-aware EATLM performs best in the individual-level classification of whether a patient suffers from SARS. Besides, PALMs significantly outperform PPLMs. The results are shown in Table 14.
Summary Of The Paper
This paper introduces a first-of-its-kind suite of benchmarking tasks for antibody-specific language models and provides some interesting observations about the behavior of general protein models and antibody-specific models on these tasks. It also introduces a new antibody-specific pretraining objective based on the unique evolutionary process of antibodies.

Strengths And Weaknesses
=strengths=
- Important application
- Well written
- Provides a first-of-its-kind set of benchmarks for antibody ML
- Contributes a new interesting antibody-specific LM model
=Weaknesses=
- Some of the tasks in the benchmark are based on small datasets, such that reliably computing differences between ML systems may be difficult.
- The COVID-19 antibody discovery experiments seem to be a bit forced (see below).

Clarity, Quality, Novelty And Reproducibility
I really appreciated how the paper was written. It provides lots of basic background information on antibodies and discusses these complex topics well. I also really appreciated how much of the exposition was structured in terms of whether tasks are antibody-specific or more general to proteins. In general, I am a big supporter of papers that contribute new benchmarking setups. These can be used to drive methods research for years. This paper appears to be the first setup for antibody-specific benchmarking.
Furthermore, to investigate the benefit of introducing the biological mechanism, we incorporate evolution information as the extra pretraining objectives for PALMs and propose EATLM. The detailed description of the objective and the implementation can be found in §A.5 Current Pre-trained language models Existing antibody and protein language models are summarized in Table 1. Since the code and pre-training data of AntiBERTa are not released, we train a BERT model named AntiBERT on the full OAS database following the same setting as the original study. MSA-1b (Rao et al., 2021) takes protein-specific evolutionary sequences (Multiple Sequence Alignment, MSA) as the input. Because it is hard to align sequences between antibodies due to the diversity of CDR3, we take the germline and create pseudo-MSAs with depth 2. We add a linear layer on top of the language models and finetune the whole model on the downstream tasks. Evolution-aware antibody pretraining method To incorporate the biological mechanism into the pre-training, we propose a model with evolution information: Antibody EvoluTion-aware pretraining Language Model. The antibody can be represented as A and the germline of the individual antibody can be represented as G. Typically, PALMs are trained with basic masked language modeling (MLM). Based on it, we design another two pre-training objectives to simulate the biological mechanism of antibody evolution. The evolutionary relationship between the antibody and its germline includes two folds: (i) Whether the antibody and the germline have an evolutionary relationship. (ii) How to mutate residues from the germline to get the specific antibody. Two evolution-related objectives are introduced to solve the above questions: Ancestor Germline Prediction (AGP) and Mutation Position Prediction (MPP). For ancestor germline prediction, we substitute the paired germline G with random germline G′ in the batch via a probability p. The model is made to distinguish the ancestor germline of the antibody by capturing the shared features. To predict mutation position, for each token in the germline G, the objective is to predict a 0/1 label for each token to indicate whether this token has been mutated. For the antibody sequence S, we mask the mutation position and predict these tokens. Hyper-parameters We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. For each task in ATUE, we finetune the model with supervised data. We follow the standard split of Antigen Binding Prediction. For other tasks that do not provide a standard split, we use a 10-fold cross-validation. Since our pre-training model learns the representation of the antibody sequence, we expand the CDR fragment to the full antibody by searching the biological database for therapeutic antibody engineering tasks. We also use the same Transformer architecture to train from scratch for each downstream task. This model is indicated as non-pretrain since it is not pre-trained on a protein/antibody database. Reproduction We conduct 10-fold validation on paratope prediction, B cell maturation analysis, and antibody discovery. For antigen binding prediction, we conduct three repetitive experiments with different random seeds. We report the average results and the standard derivation. 
4 RESULTS AND ANALYSIS
In this section, we present the experimental results and analysis of the representation capability of existing PPLMs, PALMs, and the EATLM method with evolutionary incorporation, using the ATUE benchmark. Additionally, we summarize our observations aiming to address the problems highlighted in the introduction.

4.1 MAIN RESULTS
Antigen binding. We evaluate the performance of PLMs on antigen binding and paratope prediction, which are less antibody-specific. The results in Table 2 indicate that PPLMs and PALMs perform similarly on these tasks, suggesting that PALMs can learn general protein representations comparable to PPLMs. Among the different PALMs, Ablang-H outperforms Ablang-L and AntiBERT, indicating that separate training for heavy and light chain sequences is beneficial for these tasks. Moreover, the introduction of AGP and MPP provides improvements on the AUC and F1 metrics.

Paratope prediction. The results presented in Table 2 demonstrate that for paratope prediction, both PPLMs and PALMs can significantly boost the prediction accuracy over the model without pretraining. However, PALMs do not exhibit a significant advantage over PPLMs. EATLM outperforms other models, particularly in terms of F1 and MCC, while the other models exhibit high recall and low precision, indicating that they tend to predict more residues as binding sites. With the incorporation of mutation residue prediction, EATLM can focus on the specific mutated positions adapted to bind the antigen. Among the two PPLMs, MSA-1b outperforms ESM-1 on F1 and MCC, benefiting from the structural information learned from MSAs.

B Cell Analysis. In this task, we investigate the ability of different pre-trained language models to distinguish between various B cell maturation states during evolution. The findings, as shown in Table 2, indicate that PPLMs are not effective in discerning minor differences between B cell sequences, resulting in mediocre results. Both ESM-1 and MSA-1b perform significantly worse than randomly initialized models. MSA-1b, in particular, performs poorly among all pre-trained language models, implying that representations that excel in protein structure prediction may be detrimental to antibody-specific tasks. This may be because general protein evolution has little correlation with the antibody-specific maturation process, so this feature cannot be captured during protein pretraining. Conversely, all PALMs show promising results on the task. Our EATLM significantly outperforms the other PALMs. This is because our model can effectively capture the evolution feature and better distinguish between B cells at different stages of maturation by explicitly modeling the biological mechanism. We conduct further analysis to determine whether our EATLM successfully captures sequence characteristics during the evolutionary process. We examine the probability of predicting an antibody of class i as class j. The results shown in Figure 3 reveal that EATLM can easily classify immature B cells with an accuracy of 0.9. This is consistent with the biological finding that the CDR3 sequence length in immature B cells is significantly shorter than that of the other, more mature B cells (Ghraichy et al., 2021). From the diagonal, we can see that our model tends to confuse B cell sequences with their preceding or succeeding evolutionary stage, consistent with the biological process.
Antibody Discovery. We investigated the potential of PPLMs and PALMs in aiding the discovery of antigen-specific antibodies for real-world problems. To achieve this, we followed a two-step process similar to Zaslavsky et al. (2022). First, we created a sequence classifier to differentiate SARS-CoV-2 antibodies using noisy individual-level labels. Second, we compared the highly ranked sequences with true binding sequences in the CoV-AbDab (Raybould et al., 2021) database to determine if there are similarities. We used a 90% sequence identity threshold to determine the likelihood of biological functionality similar to the existing binders. The experimental design is outlined in §A.7. Figure 4 shows the cumulative sum of matched sequences, in the order of the probabilities predicted by different pre-trained language models, for the SARS-CoV-2-specific antibody discovery task. We can observe that PALMs outperform PPLMs in identifying potential binders, as the sequences predicted with high probability by PALMs match better with the existing binders. Moreover, among PALMs, EATLM significantly outperforms the other models, with the red line indicating its performance. EATLM is initially the quickest method to find potential binders, is briefly overtaken by Ablang-H, and eventually overtakes it again before converging. This suggests that EATLM is the most effective method for identifying all potential binders in this dataset. Furthermore, we list 11 potential binder sequences discovered by EATLM in Table 3. Without supervised labels, EATLM assigns a high probability to 2 existing SARS-CoV-2 binding antibodies. Besides, EATLM suggests 9 potential sequences with high CDR-H3 sequence identity, indicating the potential for diverse-epitope antibody discovery and selection. These results demonstrate the potential of EATLM in therapeutic antibody discovery. To validate whether the antibody sequences with 90% sequence identity can indeed bind the same target, we investigate the 3D structure of the true binding antibody. Table 4 shows only a single residue difference between the predicted binder and the existing binder, suggesting the predicted binders are highly likely to interact with SARS-CoV-2.

4.2 HOW DOES THE EVOLUTION PRETRAINING TASK INFLUENCE THE REPRESENTATION?
To understand the reasons for the better performance of EATLM on antibody-related tasks, we analyze the pre-trained representations. The objective of this analysis is to evaluate the effectiveness of the evolution-aware pre-training strategies from two perspectives: (1) Does the pre-trained representation of antibodies reflect their ancestor relationship? (2) Does the evolution objective capture the specificity of antibodies?

Ancestor Germline Visualization. We perform UMAP visualization analyses in Figure 5. First, we observe that antibodies evolved from the same germline are nicely clustered together (Figure 5a and 5b), indicating that the learned embedding encodes germline information. Besides, sequences with similar scales of evolutionary distance tend to cluster together, and a clear gradation of evolutionary distance can be observed in Figure 5c and 5d. The visualization provides a sanity check on the ability of EATLM to extract the sequence information of antibodies.

Accuracy of Mutation Position. Based on the specific evolution process described in §3.1, we can see that mutations during the evolution process bring specificity to the antibody.
Thus, we explore the model's ability to predict the mutated residue from the masked token, which reflects how well the model captures the specificity feature. We find that although AntiBERT predicts with an accuracy of 0.889 across all positions, it fails on mutation positions with an accuracy of 0.031. In contrast, EATLM achieves an accuracy of 0.443 on mutation positions, which indicates that the model captures the specificity information. Note that during MPP training, we mask the mutation positions on the antibody sequence, which differ from the germline. Thus, the model cannot obtain the mutated residue from the germline directly; the only way is to learn the underlying mutation rules. The full results are shown in Table 8 in the Appendix.

4.3 KEY OBSERVATIONS
The performance of pre-trained language models is highly dependent on the specificity of the task. In tasks with low antibody specificity, PPLMs perform similarly to PALMs, indicating that using general protein representations from PPLMs is an effective transfer learning approach for these tasks. On medium-specificity tasks such as paratope prediction, PALMs show their advantage and outperform PPLMs. However, for tasks with high specificity, PPLMs have significantly lower performance, suggesting that general pre-trained protein models are insufficient for antibody-specific representation learning. Additionally, incorporating protein evolution information does not always benefit antibody tasks, especially those that require antibody evolution information, as shown by the 20% decrease in performance observed with MSA-1b compared to the model without pre-training. This finding is consistent with the biological understanding that the mechanism of antibody evolution is significantly different from that of proteins.

Figure 6: Performance summary of various pre-trained language models (performance increase (%) vs. task specificity: low, medium, high).

Incorporating the biological evolution mechanism into PALMs generally benefits antibody prediction tasks. The inclusion of evolution-related training objectives assists in identifying mutation positions on antibodies, which is a distinguishing feature relative to the germline. Notably, the performance increase of EATLM over other PALMs is linked with the level of task specificity. The ablation study shows that removing the evolution-related pretraining objectives leads to decreased performance, confirming their contribution to the prediction task. Further research in this direction is promising and could offer more in-depth insights.

Antibody pre-trained representations are helpful for real-world drug discovery. By utilizing the language model, we predict the likelihood of each antibody binding with SARS-CoV-2. Despite lacking precise sequence-level labels, we successfully identify 11 promising antibody binders.

5 CONCLUSIONS AND LIMITATIONS
In this paper, we conduct a detailed investigation into the effects of pre-trained protein and antibody language models on various antibody tasks. To facilitate research at the intersection of antibody biology and machine learning, we provide ATUE, consisting of four important antibody tasks from four different biological categories with varying levels of antibody specificity. However, there are certain constraints to our research. Firstly, due to the scarcity of data, the diversity of tasks in our ATUE is limited.
As more data becomes available, we anticipate expanding our benchmark to include a greater range of diseases and larger datasets. Additionally, we did not examine any 3D structure information during antibody pre-training. As antibody structures offer more information than sequences alone, such as geometry, incorporating structural information in future studies may lead to improved results.

ETHICS STATEMENT
This research uses pre-existing data and computational methods and did not involve any human or animal subjects; therefore, no ethical approval was required. The authors followed all applicable ethical standards and guidelines for data analysis and reporting. All data used in this study were obtained from publicly available sources, and proper citation and attribution have been given. The authors have made efforts to ensure that the research presented in this paper does not infringe upon any existing copyrights or intellectual property rights.

ACKNOWLEDGEMENT
We thank members of ByteDance Research for discussion, and Zaixiang Zheng and Yi Zhou for useful writing suggestions. Hao Zhou is supported by the Vanke Special Fund for Public Health and Health Discipline Development, Tsinghua University (No. 20221080053), and the Guoqiang Research Institute General Project, Tsinghua University (No. 2021GQG1012).

A APPENDIX
A.1 ANTIBODY SPECIFIC EVOLUTION
Antibodies, composed of two identical heavy chains and two identical light chains, form a large Y-shaped structure, where the two tips are responsible for pathogen binding. Antibody evolution, described by sequence-sequence relationships between ancestor and progeny antibodies, reflects antibodies' key antigen-binding function (Honjo & Habu, 1985). During antibody evolution (Figure 7), the initial diversity is encoded into the ancestor sequence through random recombination of V-, D-, and J-gene segments. Upon exposure to a pathogen, the sequence undergoes frequent mutations to search for progeny sequences with optimal binding specificity. Sequence evolution analysis has been employed by many computational biology studies and shows promising results in antibody-related tasks, such as disease diagnosis and therapeutic antibody development (Yermanos et al., 2018; Miho et al., 2019). Importantly, antibody evolution is significantly different from that of proteins. Antibodies derive from only hundreds of thousands of ancestor sequences, the so-called germlines. To bind tens of millions of diverse antigens, antibodies need to mutate from the ancestor sequences to gain new functions (Figure 7). Therefore, the non-conserved amino acids (the mutated ones) play important roles in structure and function. On the contrary, the conserved (non-mutated) amino acids in proteins determine structure and function. During protein evolution, evolutionary pressure to maintain protein structure and function leads to the conservation or co-evolution of residues located in the structural folding core or binding interface. Diverse methods have been developed to extract this co-evolution information from conserved amino acid sequences for structure and function prediction, such as AlphaFold (Jumper et al., 2021). In brief (Figure 7), the antibody evolution specificity that distinguishes antibodies from proteins can be defined with two main features: (i) the ancestor germlines; (ii) the amino acids mutated away from the germlines.
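As a concrete illustration, these two features can be computed per sequence and tested for association with downstream-task labels, as done later in §A.4. The sketch below is an assumption-laden illustration: it presumes the antibody is already residue-wise aligned to its annotated germline, and it uses the Kruskal-Wallis test mentioned in §A.4 for a numeric feature such as the mutation count.

```python
from scipy.stats import kruskal

def germline_mutation_count(antibody, germline):
    """Number of residues mutated away from the annotated germline.
    Assumes equal-length, aligned sequences; 'X' marks unknown germline residues."""
    return sum(a != g for a, g in zip(antibody, germline) if g != "X")

def specificity_test(feature_values, task_labels):
    """Kruskal-Wallis test of whether a per-sequence numeric feature differs
    across task labels (one group of feature values per label)."""
    groups = {}
    for value, label in zip(feature_values, task_labels):
        groups.setdefault(label, []).append(value)
    stat, p_value = kruskal(*groups.values())
    return p_value
```

For a categorical feature such as germline subtype usage, the subtypes would first need to be encoded (or tested with a categorical test instead); the numeric mutation count can be passed to the test directly.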
Figure 7: Protein evolution vs. antibody evolution.

A.2 DATA PROCESSING DETAILS
Pairing Antibody with Germline. For germline annotation in the pre-training task, we used the annotated germline sequences provided in the OAS database (Kovaltsuk et al., 2018). For downstream benchmark tasks such as B-cell classification, therapeutic antibody engineering, and disease diagnosis, we followed the methods described in the OAS database paper. IgBLAST, an immunoinformatic benchmarking tool for the analysis of B-cell antibody repertoires, was used for germline annotation (Ye et al., 2013). The antibody nucleotide FASTA file was aligned to the germline and translated to amino acids using IgBLASTn. The antibody amino-acid sequence was aligned using IgBLASTp. The germline databases for human patients used ImMunoGeneTics (IMGT) germline sequences derived from Lefranc et al. (1999). For each antibody, multiple germline sequences can usually be obtained, and only the single sequence with the highest alignment confidence score was chosen.

Pre-training Data Processing. We downloaded the Oct 2021 version of OAS from its website and removed duplicate sequences. To avoid data leakage, we cluster sequences based on the CDR3 sequence and filter each cluster at 70% identity over the whole sequence using Linclust (Steinegger & Söding, 2018). Then, we shuffle the dataset and split it into 100k-size chunks. The last chunk is used as the validation set. The dataset contains 20,245,249 sequences, of which 45,249 are used for validation.

A.3 ATUE DETAILS
We summarize the tasks used in ATUE in Table 5 and discuss each task in detail in this section.

Antigen Binding. Accurate antigen-binding prediction approaches could enable significantly more efficient antibody discovery with higher affinity. Machine learning methods have already achieved some success in optimizing antibody binding capacity. We collect the antigen-binding data from Mason et al. (2021) and follow the training/validation/test split of 15,128/3,242/3,242. The original dataset only contains CDR3 fragments, and we extend them to the full antibody sequences. For cross-validation, we split the dataset by antibody sequences to ensure that no antibody sequences overlap between the 90% training and 10% validation splits.

Paratope Prediction. The paratope is the set of antibody residues involved in antigen binding. The ability to accurately map the paratope can provide detailed knowledge about the binding mechanism and accelerate antibody discovery. 1D sequence-based deep learning methods have been employed for paratope prediction. The paratope data is collected from Liberis et al. (2018) with 1,662 CDR segments on 277 antibodies. Each antibody contains three CDR fragments (CDR1, CDR2, and CDR3) in the heavy chain and three CDR fragments in the light chain. We also retrieve the full sequence for each antibody and use the whole sequence as input. For cross-validation, we split the dataset by antibody sequences to ensure that no antibody sequences overlap between the 90% training and 10% validation splits.

B Cell Analysis. We formulate a 6-category classification task for B cell maturation analysis, with categories {immature, transitional, mature, memory IgD+, memory IgD-, plasmacytes}. The analysis of B cell maturation plays an important role in understanding the mechanisms underlying B cell responses in the immune system (Ghraichy et al., 2021; Meffre et al., 2000).
The order of the B cell types follows the evolutionary process in the immune system, from an immature state, to a transitional state, and finally to a memory B cell. Both memory IgD- and IgD+ are memory B cells with different isotypes, and they have a high affinity to foreign antigens. Among the other categories, the plasmacyte (PC) sequences also have some binding affinity. It is widely reported that changes in antibody sequence patterns correlate with B-cell maturation. Therefore, we use this task to evaluate the representation learning capacity of the language model. We collect 88,094 sequences from Mroczek et al. (2014). The sequences were extracted from the peripheral blood of healthy adults and cover six types of B cells with different maturity. The distribution of the various types of B cells in the dataset is shown in Table 6.

Antibody Discovery. Antibody discovery from the B cell repertoire has been widely recognized as a promising way to improve the efficiency of antibody discovery for diverse antigens (Weiner, 2015; Pedrioli & Oxenius, 2021). However, previous studies rely heavily on expensive wet-lab experiments (Cao et al., 2020; Shiakolas et al., 2022). Deep learning-based methods have shown the potential to help antibody discovery by reducing cost and increasing efficiency (Widrich et al., 2020; Wang et al., 2022). Here, we ask whether pre-trained models can benefit real-world problems and enable fast-track discovery of SARS-CoV-2 neutralizing antibodies. In the first step, we develop a sequence classifier to distinguish which antibody sequence, among the numerous sequences, is responsible for the recognition of SARS-CoV-2. This task is highly challenging since sequence-level disease labels, indicating whether an antibody sequence is related to the disease, are hard to obtain. Thus, we follow the practice of Roskin et al. (2020) and Zaslavsky et al. (2022) and use the individual label as a rough sequence label to train a sequence-level predictor. Then, with the help of the sequence-level predictor, we can give each sequence its most likely label to aid antibody discovery; the predictor's reliability is supported by its strong results on individual-level prediction, and it may accelerate the discovery of new antibody sequences. We follow the conditions of Kim et al. (2021) to filter SARS-CoV-2 antibody data from the OAS database. The basic condition is 'Chain = heavy; Isotype = IGHG; BSource = PBMC; Species = human; Vaccine = None'. We further add the condition 'Unique Sequences >= 10000'. For healthy/SARS profiles, we set the 'Disease' field to 'None' and 'SARS-CoV-2', respectively. We then obtain 87/133 patient profiles for the two types. To make a balanced dataset, we limit the number of healthy profiles and mix the healthy profiles with the SARS-CoV-2 ones. For cross-validation, we randomly split the dataset by profiles 10 times: 90% for training and 10% for validation. We further select sequences with top-100 redundancy to make the positive labels more accurate.

A.4 QUANTITATIVE ANALYSIS OF ATUE TASK SPECIFICITY
It is important to include statistical significance tests relating the antibody-specific features to the antibody functional tasks we propose in the ATUE benchmark. According to the evolution process shown in Figure 7, the antibody evolution specificity that distinguishes antibodies from proteins can be defined with two main features: (i) the ancestor germlines; (ii) the amino acids mutated away from the germlines.
We implemented statistical significance tests of (i) ancestor germline subtype usage and (ii) the number of mutated amino acids in antibodies against the labels of the downstream tasks in ATUE to quantitatively assess "task specificity". The analysis is summarized in Table 7. Overall, the ATUE benchmark comprises antibody tasks with different scales of antibody specificity for later modeling analysis. Moreover, these features are used for statistical analysis of task specificity and pre-training objectives in our study.

Antigen Binding. In the antigen binding dataset, both antigen-binding and non-antigen-binding sequences share the same germline subtype sequence (IGHV3.1) (Figure 8A) as well as the same number of germline mutations (Figure 8B). Therefore, neither of the two antibody-specific features shows a significant distribution difference between data with different labels, demonstrating that antigen binding is a task with low antibody specificity.

Paratope Prediction. For the paratope prediction task, we first evaluate the germline subtype distribution difference between sequences with different numbers of binding sites (Figure 9A). A Kruskal-Wallis test gave a p-value of 0.296, suggesting that the difference in germline subtype usage is not statistically significant. Also, we find that binding sites map to significantly more germline mutations than non-binding sites, which is consistent with the definition of antibody specificity (Figure 9B). One out of the two antibody-specific features shows a significant distribution difference between data with different labels. Therefore, we define this task as a medium-specificity task.

B Cell Analysis. As shown in Figure 10, the distributions of germline usage and the number of germline mutations are significantly different between antibodies in B cells at different developmental stages. This observation is highly consistent with previous studies (Mroczek et al., 2014; Ghraichy et al., 2021). Since both antibody-specific features show significant distribution differences, this task is defined as a high-specificity task.

SARS Antibody Discovery. Antibodies in SARS patients and healthy individuals show a significant difference in their germline subtype usage and the number of germline mutations (Figure 11). This observation is highly consistent with previous studies showing that SARS antibodies are convergent among patients (Galson et al., 2020). Since both antibody-specific features are highly significant, this task is defined as a high-specificity task.

A.5 MODEL TRAINING DETAILS
An antibody can be represented as $A = \{a_1, a_2, \cdots, a_m\}$ and the germline of an individual antibody as $G = \{g_1, g_2, \cdots, g_n\}$, where $m$ and $n$ are the lengths. Each token $a_i$ or $g_j$ in the sequence is called a residue and belongs to the amino acid set $\mathcal{A}$. $\mathcal{A}$ includes the 20 common amino acids plus a residue 'X' that indicates the residue is unknown (mostly in the germline). Typically, antibody PLMs are trained with the basic masked language modeling objective $\ell_{\mathrm{MLM}}$ on the antibody sequences $S = A = \{a_1, \cdots, a_m\}$.

A.5.1 EVOLUTION-AWARE PRETRAINING
In order to incorporate the evolutionary information into the pre-training, we pair the antibody sequence $A$ with its germline $G$ and concatenate them into a long sequence with a special token '[SEP]' as the delimiter: $S = \{s_1, \cdots, s_{m+n+1}\} = \{a_1, \cdots, a_m, \mathrm{[SEP]}, g_1, \cdots, g_n\}$.
Thus, we optimize the MLM objective on the long sequence $S$:

$$\ell_{\mathrm{MLM}} = -\frac{1}{|M|}\sum_{i\in M}\log p(s_i \mid S_{\backslash M}), \qquad (1)$$

where $M$ is the index set of masked tokens. It helps the model learn the basic residue distribution of antibody sequences. Besides, it can also capture the interaction between residues of the antibody and its germline.

Ancestor Germline Prediction. The ancestor relationship between the antibody and its germline determines the shared biological functions obtained in evolution. Antibody sequences with similar residues that evolved from different germline sequences may have different biological functions. When stimulated by a foreign antigen, a common ancestor germline evolves into various antibody sequences; similar antibody sequences may have different germline sequences, which affects their biological functions. Thus, the aim of this task is to determine whether the antibody has an evolutionary relationship with the given germline. During training, we substitute the paired germline $G$ with a random germline $G' = \{g'_1, \cdots, g'_n\}$ from the batch with probability $p = 0.3$. The new sequence is denoted as $S' = \{a_1, \cdots, a_m, \mathrm{[SEP]}, g'_1, \cdots, g'_n\}$, and the training loss is

$$\ell_{a} = -\log p(y \mid S'), \qquad (2)$$

where $y \in \{0, 1\}$ indicates whether the (possibly noisy) germline $G'$ is the ancestor of the antibody $S$. This helps the model distinguish the ancestor germline of the antibody by capturing the shared features.

Mutation Position Prediction. The somatic hypermutations on the germline further give progeny antibodies the specificity to bind a specific antigen. In order to model this specificity, this task focuses on predicting the mutation positions and the mutated residues. Specifically, for each token $g_j$ in the germline $G$, the target is to predict a label $y_j \in \{0, 1\}$ indicating whether this token has been mutated. For the antibody sequence $S$, we mask the mutation positions and predict these tokens. The objective can be formalized as

$$\ell_{m} = -\frac{1}{n}\sum_{j\in\{1,\cdots,n\}}\log p(y_j \mid S_{\backslash M'}) - \frac{1}{|M'|}\sum_{i\in M'}\log p(a_i \mid S_{\backslash M'}). \qquad (3)$$

Here, $M'$ is the set of ground-truth mutation positions, and we mask these tokens on the antibody sequence. This task is more difficult than MLM, which masks tokens uniformly across the sequence, because the tokens at the mutation positions of $A$ receive less information from the germline compared with other residues shared between the antibody and the germline. By optimizing this objective, the model learns to capture the specificity obtained from somatic hypermutation in the evolutionary process (a code sketch combining the three losses appears below).

A.5.2 IMPLEMENTATION DETAILS
We use the base Transformer architecture (Vaswani et al., 2017) with 12 layers, 12 heads, and 768 hidden states. The total number of parameters is 86M. We use the Adam optimizer (Kingma & Ba, 2015) with a maximum learning rate of 2e-4 and 24,000 warm-up steps. The maximum length is set to 400 since most antibody sequences are shorter than 180. We first pre-train our model with the MLM objective. During pre-training, 15% of tokens are randomly selected, of which 80% are masked, 10% are replaced, and 10% are kept. Then we conduct further pre-training on the two antibody-related tasks with a smaller learning rate of 1e-5. For each task in ATUE, we fine-tune the model with supervised data. We follow the standard split of Antigen Binding Prediction. For other tasks that do not provide a standard split, we conduct 10-fold cross-validation and report the average results.
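The three objectives in Eqs. (1)-(3) can be combined as sketched below. This is a hedged PyTorch illustration that assumes the encoder already produces the relevant logits; the tensor names and shapes are ours, not the authors'.

```python
import torch.nn.functional as F

def eatlm_losses(mlm_logits, mlm_targets,
                 agp_logit, agp_label,
                 mut_pos_logits, mut_pos_labels,
                 mut_res_logits, mut_res_targets):
    """Sketch of the three pre-training losses (illustrative shapes).

    mlm_logits:      (L, V)    per-token logits over the residue vocabulary
    mlm_targets:     (L,)      residue ids, -100 at unmasked positions
    agp_logit:       ()        ancestor-germline logit; agp_label in {0, 1}
    mut_pos_logits:  (n,)      per-germline-token "was this mutated?" logits
    mut_pos_labels:  (n,)      0/1 mutation indicators
    mut_res_logits:  (|M'|, V) logits at the masked mutation positions
    mut_res_targets: (|M'|,)   true residue ids at those positions
    """
    # Eq. (1): masked language modeling over the concatenated sequence.
    l_mlm = F.cross_entropy(mlm_logits, mlm_targets, ignore_index=-100)

    # Eq. (2): ancestor germline prediction (binary classification).
    l_agp = F.binary_cross_entropy_with_logits(agp_logit, agp_label.float())

    # Eq. (3): mutation position prediction plus recovery of the masked
    # residues at the ground-truth mutation positions.
    l_mpp = (F.binary_cross_entropy_with_logits(mut_pos_logits,
                                                mut_pos_labels.float())
             + F.cross_entropy(mut_res_logits, mut_res_targets))

    return l_mlm, l_agp, l_mpp
```

As described in §A.5.2, the model is first pre-trained with the MLM term alone and then further pre-trained with the AGP and MPP terms at a smaller learning rate.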
Since our pre-training model learns the representation of the full antibody sequence, we expand the CDR fragment to the full antibody by searching the biological database for therapeutic antibody engineering tasks. For fine-tuning, we limit the maximum number of epochs to 30 and use the Adam optimizer with a maximum learning rate of 3e-5. We use the mean representation over the 12 layers as the sequence representation. The model is trained for 108,000 steps and reaches a 0.9606 token accuracy on the MLM task. It is then further pre-trained with AGP and MPP. The model quickly converges on AGP and reaches a 0.99 accuracy on ancestor germline prediction, because more than 80% of residues are shared between the antibody and its germline. For MPP, the model identifies the mutation positions with an accuracy of 1.000 and predicts the mutated residues with an accuracy of 0.442 (EATLM w/o AGP). This means that the model can easily find the mutation positions via self-attention between the antibody and the germline, but it is still difficult to predict which residue a position will mutate to. We assume this is because an ancestor germline can undergo different somatic hypermutations and yield various progeny antibodies, resulting in different valid mutations at the same position. We also compare this mutation accuracy with a model without MPP, trained only with MLM on the concatenation of the antibody and its germline. With a high prediction accuracy of 0.889 across all positions, it achieves only a 0.031 accuracy on the mutations. This implies that masking across all positions of the sequence yields accurate predictions for the shared residues but hardly captures the mutation information. We also apply AGP and MPP to further train the baseline model AntiBERT. The pre-training results are shown in Table 8. We find that without the concatenation of the antibody and its germline, it is difficult to predict the ancestor relationship. It also underperforms EATLM on MPP.

Negative sampling ratio. We tried ratios of 0.1/0.3/0.5/0.75 and found that this ratio has little influence on performance and convergence speed. As discussed above, the model quickly converges on AGP and reaches an accuracy of 0.99.

Finetuned Protein Language Models and Larger Architecture. We pre-train our method with a larger architecture and compare it with ESM-1b, which also has 650M parameters. We also further pre-train the ESM models to transfer them to the antibody domain. We then evaluate them on the antigen binding and paratope prediction tasks. The results are shown in Table 9. They show that the larger architecture does not consistently bring a performance improvement: for antigen binding, ESM-1b performs better than ESM-1, but for paratope prediction it performs worse. In addition, for ESM, further fine-tuning on the antibody dataset may cause overfitting, leading to decreased performance on all three tasks.

A.6 LIMITATIONS OF EATLM
First, EATLM does not use any 3D structure information during pre-training. As a special subgroup of proteins, antibody structures provide much more information, such as geometry, than sequences do. In the future, recruiting structure information for antibody pre-training may improve the results. However, the data available for antibody structures is dramatically scarcer than antibody sequences: the largest dataset of antibody structures only contains thousands of 3D high-resolution structures, while the number of antibody sequences is in the billions.
Using structure prediction methods like AlphaFold may help to bridge the gap between sequences and structures. Second, EATLM requires the germline as an additional input for downstream tasks, which slows down prediction.

A.7 NEW SARS BINDER DISCOVERY
The main challenge for disease diagnosis is to distinguish the disease-related antibodies from millions of antibody sequences in the individual profile, as stated in Section A.3. Here, with the help of a sequence-level predictor, we can give each sequence its most likely label to aid antibody discovery; the predictor's reliability is supported by its strong results on individual-level prediction, and it may accelerate the discovery of new antibody sequences.

SARS Sequence-level Predictor. We first train a sequence-level predictor for SARS-CoV-2. The results are shown in Table 10. Compared with Figure 4 in the main text, we find that good results for the sequence-level predictor do not necessarily translate into good results for antibody discovery. This is mainly due to the noisy sequence-level labels.

Identifying SARS Binders. As shown in Table 3 in the main body, we find 2 true SARS binders and 9 potential binders with the help of EATLM. Specifically, we first use our sequence-level predictor to obtain a probability score for each sequence in the SARS dataset. Then we select the sequences with high-ranked scores (probability > 0.5) and compare them with the public CoV-AbDab database (Raybould et al., 2021; http://opig.stats.ox.ac.uk/webapps/covabdab/), which contains data on published/patented antibodies known to bind SARS-CoV-2. Since the CDR3 fragment of the heavy chain is the most relevant to antibody-antigen binding, we calculate the edit distance between the heavy-chain CDR3 fragments (CDR-H3) and those of the known binders and use a threshold of 85% similarity as the sequence identity (this matching step is sketched in the code below). An 85% Hamming-distance threshold for B cell antibody sequence clustering (identifying similar B cell antibody sequences responding to the same antigen/epitope) was previously suggested by Gupta et al. (2017). This method was then widely used for B cell antibody repertoire analysis in different studies (Montague et al., 2021; Wang et al., 2022).

SARS Binder Analysis. To provide a more intuitive analysis of the similarity between our predicted antibodies and true SARS-CoV-2 binders, we investigate the 3D structure of the true binding antibodies and the mutation site of our predicted sequence on the corresponding structure. The high-resolution structure of true binding antibody #3 in Table 3 in complex with SARS-CoV-2 is shown in Figure 13 (PDB code: 7N62). The interaction interface between the antibodies and the SARS-CoV-2 spike/RBD is shown in Figure 3 in the main body, with CDR-H3 shown in orange. Only a single residue, highlighted in red, differs between the predicted binder and the true binder. This differing residue does not localize to the direct binding site or the CDR-H3 folding core, suggesting that the sequence difference is unlikely to affect the antibody-virus interaction. Furthermore, we find that the epitopes of the 11 identified SARS-CoV-2 antibodies cover a wide range of structures, from the traditional RBD domain to novel non-RBD epitopes such as S2 and NTD, as shown in Table 3. This result shows that our method enables diverse-epitope antibody discovery.

Probability Threshold Sensitivity. In order to investigate the influence of the threshold used to determine the potential binders, we try different thresholds in Table 11.
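Before examining threshold sensitivity, the matching step described above (CDR-H3 identity against CoV-AbDab) can be sketched as follows. The identity definition (one minus the normalized edit distance) and the helper names are our assumptions for illustration, not the authors' code.

```python
def cdrh3_identity(a, b):
    """Sequence identity between two CDR-H3 strings, defined here (an
    assumption) as 1 - edit_distance / max(len); simple DP Levenshtein."""
    m, n = len(a), len(b)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = d[j]
            d[j] = min(d[j] + 1,                       # deletion
                       d[j - 1] + 1,                   # insertion
                       prev + (a[i - 1] != b[j - 1]))  # substitution
            prev = cur
    return 1.0 - d[n] / max(m, n)


def match_binders(cdrh3_seqs, pred_probs, covabdab_cdrh3,
                  prob_thresh=0.5, id_thresh=0.85):
    """Predicted sequences above the probability threshold that match a known
    CoV-AbDab binder at or above the identity threshold count as hits."""
    hits = []
    for seq, p in zip(cdrh3_seqs, pred_probs):
        if p > prob_thresh and any(
                cdrh3_identity(seq, ref) >= id_thresh for ref in covabdab_cdrh3):
            hits.append(seq)
    return hits
```

Raising id_thresh to 0.90 corresponds to the stricter criterion mentioned in Section 4.1.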
In Table 11, the probability threshold means that if the sequence predictor gives a probability higher than the threshold for a sequence, the sequence is viewed as a potential binder. If the predicted binder has a sequence similarity higher than 85% with an existing binder in CoV-AbDab, we count it as one hit. As the threshold score increases, the hit rate correspondingly increases from 0.528% to 0.562%, indicating that our model may enable priority selection of SARS-CoV-2 antibodies and reduce experimental costs.

Sequence Similarity Sensitivity. In previous work, two antibodies with a CDR-H3 similarity over 85% are viewed as similar and have a high probability of sharing the same functionality. Here we also check how different similarity thresholds influence binder matching. The results are shown in Figure 14, with the probability threshold fixed at 0.5. As we can see, the baselines show similar trends across all thresholds. If we relax the threshold, there are more matching sequences; however, the predictors have less advantage over random ordering, which indicates that the ranking is less important when the similarity threshold is relaxed.

The Potential of New Binder Discovery. During the training of our sequence-level predictor, we have no reliable ground-truth labels, which means the model is never told which sequences can bind SARS in a real-world scenario. However, the model can learn from the noisy data and rank the real SARS binders with high probabilities. A sequence identity of 1 means that the CDR-H3 fragment can be found directly in the CoV-AbDab database, which implies that the sequence has been verified by wet-lab testing. The other sequences with an identity over 90% are thought to have binding performance similar to existing binders, indicating that they are promising SARS binders that can help the discovery of therapeutic antibodies for SARS-CoV-2.

A.8 EXTENDED STUDY FOR DISEASE DIAGNOSIS
It would be interesting to see whether our sequence classifier can be used for other applications, such as disease diagnosis. Each human is estimated to maintain about $10^8$-$10^{10}$ distinct antibody sequences, forming an informative encyclopedia recording past and present health and disease. Interpreting the patterns of these sequences has already proved useful in disease diagnosis and allows us to assess many infectious diseases without expensive laboratory testing. However, it is difficult to distinguish which antibody sequence, among the numerous sequences, is responsible for the recognition of a specific antigen, which hinders the discovery of antibodies for diseases (Zaslavsky et al., 2022; Lu et al., 2018; Greiff et al., 2020). Benefiting from recent high-throughput sequencing, we can obtain millions of antibody sequences from an individual human. At the same time, we can obtain a disease label indicating whether the individual is infected by the disease. The main challenge is that sequence-level disease labels, indicating whether an antibody sequence is related to the disease, are hard to obtain. Thus, we follow the practice of Roskin et al. (2020) and use the individual label as a rough sequence label to train a sequence-level predictor. We then use this predictor to score the sequences of an individual profile and take the trimmed mean of the sequence scores as the individual score (a minimal sketch of this aggregation is given below). We use the same data processing as for Antibody Discovery, stated in Section A.3.
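A minimal sketch of this individual-level aggregation follows; the trimming proportion is an assumed value, as the paper does not specify it.

```python
import numpy as np
from scipy.stats import trim_mean

def individual_score(sequence_probs, proportion_to_cut=0.1):
    """Aggregate noisy per-sequence disease probabilities of one repertoire
    into a single individual-level score via a trimmed mean, which is robust
    to mislabeled sequences. `proportion_to_cut` is an assumption."""
    return trim_mean(np.asarray(sequence_probs), proportion_to_cut)

# e.g.: scores = {pid: individual_score(probs) for pid, probs in profiles.items()}
```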
For health/SARS/HIV/Ebola/Allergy/SLE/MS, we set the 'Disease' field to 'None', 'SARS-CoV-2', 'HIV', 'Ebola', 'Allergy', 'SLE', and 'MS', respectively. We then obtain 87/133/51/14/12/8/8 patient profiles for each type. We also perform 10-fold cross-validation and select sequences with high redundancy.

Disease Classification. We use all these disease profiles to build the 7-way (Q7) classification task for disease diagnosis. Previous biological studies mainly use this multi-class classification task for disease diagnosis (Zaslavsky et al., 2022; Wang et al., 2022), highlighting that discriminatory power among different diseases is important for disease diagnosis. The results are shown in Table 12. We find that both PPLMs and PALMs show results comparable to the randomly initialized model, suggesting that the fine-tuning stage plays the more important role and that pre-trained language models do not help this task.

Sequence-level Predictor for Various Diseases. As before, we train a sequence-level predictor for each disease. The results are shown in Table 13. Compared with Table 4 in the main text, we find that good results for the sequence-level predictor do not necessarily mean good results for the individual-level predictor. This is mainly due to the trimmed mean we use to obtain individual-level results, which is a central estimate that is robust to noisy labels. Overall, our model has results comparable to other models for sequence prediction with noisy labels and better results for individual diagnosis.

Individual-level Predictor for Various Diseases. We observe that our evolution-aware EATLM performs best as the individual-level classifier for determining whether a patient is suffering from SARS-CoV-2. Besides, PALMs significantly outperform PPLMs. The results are shown in Table 14.
1. What are the strengths and weaknesses of the paper regarding its contribution to antibody prediction benchmark tasks and loss functions?
2. What are the concerns regarding the clarity, quality, novelty, and reproducibility of the paper's content?
3. Are there any questions about the formal description of the evolution-aware antibody pretraining method?
4. How does the reviewer assess the splitting of datasets into train/test/eval splits?
5. Why is it important to cite Li et al.'s work in the related work section?
6. Do you have any questions regarding the figures and their captions?
Summary Of The Paper
The paper describes 1) five antibody prediction benchmark tasks, and 2) two loss functions for pre-training antibody language models to incorporate the evolutionary relationship of antibodies during pre-training.

Strengths And Weaknesses
Strengths
- I am not aware of an existing benchmark specifically for antibodies.
- The described loss functions for incorporating the evolutionary relationship of antibodies during pre-training are interesting and new as far as I know.

Weaknesses
- The paper is not written clearly enough. The lack of technical details, unclear definitions such as "Task specificity", and spelling errors make it hard to understand the paper.
- Performance improvements are overall small.
- The benchmark contains only five tasks, train/test splits are not justified, and it is unclear if it will be open-sourced. It also does not allow splitting datasets in alternative ways, e.g. by varying the size of the training set or distance to a wildtype sequence.
- The definition of "task specificity" is unclear and needs to be assessed quantitatively. As a consequence, the conclusion that the proposed loss functions improve performance most on the "most specific" tasks is vague.
- Please describe the "Evolution-aware antibody pretraining method" more formally by using equations. Phrases such as "The model is made to distinguish the ancestor germline of the antibody by capturing the shared features" are insufficient for understanding the necessary technical details to reimplement the loss function.
- Please correct spelling and grammatical errors throughout the paper.
- Please describe how and which hyper-parameters of the proposed model and baseline models were tuned.
- Please describe how models were fine-tuned and if they were all fine-tuned in the same way.
- Please compare the number of parameters of baseline models and EATLM (w/o AGP, w/o MPP, AGP & MPP) in Table 1. Performance improvements can be due to different numbers of parameters rather than differences in the loss function.
- Please justify how datasets were split into train/test/eval splits. Sequences of the train and test set can be very similar if, e.g., datasets are split randomly. What does "training/validation/test split of 15,128/3,242/3,242", for example, mean?
- The benchmark lacks regression tasks to assess the performance of, e.g., continuous binding affinities (10.48550/arXiv.2210.02881).
- Please cite Li et al. (10.48550/arXiv.2210.02881) in the related work section, who recently proposed an antibody benchmark with two tasks.
- Please describe whether benchmark datasets and baseline models will be open-sourced.
- Table 2: Please separate metrics of different tasks by vertical lines. It is hard to follow which metrics belong to which tasks.
- Figure 3: The caption is unclear. Does it show a confusion matrix of model predictions vs. ground-truth labels? The performance of which model is shown? How do per-class performances vary across models? Which class is hardest to predict?
- Figure 4: Also quantify performances by reporting the AUC and alternative ranking metrics such as Spearman's R or NDCG score.

Clarity, Quality, Novelty And Reproducibility
The paper is not written clearly enough, lacks technical details, and contains many spelling errors. It is unclear if the proposed benchmark and methods will be open-sourced.
ICLR
Title
MSFM: Multi-Scale Fusion Module for Object Detection

Abstract
Feature fusion is beneficial to object detection tasks in two ways. On one hand, detail and position information can be combined with semantic information when high- and low-resolution features from shallow and deep layers are fused. On the other hand, objects can be detected at different scales, which improves the robustness of the framework. In this work, we present a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantic information from a single input but at different scales within the same layer. Specifically, the input of the module is resized into different scales, on which position and semantic information is processed, and the results are then rescaled back and combined with the module input. The MSFM is lightweight and can be used as a drop-in layer in many existing object detection frameworks. Experiments show that MSFM can bring a +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with a ResNet-50 FPN backbone on the COCO Object Detection minival set, outperforming the ResNet-101 FPN backbone without the module, which obtains +2.0% mAP with 19.0M extra parameters. The best resulting model achieves a 45.7% mAP on the test-dev set. Code will be available.

1 INTRODUCTION
Object detection is one of the fundamental tasks in computer vision. It requires the detector to localize the objects in the image using bounding boxes and assign the correct category to each of them. In recent years, deep convolutional neural networks (CNNs) have seen great success in object detection, which can be divided into two categories: two-stage detectors, e.g., Faster R-CNN (Ren et al., 2015), and one-stage detectors, e.g., SSD (Liu et al., 2016). Two-stage detectors have high localization and recognition accuracy, while one-stage detectors achieve high inference speed (Jiao et al., 2019). A typical two-stage detector consists of a backbone, a neck, a Region Proposal Network (RPN), and a Region of Interest (ROI) head (Chen et al., 2019). A backbone is a feature extractor usually pre-trained on the ImageNet dataset (Deng et al., 2009). A neck could be a Feature Pyramid Network (FPN) (Lin et al., 2017a) that fuses the features from multiple layers.
An RPN proposes candidate object bounding boxes, and an ROI head performs box regression and classification (Ren et al., 2015). Compared to two-stage detectors, one-stage detectors propose predicted bounding boxes directly from the input image without the region proposal step, and are thus more efficient (Jiao et al., 2019). One of the key challenges in object detection is to solve the two subtasks, namely localization and classification, in a coordinated manner. Localization requires the network to capture the object position accurately, while classification expects the network to extract the semantic information of the objects. Due to the layered structure of CNNs, detail and position-accurate information resides in shallow but high-resolution layers, whereas high-level and semantically strong information exists in deep but low-resolution layers (Long et al., 2014). Another key challenge is scale invariance: the detector is expected to handle different object scales (Liu et al., 2016). Feature fusion helps object detectors address both challenges. On one hand, through multi-layer fusion (Chen et al., 2020), detail and position information can be combined with semantic information when high- and low-resolution features from shallow and deep layers are fused. On the other hand, by fusing the results from different receptive fields (Yu & Koltun, 2016) or scales (Li et al., 2019) via dilated convolutions or different kernel sizes (Szegedy et al., 2014), objects can be detected at different scales, which improves the robustness of the model. In this paper, we present a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantic information from a single input but at different scales within the same layer. Specifically, the input of the module is resized into different scales, on which position and semantic information is processed, and the results are then rescaled back and combined with the module input. The MSFM is lightweight and can be used as a drop-in layer in many existing object detection frameworks, complementing shallow and deep layers with semantic and position information. Experiments show that MSFM can bring a +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with a ResNet-50 FPN backbone on the COCO Object Detection (Lin et al., 2014) minival set, outperforming the ResNet-101 FPN backbone without the module, which obtains +2.0% mAP with 19.0M extra parameters. When applied to other frameworks, it also shows about a +2.0% mAP improvement, which demonstrates its generalizability. The best resulting model achieves a 45.7% mAP on the test-dev set.

2 RELATED WORK
2.1 MULTI-LAYER FEATURE FUSION
FPN (Lin et al., 2017a) is the de facto multi-layer feature fusion module in modern CNNs, compensating for the position information loss in deep layers and the lack of semantic information in shallow layers. By upsampling the deep features and fusing them with shallow features through a top-down path, it enables the model to coordinate the heterogeneous information and enhances robustness. NAS-FPN (Ghiasi et al., 2019) designs a NAS (Zoph & Le, 2017) search space that covers all possible cross-layer connections, the result of which is a laterally repeatable FPN structure sharing the same dimensions between its input and output. FPG (Chen et al., 2020) proposes a multi-pathway feature pyramid, representing the feature scale-space as a regular grid of parallel bottom-up pathways fused by multi-directional lateral connections.
EfficientDet (Tan et al., 2020) adopts a weighted bi-directional feature pyramid network for multi-layer feature fusion. M2Det (Zhao et al., 2018) presents a multi-level feature pyramid network, fusing features with the same depth and dimension from multiple sequentially connected hourglass-like modules to generate multi-scale feature groups for prediction. Similar structures can also be seen in DSSD (Fu et al., 2017), TDM (Shrivastava et al., 2016), YOLOv3 (Redmon & Farhadi, 2018), and RefineDet (Zhang et al., 2017).

2.2 MULTI-BRANCH FEATURE FUSION
In Inception (Szegedy et al., 2014), kernels on the Inception Module branches have different sizes, which makes the output of the module contain different receptive fields. However, a large kernel contains a large number of parameters. Dilated convolution instead allows a kernel to have an enlarged receptive field while keeping the parameter count unchanged. MCA (Yu & Koltun, 2016) utilizes dilated convolutions to systematically aggregate multi-scale contextual information. Going even further, TridentNet (Li et al., 2019) lets multiple convolutions share the same weights but with different dilation rates to explore a uniform representational capability.

3 MULTI-SCALE FUSION MODULE
In this section, we present our Multi-Scale Fusion Module (MSFM) and the possible configurations when inserting it into existing frameworks.

3.1 MODULE DEFINITION
An instantiation of MSFM is shown in Figure 1a. It can be formulated as follows:

$$M(x) = x + U\big(C[F_1(S(x)), F_2(S(x)), \ldots, F_n(S(x))]\big)$$

where $x$ is the module input, $M(x)$ is the module output, $S(\cdot)$ is the squeeze module that makes the input $x$ thinner, $F_n(\cdot)$ is the operation on the $n$-th branch, $C(\cdot)$ is the combination function, and $U(\cdot)$ is the unsqueeze module that restores the depth of the branch output to match $x$. The branch operation $F_n(\cdot)$ can be represented as:

$$F_n(a) = R_n^{-1}\big(\mathrm{CGN}_{n,i}(\mathrm{CGN}_{n,i-1}(\cdots(\mathrm{CGN}_{n,1}(R_n(a)))))\big)$$

where $a = S(x)$ is the result of the squeeze module, $R_n(\cdot)$ is the resize function on the $n$-th branch, $\mathrm{CGN}_{n,i}$ is the $i$-th {Conv2D => GroupNormalization => NonLinearity} operation on the $n$-th branch, and $R_n^{-1}$ is the resize function that restores the feature dimensions (height and width). To make the module lightweight, we utilize a bottleneck-like (He et al., 2015) structure where the module input is first thinned channel-wise and then fed into the branches. The branch input is resized using bilinear interpolation, and the same method is used when resizing the feature back to its original size. All the 3x3 convolutions on the branches use padding=1 to keep the spatial dimensions unchanged, and the number of output channels equals the number of input channels. We choose ReLU as the nonlinearity activation in the MSFM. By default, MSFM is inserted in stages 2, 3, and 4 for ResNet backbones (He et al., 2015). A minimal PyTorch sketch of the module is given below.

3.2 CONFIGURATIONS
MSFM acts as a drop-in layer for existing frameworks. To show several possible configurations when inserting it into an object detector, we take inserting it into a ResNet backbone as an example. A Residual Bottleneck (He et al., 2015) in ResNet (He et al., 2016) is shown in Figure 1b. Some tunable hyperparameters we can configure are listed in Table 1.

4 EXPERIMENTS
To evaluate the proposed module, we carry out experiments on object detection and instance segmentation tasks on COCO (Lin et al., 2014). Experimental results demonstrate that the MSFM can enhance the performance of common two-stage object detection frameworks with very light computational overhead.
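For concreteness, the following is a minimal PyTorch sketch of the MSFM described in Section 3.1. It follows the formulation above (squeeze, per-branch resize plus CGN blocks, resize back, combination by add, residual connection), but the 1x1 squeeze/unsqueeze convolutions and other details are our assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSFM(nn.Module):
    """Multi-Scale Fusion Module sketch (hyper-parameter names follow the text).
    Note: norm_groups must divide channels // squeeze_ratio."""

    def __init__(self, channels, scales=(0.5, 0.7, 1.0), squeeze_ratio=16,
                 norm_groups=16, conv_num=1):
        super().__init__()
        mid = max(channels // squeeze_ratio, 1)
        self.squeeze = nn.Conv2d(channels, mid, kernel_size=1)      # S(.)
        self.scales = scales
        self.branches = nn.ModuleList()
        for _ in scales:                                            # F_n(.)
            layers = []
            for _ in range(conv_num):                               # CGN blocks
                layers += [nn.Conv2d(mid, mid, kernel_size=3, padding=1),
                           nn.GroupNorm(norm_groups, mid),
                           nn.ReLU(inplace=True)]
            self.branches.append(nn.Sequential(*layers))
        self.unsqueeze = nn.Conv2d(mid, channels, kernel_size=1)    # U(.)

    def forward(self, x):
        h, w = x.shape[-2:]
        a = self.squeeze(x)
        out = 0
        for s, branch in zip(self.scales, self.branches):
            y = a if s == 1.0 else F.interpolate(                   # R_n(.)
                a, scale_factor=s, mode='bilinear', align_corners=False)
            y = branch(y)
            if y.shape[-2:] != (h, w):                              # R_n^{-1}(.)
                y = F.interpolate(y, size=(h, w), mode='bilinear',
                                  align_corners=False)
            out = out + y                                           # C = add
        return x + self.unsqueeze(out)                              # residual
```

For example, MSFM(channels=256, scales=(0.5, 0.7, 1.0), squeeze_ratio=16, norm_groups=16) could be dropped in after conv3 of a stage-2 Residual Bottleneck; the squeeze ratios 16/32/64 for stages 2/3/4 reported later keep the squeezed width at 16 channels in each stage.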
4.1 EXPERIMENT SETUP
We perform hyperparameter tuning on Faster R-CNN with a ResNet-50 FPN backbone (Ren et al., 2015). Unless otherwise stated, the backbone of any framework mentioned is ResNet-50 FPN. To test the generalizability of MSFM, experiments are also conducted on Faster R-CNN with a ResNet-101 FPN backbone (Ren et al., 2015), Mask R-CNN (He et al., 2017), Cascade R-CNN (Cai & Vasconcelos, 2017), Grid R-CNN (Lu et al., 2018), Dynamic R-CNN (Zhang et al., 2020), RetinaNet (Lin et al., 2017b), RepPoints (Yang et al., 2019), and Faster R-CNN with ResNet-50 FPN and Deformable Convolution on c3-c5 (Dai et al., 2017). We carry out our experiments on object detection and instance segmentation tasks on COCO (Lin et al., 2014), whose train set contains 118k images, minival set 5k images, and test-dev set 20k images. Mean average-precision (mAP) scores at different box and mask IoUs are adopted as the metrics when evaluating object detection and instance segmentation tasks. Our experiments are implemented with PyTorch (Paszke et al., 2019) and MMDetection (Chen et al., 2019). The input images are resized such that the shorter side is no longer than 800 pixels and the longer side is no longer than 1333 pixels. All models are trained on 8 GPUs with 2 images per GPU. The backbones of all models are pretrained on the ImageNet classification dataset (Deng et al., 2009). Unless otherwise stated, all models are trained for 12 epochs using SGD with a weight decay of 0.0001 and a momentum of 0.9. The learning rate is set to 0.02 initially and decays by a factor of 10 at the 8th and 11th epochs. Linear learning rate warmup is adopted for the first 500 steps with a warmup ratio of 0.001.

4.2 ABLATION STUDIES
The ablation studies are performed on the COCO 2017 (Lin et al., 2014) minival set. Unless otherwise stated, the MSFM in the following experiments has the default configuration: the insertion position is after conv3, the resize scales of the three branches are 0.5, 0.7, and 1, respectively, the squeeze ratios are 16, 32, and 64 for stages 2, 3, and 4 of ResNet-50 (He et al., 2015), respectively, the number of groups in Group Normalization (Wu & He, 2018) is 16, only one {Conv2D, Group Normalization, Nonlinearity} operation is adopted on all branches, and the method to combine the branch results is add.

4.2.1 SCALES
As can be seen from the Scales part of Table 2, small scales (3S=[0.5, 0.7, 1], 5S=[0.5, 0.6, 0.7, 0.85, 1]) are helpful for detecting large objects, while large scales (3L=[1, 1.4, 2]) can enhance the detection of small objects. Compared to using only small or only large scales, using compound scales (4=[0.5, 0.7, 1.4, 2], 5=[0.5, 0.7, 1, 1.4, 2]) turns out to be the optimal option, achieving better overall performance. This indicates that simultaneously generating and inserting detail and semantic information into the same layer is beneficial.

4.2.2 RATIOS
We compare the effect of different squeeze ratios for different insertion positions, as shown in the Ratios part of Table 2. For position=after conv3, as we increase the ratios, the model experiences more information loss but less computational overhead; therefore, the ratios of 16, 32, and 64 for stages 2, 3, and 4, respectively, are a good trade-off between information loss and computational overhead. For position=after conv1 (norm group=8), MSFM is not sensitive to the change of ratios. We conjecture that this is because the channel number is already so low after conv1 that changing it further has no effect.
4.2.3 NORM GROUP We explore the optimal group number for Group Normalization (Wu & He, 2018) when inserting into different positions. As can be seen from the Norm group part of Table 2, the best group numbers for after conv3, after conv2, and after conv1 are 32, 4, and 8, respectively. Because the channel number is much larger for after conv3 than for after conv1 and after conv2, the group number for Group Normalization (Wu & He, 2018) is much larger for after conv3. 4.2.4 CONV NUM All the Conv num experiments in Table 2 are conducted with Norm group=32. 2* indicates that only the branches with scales larger than 1 have 2 {Conv2D, Group Normalization, Nonlinearity} operations. As we can see, the model with scale=[0.5, 0.7, 1, 1.4, 2] and conv num=2 achieves the best performance. Moreover, all models with conv num=2 achieve better or at least comparable performance to those with conv num=2*, which indicates that coordinated representational power across all branches is important, even though the branches do not have the same receptive field size. 4.2.5 FUSION TYPE Add and concatenation are two typical feature fusion operations and natural alternatives. We compare their effects in the models with position=after conv1 and those with position=after conv3. The results in Table 2 show that concatenation is slightly better than add. 4.2.6 MULTI-POSITION INSERTION Based on the experimental results and analysis above, we carry out a multi-position insertion ablation study to see the effect of inserting MSFM at multiple positions. All the experiments in this part use the following configuration for all models: the resize scales of all branches are 0.5, 0.7, 1, 1.4, and 2, the squeeze ratios for stages 2, 3, and 4 are 16, 32, and 64, respectively, the number of {Conv2D, Group Normalization, Nonlinearity} operations on all branches is 2, and the combination method is add. The number of groups used in Group Normalization (Wu & He, 2018) is 8, 4, and 32 for after conv1, after conv2, and after conv3, respectively. As can be seen from the results in Table 4, the combination of after conv2 and after conv3 turns out to be the best configuration, which we use as the default configuration when applying the MSFM to other frameworks. 4.3 RESULTING MODELS To test the generalizability of the proposed MSFM, we apply it to multiple frameworks. The results are shown in Table 4 and Table 5. For a fair comparison, all baseline models are re-trained. As we can see, there is a consistent improvement across these models when the MSFM is applied, which demonstrates that the MSFM can be used as a drop-in layer for many existing object detection frameworks. Notice that when MSFM is applied to Faster R-CNN with a ResNet-50 FPN backbone (Ren et al., 2015), the performance of the model even surpasses that of the ResNet-101 FPN backbone. This indicates that adding the MSFM to existing frameworks is more efficient than simply adding more convolutional layers. We also train a Cascade R-CNN with a ResNet-101 FPN backbone for 24 epochs using multi-scale training and submit the results to the evaluation server. The result in Table 6 shows that it achieves a 45.7% mAP on the test-dev set. 5 CONCLUSION In this paper, we have presented a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantic information from a single input but at different scales within the same layer.
Ablation studies have demonstrated that MSFM brings a +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with a ResNet-50 FPN backbone on the COCO Object Detection minival set, outperforming the variant that upgrades to a ResNet-101 FPN backbone without the module, which obtains +2.0% mAP with 19.0M extra parameters. The best resulting model, a Cascade R-CNN with a ResNet-101 FPN backbone, achieves a 45.7% mAP on the COCO Object Detection test-dev set.
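For reference, the configuration selected by the ablation studies above can be condensed into a single summary; the dictionary below is our own reading of Sections 4.2.1–4.2.6, and the key names are illustrative rather than taken from released code.

```python
# Our summary of the best-performing MSFM configuration found in the ablations;
# key names are illustrative and not part of any released code.
best_msfm_config = dict(
    insert_positions=["after_conv2", "after_conv3"],  # best multi-position combination (Sec. 4.2.6)
    scales=[0.5, 0.7, 1.0, 1.4, 2.0],                 # compound small + large scales (Sec. 4.2.1)
    squeeze_ratios={"stage2": 16, "stage3": 32, "stage4": 64},           # Sec. 4.2.2
    norm_groups={"after_conv1": 8, "after_conv2": 4, "after_conv3": 32},  # Sec. 4.2.3
    conv_num=2,                                       # two {Conv2D, GN, ReLU} blocks per branch (Sec. 4.2.4)
    fusion="add",                                     # combination used in the multi-position study (Sec. 4.2.6)
)
```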
1. What is the focus of the paper regarding object detection? 2. What are the strengths of the proposed approach, particularly its performance gains? 3. What are the weaknesses of the paper, especially regarding its novelty and computational expense? 4. Do you have any concerns about the comparison with other related methods?
Review
Review The paper proposes a multi-scale feature fusion block and inserts the block into ResNet backbones for object detection. It is very similar to the Inception block in InceptionNets. The only difference is that the proposed feature fusion contains feature map upsampling and downsampling (resize and resize^{-1}) for different branches. The paper has some merits as follows. The method has been evaluated on different object detection frameworks, such as Faster R-CNN, Cascade R-CNN, Grid R-CNN, Dynamic R-CNN, RepPoints, etc. The method obtains obvious performance gains on different frameworks with a few additional parameters. However, the flaws are obvious as follows. The novelty is very limited. The Inception block is widely used in deep learning, and the paper is only a small modification over Inception. The novelty is far below the bar of ICLR. The proposed feature fusion block is very computationally expensive. The upsampling procedure makes the computation cost of the 3 × 3 convolutions very high. The paper only reports additional parameters but does not report the additional computation cost (e.g., additional FLOPs and running time). Related methods are not compared. At a minimum, inserting Inception blocks into ResNets could be compared.
ICLR
1. What is the focus of the paper, and what is the proposed approach's contribution to feature fusion? 2. What are the strengths of the proposed method, particularly its impact on detection tasks? 3. What are the weaknesses of the paper regarding its quality, clarity, and ablation studies? 4. How can the figures and notations be improved for better understanding? 5. Are there any suggestions for providing more evidence or confidence intervals for the metrics?
Review
Review This paper proposes a new general feature fusion operation, the Multi-Scale Fusion Module (MSFM). By adding MSFM layers between feature extraction layers, it is observed that the detection result is improved with a minor number of added parameters. Pros: Good to see new work exploring various ways to perform feature fusion. Cons: Major comments: (1) The quality and clarity of the paper need to be improved; for example, Table 3 has a shifted horizontal line. (2) The ablation studies clarify the effects of changing configurations, but do not provide much evidence on why the MSFM module helps with the detection task. Some minor comments: (1) Figure 1a could be further clarified by adding the notations mentioned in the equations to the figure. (2) It would be good to report the variances/confidence intervals of the metrics as well.
ICLR
1. What is the main contribution of the paper regarding scale-friendly feature fusion? 2. What are the strengths of the paper, particularly in its results and writing quality? 3. What are the weaknesses of the paper regarding its novelty and comparisons with other works? 4. How does the reviewer assess the significance of the contribution, particularly in relation to multi-scale backbone networks? 5. What additional comparisons should the paper include to provide a more comprehensive evaluation of the proposed method? 6. Is there any concern regarding the backbone used in the experiments, and how might this impact the analysis of the results?
Review
Review In this paper, the authors study the problem of scale-friendly feature fusion for object detection. Specifically, the authors propose to process features at each layer of a feature pyramid network at multiple scales and fuse them back into a single scale. To be specific, they resize features at a layer into multiple scales, process these rescaled features independently, rescale them back into the original scale, and combine them with the original features. Strengths: Scale is an important problem in object detection and the paper addresses an important issue. Strong results showing significant improvements, around ~2 AP, over baselines, including strong detectors like RepPoints. Overall, the paper is very well written. I didn't find any typos or grammatical errors, which is very rare for a thorough reviewer like me. Weaknesses: The novelty is limited. Multi-scale processing at a layer has been extensively studied with ResNeXt-type and Inception-type architectures. The paper just takes such ideas and uses them with FPN without any technical or theoretical insights or contributions. In other words, this appears to be just FPN with an Inception module. It would have been nicer for the authors to motivate how/whether the novelty goes beyond this. The paper does not make a comparison with multi-scale backbone networks such as ResNeXt. It has been shown that such multi-scale architectures improve the detection performance compared to ResNet-type architectures, and this comparison is very crucial for the reader to grasp the significance of the contribution, if any. The paper does not make a comparison with methods trying to address limitations of FPN. There are many papers that extend FPN to address scaling issues. It is very crucial for the reader to see how the proposed solution performs in comparison with those methods. It would have been nicer to see in Table 4 the details on the type of the backbone used. This might be a very crucial factor in analysing the differences in the gap among the different models. AFTER AUTHOR RESPONSE I have read the comments of the other reviewers, which revealed that all reviewers identified the same major issues with the paper (novelty and evaluation). The authors did not provide a rebuttal but kindly thanked the reviewers and stated their intention to improve the paper with the reviewer comments and submit it to a future venue. Therefore, I changed my overall rating to rejection.
ICLR
Title MSFM: Multi-Scale Fusion Module for Object Detection Abstract Feature fusion is beneficial to object detection tasks in two folds. On one hand, detail and position information can be combined with semantic information when high and low-resolution features from shallow and deep layers are fused. On the other hand, objects can be detected in different scales, which improves the robustness of the framework. In this work, we present a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantical information from a single input but at different scales within the same layer. Specifically, the input of the module will be resized into different scales on which position and semantic information will be processed, and then they will be rescaled back and combined with the module input. The MSFM is lightweight and can be used as a drop-in layer to many existing object detection frameworks. Experiments show that MSFM can bring +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with ResNet-50 FPN backbone on COCO Object Detection minival set, outperforming that with ResNet-101 FPN backbone without the module which obtains +2.0% mAP with 19.0M extra parameters. The best resulting model achieves a 45.7% mAP on test-dev set. Code will be available. N/A Feature fusion is beneficial to object detection tasks in two folds. On one hand, detail and position information can be combined with semantic information when high and low-resolution features from shallow and deep layers are fused. On the other hand, objects can be detected in different scales, which improves the robustness of the framework. In this work, we present a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantical information from a single input but at different scales within the same layer. Specifically, the input of the module will be resized into different scales on which position and semantic information will be processed, and then they will be rescaled back and combined with the module input. The MSFM is lightweight and can be used as a drop-in layer to many existing object detection frameworks. Experiments show that MSFM can bring +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with ResNet-50 FPN backbone on COCO Object Detection minival set, outperforming that with ResNet-101 FPN backbone without the module which obtains +2.0% mAP with 19.0M extra parameters. The best resulting model achieves a 45.7% mAP on test-dev set. Code will be available. 1 INTRODUCTION Object detection is one of the fundamental tasks in computer vision. It requires the detector to localize the objects in the image using bounding boxes and assign the correct category to each of them. In recent years, deep convolutional neural networks (CNNs) have seen great success in object detection, which can be divided into two categories: two-stage detectors, e.g., Faster R-CNN (Ren et al., 2015), and one-stage detectors, e.g., SSD (Liu et al., 2016). Two-stage detectors have high localization and recognition accuracy, while one-stage detectors achieve high inference speed (Jiao et al., 2019). A typical two-stage detector consists of a backbone, a neck, a Region Proposal Network (RPN), and a Region of Interest (ROI) head (Chen et al., 2019). A backbone is a feature extractor usually pre-trained on ImageNet dataset (Deng et al., 2009). A neck could be a Feature Pyramid Network (FPN) (Lin et al., 2017a) that fuses the features from multiple layers. 
A RPN proposes candidate object bounding boxes, and a ROI head is for box regression and classification (Ren et al., 2015). Compared to two-stage detectors, one-stage detectors propose predicted bounding boxes directly from the input image without the region proposal step, thus being more efficient (Jiao et al., 2019). One of the key challenges in object detection is to solve the two subtasks, namely localization and classification, coordinately. Localization requires the network to capture the object position accurately, while classification expects the network to extract the semantic information of the objects. Due to the layered structure of the CNNs, detail and position-accurate information resides in shallow but high-resolution layers; however, high-level and semantically strong information exists in deep but low-resolution layers (Long et al., 2014). Another key challenge is scale invariance that the detector is expected to be capable of handling different object scales (Liu et al., 2016). Feature Fusion is beneficial to object detectors in solving the two challenges. On one hand, through multi-layer fusion (Chen et al., 2020), detail and position information can be combined with semantic information when high and low-resolution features from shallow and deep layers are fused. On the other hand, by fusing the results from different receptive fields (Yu & Koltun, 2016) or scales (Li et al., 2019) via dilated convolutions or different kernel sizes (Szegedy et al., 2014), objects can be detected in different scales, which improves the robustness of the model. In this paper, we present a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantical information from a single input but at different scales within the same layer. Specifically, the input of the module will be resized into different scales on which position and semantic information will be processed, and then they will be rescaled back and combined with the module input. The MSFM is lightweight and can be used as a drop-in layer to many existing object detection frameworks, complementing shallow and deep layers with semantic and position information. Experiments show that MSFM can bring +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with ResNet-50 FPN backbone on COCO Object Detection (Lin et al., 2014) minival set, outperforming that with ResNet-101 FPN backbone without the module which obtains +2.0% mAP with 19.0M extra parameters. When applied on other frameworks, it also shows about +2.0% mAP improvement, which show its generalizability. The best resulting model achieves a 45.7% mAP on test-dev set. 2 RELATED WORK 2.1 MULTI-LAYER FEATURE FUSION FPN (Lin et al., 2017a) is the de facto multi-layer feature fusion module in modern CNNs to compensate for the position information loss in the deep layer and lack of semantic information in shallow layers. By upsampling the deep features and fusing them with shallow features through a top-down path, it enables the model to coordinate the heterogenous information and enhances the robustness. NAS-FPN (Ghiasi et al., 2019) designs a NAS (Zoph & Le, 2017) search space that covers all possible cross-layer connections, the result of which is a laterally repeatable FPN structure sharing the same dimensions between its input and output. FPG (Chen et al., 2020) proposes a multi-pathway feature pyramid, representing the feature scale-space as a regular grid of parallel bottom-up pathways fused by multi-directional lateral connections. 
EfficientDet (Tan et al., 2020) adopts a weighted bi-directional feature pyramid network for multi-layer feature fusion. M2Det (Zhao et al., 2018) presents a multi-level feature pyramid network, fusing the features with the same depth and dimension from multiple sequentially connected hourglass-like modules to generate multi-scale feature groups for prediction. Similar structures can also be seen in DSSD (Fu et al., 2017), TDM (Shrivastava et al., 2016), YOLOv3 (Redmon & Farhadi, 2018), and RefineDet (Zhang et al., 2017). 2.2 MULTI-BRANCH FEATURE FUSION In Inception (Szegedy et al., 2014), kernels on Inception Module branches have different sizes, which makes the output of the module contain different receptive fields. However, a large kernel contains a large number of parameters. Instead, dilated convolution allows a kernel to have an enlarged receptive field while keeping the parameter size unchanged. MCA (Yu & Koltun, 2016) utilizes dilated convolutions to systematically aggregate multi-scale contextual information. Going even further, TridentNet (Li et al., 2019) lets multiple convolutions share the same weight but with different dilation rates to explore a uniform representational capability. 3 MULTI-SCALE FUSION MODULE In this section, we present our Multi-Scale Fusion Module (MSFM) and the possible configurations when inserting it into existing frameworks. 3.1 MODULE DEFINITION An instantiation of MSFM is shown in Figure 1a. It can be formulated as follows: M(x) = x+ U{C[F1(S(x)), F2(S(x)), ..., Fn(S(x))]} where x is the module input, M(x) is the module output, S() is the squeeze module that makes the input x thinner, Fn() is the operation on n-th branch, C() is the combination function, and U() is the unsqueeze module which will restore the depth of the branch output to make it the same as x. The branch operation Fn() can be represented as below: Fn(a) = R −1 n (CGNn,i(CGNn,i−1(...(CGNn,1(Rn(a)))))) where a = S(x) is the result of squeeze module, Rn() is the resize function on n-th branch, CGNn,i is the i-th {Conv2D ⇒ GroupNormalization ⇒ NonLinearity} operation on n-th branch, R−1n is the resize function to restore the feature dimension (height and width). To make the module lightweight, we utilize a bottleneck-like (He et al., 2015) structure where the module input will first be thinned channel-wise, then fed into the branches. Branch input is resized using bilinear interpolation, and the same method is used when resizing the feature back to its original size. All the 3x3 convolutions on the branches have the padding=1 to keep the spatial dimension unchanged, and the number of the output channel is the same as that of the input channel as well. We choose ReLU as the nonlinearity activation in the MSFM. By default, MSFM is inserted in stages 2, 3, and 4 for ResNet backbones (He et al., 2015). 3.2 CONFIGURATIONS MSFM acts as a drop-in layer to existing frameworks. To show several possible configurations when inserting it into an object detector, we take as an example inserting it into a ResNet backbone. A Residual Bottleneck (He et al., 2015) in ResNet (He et al., 2016) is shown in Figure 1b. Some tunable hyperparameters we can configure are listed in Table 1. 4 EXPERIMENTS To evaluate the proposed module, we carry out experiments on object detection and instance segmentation tasks on COCO (Lin et al., 2014). Experimental results demonstrate that the MSFM can enhance the performance of common two-stage object detection frameworks with very light computational overhead. 
4.1 EXPERIMENTS SETUP We perform hyperparameter tuning on Faster R-CNN with ResNet-50 FPN backbone (Ren et al., 2015). Unless otherwise stated, the backbone of the framework being mentioned is ResNet-50 FPN. To test the generalizability of MSFM, experiments are also conducted on Faster R-CNN with ResNet-101 FPN backbone (Ren et al., 2015), Mask R-CNN (He et al., 2017), Cascade R-CNN (Cai & Vasconcelos, 2017), Grid R-CNN (Lu et al., 2018), Dynamic R-CNN (Zhang et al., 2020), RetinaNet (Lin et al., 2017b), Reppoints (Yang et al., 2019), and Faster R-CNN with ResNet-50 FPN and Deformable Convolution on c3-c5 (Dai et al., 2017). We carry out our experiments on object detection and instance segmentation tasks on COCO (Lin et al., 2014), whose train set contains 118k images, minival set 5k images, and test-dev set 20k images. Mean average-precision (mAP) scores at different boxes and mask IoUs are adopted as the metrics when evaluating object detection and instance segmentation tasks. Our experiments are implemented with PyTorch (Paszke et al., 2019) and MMDetection (Chen et al., 2019). The input images are resized such that the shorter side is no longer than 800 pixels. and the longer side is no longer than 1333 pixels. All the models are trained on 8 GPUs with 2 images per GPU. The backbone of all models are pretrained on ImageNet classification dataset (Deng et al., 2009). Unless otherwise stated, all models are trained for 12 epochs using SGD with a weight decay of 0.0001, and a momentum of 0.9. The learning rate is set to 0.02 initially and decays by a factor of 10 at the 8th and 11th epochs. Learning rate linear warmup is adopted for first 500 steps with a warmup ratio of 0.001. 4.2 ABLATION STUDIES The ablation studies are performed on COCO 2017 (Lin et al., 2014) minival set. Unless otherwise stated, the MSFM in the following experiments has the default configuration: the insertion position is after conv3, the resize scales of three branches are 0.5, 0.7, and 1, respectively, the squeeze ratios are 16, 32, and 64 for stage 2, 3, and 4 of ResNet-50 (He et al., 2015), respectively, the number of groups in Group Normalization (Wu & He, 2018) is 16, only one {Conv2D, Group Normalization, Nonlinearity} operation is adopted on all branches, and the method to combine the branch results is add. 4.2.1 SCALES As can be seen from Table 2 Scales part, small scales (3S=[0.5, 0.7, 1], 5S=[0.5, 0.6, 0.7, 0.85, 1]) are helpful for detecting large objects, while large scales (3L=[1, 1.4, 2]) can enhance the detection of small objects. Compared to only using small or large scales, using compound scales (4=[0.5, 0.7, 1.4, 2], 5=[0.5, 0.7, 1, 1.4, 2]) turn out to be the optimal option, which can achieve better overall performance. This indicates that simultaneously generating and inserting detail and semantic information to the same layer is beneficial. 4.2.2 RATIOS We compare the effect of different squeeze ratios for different insertion positions, shown in Table 2 Ratios part. For position=after conv3, as we increase the ratios, the model will experience more information loss but less computational overhead; therefore, the ratios of 16, 32, and 64 for stages 2, 3 and 4, respectively, can be a good trade-off between information loss and computational overhead. For postion=after conv1 (norm group=8), MSFM is not sensitive to the change of ratios. We guess that it might be because the channel number is already so low after conv1 that changing its channel number will have no further effect. 
4.2.3 NORM GROUP We explore the optimal group number for Group Normalization (Wu & He, 2018) when inserting into different positions. As we can see from the Norm group part of Table 2, the best group numbers for after conv3, after conv2, and after conv1 are 32, 4, and 8, respectively. Because the channel number is much larger for after conv3 compared to after conv1 and after conv2, the group number for Group Normalization (Wu & He, 2018) is much larger for after conv3. 4.2.4 CONV NUM All the Conv num experiments in Table 2 are conducted with Norm group=32. 2* indicates that only the branches with scales larger than 1 have 2 {Conv2D, Group Normalization, Nonlinearity} operations. As we can see, the model with scale=[0.5, 0.7, 1, 1.4, 2] and conv num=2 achieves the best performance. What is more, all the models with conv num=2 achieve better or at least comparable performance to those with conv num=2*, which indicates that coordinated representational power across all branches is important, even though the branches do not have the same receptive field size. 4.2.5 FUSION TYPE As two typical feature fusion operations, add and concatenation are the alternatives. We compare their effects in the models with position=after conv1 and those with position=after conv3. The results in Table 2 show that concatenation is slightly better than add. 4.2.6 MULTI-POSITION INSERTION According to the experimental results and analysis above, we carry out a multi-position insertion ablation study to see the effect of MSFM being inserted at multiple positions. All the experiments in this part share the following configuration: the resize scales of all branches are 0.5, 0.7, 1, 1.4, and 2, the squeeze ratios for stages 2, 3, and 4 are 16, 32, and 64, respectively, the number of {Conv2D, Group Normalization, Nonlinearity} operations on all branches is 2, and the combination method is add. The number of groups used in Group Normalization (Wu & He, 2018) is 8, 4, and 32 for after conv1, after conv2, and after conv3, respectively. As can be seen from the results in Table 4, the combination of after conv2 and after conv3 turns out to be the best configuration, which we use as the default when applying the MSFM to other frameworks. 4.3 RESULTING MODELS To test the generalizability of the proposed MSFM, we apply it to multiple frameworks. The results are shown in Table 4 and Table 5. For a fair comparison, all baseline models are re-trained. As we can see, there is a consistent improvement in the following models when the MSFM is applied, which demonstrates that the MSFM can be used as a drop-in layer for many existing object detection frameworks. Notice that when MSFM is applied to Faster R-CNN with a ResNet-50 FPN backbone (Ren et al., 2015), the performance of the model even surpasses that of the ResNet-101 FPN backbone. This indicates that adding the MSFM to existing frameworks is more efficient than simply adding more convolutional layers. We also train a Cascade R-CNN with a ResNet-101 FPN backbone for 24 epochs using multi-scale training and submit the results to the evaluation server. The result in Table 6 shows that it achieves a 45.7% mAP on the test-dev set. 5 CONCLUSION In this paper, we have presented a Multi-Scale Fusion Module (MSFM) that extracts both detail and semantic information from a single input at different scales within the same layer.
Ablation studies have demonstrated that MSFM brings a +2.5% mAP improvement with only 2.4M extra parameters on Faster R-CNN with a ResNet-50 FPN backbone on the COCO Object Detection minival set, outperforming the ResNet-101 FPN counterpart without the module, which gains only +2.0% mAP at the cost of 19.0M extra parameters. The best resulting model, a Cascade R-CNN with a ResNet-101 FPN backbone, achieved a 45.7% mAP on the COCO Object Detection test-dev set.
1. What is the main contribution of the paper, and how does it differ from previous approaches? 2. What are the strengths and weaknesses of the proposed method, particularly in comparison to recent works? 3. How does the paper's experimental design and ablation study support or limit its conclusions? 4. Are there any questions regarding the clarity and illustration of symbols used in the paper?
Review
Review This paper proposes to obtain multi-scale features by `resize -> convolution -> resize (inverse)'. Extensive experimental results on COCO validate the effectiveness of the proposed approach. Pros: Experimental results on the widely used benchmarking dataset. Comparison with recent approaches. Paper is easy to understand, simply because the proposed method is simple. Cons: Lack of novelty. The use of multi-scale features is not new [a, b]. The formulation in this paper is not very different from that in [b]. It has been used for lots of applications. In object detection, dilated conv in TridentNet is not the only approach using multiple branches. Approaches like [c, d] also used multi-scale-multi-layer features by resizing. [a] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Spatial pyramid pooling in deep convolutional networks for visual recognition." IEEE transactions on pattern analysis and machine intelligence 37, no. 9 (2015): 1904-1916. [b] Yang, Wei, Shuang Li, Wanli Ouyang, Hongsheng Li, and Xiaogang Wang. "Learning feature pyramids for human pose estimation." In proceedings of the IEEE international conference on computer vision, pp. 1281-1290. 2017. [c] Gidaris, Spyros, and Nikos Komodakis. "Object detection via a multi-region and semantic segmentation-aware cnn model." In Proceedings of the IEEE international conference on computer vision, pp. 1134-1142. 2015. [d] Zeng, Xingyu, Wanli Ouyang, Junjie Yan, Hongsheng Li, Tong Xiao, Kun Wang, Yu Liu et al. "Crafting gbd-net for object detection." IEEE transactions on pattern analysis and machine intelligence 40, no. 9 (2017): 2109-2123. The ablation study in the experimental results did not compare with existing works, like TridentNet and [c, d], to justify why another multi-scale approach is needed. Symbols are not illustrated well (Authors need not answer this in the rebuttal but need to revise in the revised version). S() is the squeeze module: What is the meaning of squeeze module? There are no common definitions of `makes the input x thinner', `combination function', `unsqueeze module'.
ICLR
Title Is a Caption Worth a Thousand Images? A Study on Representation Learning Abstract The development of CLIP (Radford et al., 2021) has sparked a debate on whether adding language supervision can yield vision models with more transferable representations than traditional image-only methods. Our work studies this question through a carefully controlled comparison of two approaches, in terms of their ability to learn representations that generalize to downstream classification tasks. We find that when the pre-training data meets certain criteria—it is sufficiently large and contains descriptive captions with low variability—image-only methods do not match CLIP's performance even when they are trained with more image data. However, contrary to what one might expect, there are practical settings in which these criteria are not met, wherein added supervision through captions is actually detrimental. Motivated by our findings, we devise simple data and algorithmic interventions to improve the transfer performance of CLIP-style models. 1 INTRODUCTION Image-based contrastive learning approaches have shown promise in building models that generalize beyond the data distributions they are trained on (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Caron et al., 2021). By leveraging large (unlabelled) data sources via self-supervised training, these models learn representations that transfer to diverse image classification tasks—more so than their supervised counterparts (Ericsson et al., 2021). Recently, Radford et al. (2021) showed that a different approach—contrastive learning with language supervision—can yield models (CLIP) with remarkable transfer capabilities. This development has garnered significant interest in the vision and natural language processing communities alike, leading to a debate on the utility of multi-modality in visual representation learning (Zhai et al., 2022; Devillers et al., 2021; Fang et al., 2022). Our work focuses on a specific question within this debate: Does added language supervision lead to more transferable visual representations than using images alone? It might seem like the answer to this question is obvious. After all, CLIP utilized caption information unavailable to traditional image-based approaches and showed substantial gains over them (Radford et al., 2021). However, CLIP is drastically different from these approaches in many ways, from training data to fine-grained implementation choices, which makes it difficult to isolate the contribution of language supervision (see Section 5). Further, recent studies on CLIP's zero-shot classification and robustness properties cast doubt on whether adding language supervision is always beneficial (Fang et al., 2022). Resolving the aforementioned debate thus requires a carefully controlled comparison of the two approaches in which the only difference is the form of supervision. Our contributions. We devise a methodology to assess the utility of language supervision in CLIP (Footnote 1: We use CLIP to refer to models trained with Radford et al. (2021)'s approach, not their pre-trained model.) from a visual representation learning standpoint. To do so, we recognize that CLIP pretraining and popular image-based methods share the same underlying primitive of contrastive learning. Specifically, Radford et al. (2021)'s approach is strikingly similar to SimCLR (Chen et al., 2020a). The only irreducible difference between them is whether supervision is provided to the
model via image augmentations or image-caption matching (see Figure 1)—which is precisely the quantity we want to study. Thus, we can disentangle the effect of language supervision on visual representations by comparing matched versions of SimCLR and CLIP (trained from scratch). Our focus, in particular, is on how well the learned representations transfer to varied image classification tasks. We find that the picture is nuanced and depends on three properties of the pre-training data: 1. When the scale of the dataset is sufficiently large, CLIP's visual representations indeed transfer better than their matched image-only SimCLR counterparts. In fact, this gap is not bridged by training SimCLR with more (image) data, suggesting that a caption can be worth more than any number of images. However, in the low-data regime, language supervision actually hurts model performance both in and out-of-distribution. 2. The descriptiveness (Kreiss et al., 2021) of captions—i.e., the extent to which they refer to what is contained in an image—directly determines how well CLIP models transfer. In fact, we find that a single descriptive image-caption pair (e.g., from COCO (Lin et al., 2014)) is worth five less descriptive, uncurated captions (e.g., from YFCC (Thomee et al., 2016)). 3. The variability of captions (e.g., stylistic or lexical) within a dataset can impair CLIP's performance. We find that a modification to standard CLIP training—performing text augmentations by sampling from a pool of captions for each image—can alleviate this drop. These properties have intertwined effects on CLIP's performance: e.g., dataset scale can, to some extent, compensate for less-descriptive and/or varied captions. Guided by our findings, we devise simple dataset interventions that can lead to more-transferable CLIP models: (i) filtering out low-quality captions with a text-based classifier, and (ii) applying data augmentation to captions by paraphrasing them using pre-trained language models. 2 AN APPLES-TO-APPLES COMPARISON Prior works have studied image-only and image-language pre-training methods in isolation (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Chen & He, 2021; Caron et al., 2021; Radford et al., 2021) and side-by-side (Desai & Johnson, 2021; Devillers et al., 2021; Fang et al., 2022). Yet, they provide incomplete (and often contradictory) answers to our motivating question of the value of language supervision relative to using images alone (Section 5). Crucially, this is due to various confounders such as: (i) bespoke algorithmic optimizations within the two methods, and (ii) differing pre-training datasets. In this section, we outline a series of steps that we take to mitigate these confounders and compare the two methods on equal footing. 2.1 FINDING COMMON GROUND Our approach for studying the value of language supervision is guided by the following insight: CLIP pre-training is strikingly similar to the popular image-only SimCLR method (Chen et al., 2020a)2. Both methods rely on the same algorithmic primitive of contrastive learning, which we illustrate in Figure 1. Specifically, the (CLIP/SimCLR) model is trained with a cross-entropy-based objective, which for a given pair $(x, x^+)$ of positive examples with associated negatives $N$ is: $\ell = -\log \frac{\exp(\mathrm{sim}(z, z^+)/\tau)}{\sum_{n \in N \cup \{z^+\}} \exp(\mathrm{sim}(z, z_n)/\tau)}$ (1), where $z = g(\phi(x))$ and $z^{+/n} = g'(\phi'(x^{+/n}))$, $\mathrm{sim}$ is cosine similarity, $\phi/\phi'$ are encoders, and $g/g'$ are projection heads.
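As a concrete illustration of Equation (1), here is a minimal PyTorch sketch of the shared contrastive objective; the function and variable names are ours, and for simplicity the negatives are just the other positives in the batch (as in CLIP), whereas SimCLR additionally uses the other anchors' augmented views as negatives.

```python
import torch
import torch.nn.functional as F


def contrastive_loss(z, z_pos, temperature=0.07):
    """Generic InfoNCE-style loss of Eq. (1).

    z:     [B, D] projected anchor embeddings, e.g. g(phi(x)).
    z_pos: [B, D] projected positive embeddings, e.g. g'(phi'(x+)) — a caption
           embedding in CLIP or a second augmented view in SimCLR.
    The i-th positive for z[i] is z_pos[i]; the remaining rows of z_pos act as in-batch negatives.
    """
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    logits = z @ z_pos.t() / temperature            # cosine similarities divided by tau
    targets = torch.arange(z.size(0), device=z.device)
    return F.cross_entropy(logits, targets)         # -log softmax probability of the matching pair


if __name__ == "__main__":
    # Example: a CLIP-style symmetric objective averages the image->text and text->image losses.
    img_emb, txt_emb = torch.randn(8, 128), torch.randn(8, 128)
    loss = 0.5 * (contrastive_loss(img_emb, txt_emb) + contrastive_loss(txt_emb, img_emb))
    print(float(loss))
```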
Positive examples x+ are obtained through a transformation of the image x, i.e., x+ ∼ T (x)—such as image augmentations (e.g., rotations or crops) in SimCLR and captions in CLIP. Observe that this difference in T (·) between CLIP and SimCLR corresponds exactly to whether the model is trained with language, which is the quantity we want to study. Thus, to isolate the role of added language supervision, we can compare the downstream performance of matched CLIP and SimCLR models. To this end, we must take some steps to make their implementations consistent: • Datasets: Typically, CLIP and SimCLR are trained on different datasets, as the former requires image-caption pairs, while the latter can leverage any image data. To control for the effect of the data distribution, we pre-train both models from scratch on the same data. • Architecture: We use the ResNet-50 (He et al., 2016) architecture as the image encoder for both methods, and a Transformer (Vaswani et al., 2017) as the text encoder in CLIP. We also extensively tune hyperparameters for both methods (Appendix A.3). • Augmentations: Both methods apply data augmentations to the image x itself at each training step. However, the augmentations used in SimCLR (resize, crop, flip, jitter, blur, grayscale) are far more sophisticated than those in CLIP (resize and crop). We remove this confounder by using SimCLR augmentations unless otherwise specified. • Transformation stochasticity: The two methods differ in how they obtain x+, not just due to the choice of T (x) but also the generative process itself. In SimCLR, x+ is a new random draw from T (x) in every batch, while for CLIP, it is a single fixed caption. Perfectly matching them requires training CLIP by sampling a fresh caption x+ for each image at each iteration. We will refer to this stochastic version of CLIP as CLIPS. Mismatches. Despite our efforts to match CLIP with SimCLR, some inconsistencies remain—partly due to their differing input modalities. In particular, CLIP (and CLIPS): (i) Processes T (x) using a text transformer rather than SimCLR's ResNet-50. (ii) Does not share weights between the encoders processing x and T (x) because they correspond to different modalities, unlike SimCLR. (iii) Uses a linear projection head g/g′ instead of SimCLR's MLP, which we allow as Radford et al. (2021) showed that this choice does not affect CLIP's performance. (iv) Only uses other examples in the batch from the same modality as negatives. Thus CLIP has half the number of negatives compared to SimCLR, which also uses transformed versions of other examples in the batch (i.e. both x̂ and x̂+) as negatives. We now assess how the representations learned by our matched CLIP and SimCLR models compare. In particular, we measure how well their representations transfer to the downstream tasks from Kornblith et al. (2019). Akin to Radford et al. (2021), we focus on the fixed-feature setting, where we freeze the weights of a given model and then train a linear probe using task data (see Appendix A). 2.2 A CASE STUDY We begin by comparing CLIP and SimCLR models trained on the MS-COCO dataset (Lin et al., 2014) (henceforth referred to as COCO), which contains ∼120K images with multi-object labels. Each image has five human-provided captions, collected post-hoc by Chen et al. (2015) using Mechanical Turk. Annotators were given detailed instructions on how to caption an image, such as to describe only the important parts of the image and not to use proper names.
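Before turning to the COCO results, here is a minimal sketch of the CLIPS sampling scheme described above, where a fresh caption is drawn for each image at every iteration; the class, record layout, and field names are illustrative assumptions rather than the authors' code.

```python
import random
from PIL import Image
from torch.utils.data import Dataset


class StochasticCaptionDataset(Dataset):
    """(image, caption) pairs where the caption is re-sampled every time the item is drawn (CLIP_S)."""

    def __init__(self, records, image_transform, tokenizer):
        # `records` is assumed to be a list of dicts: {"image_path": str, "captions": [str, ...]}.
        self.records = records
        self.image_transform = image_transform   # e.g. the SimCLR-style augmentation pipeline
        self.tokenizer = tokenizer

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        image = self.image_transform(Image.open(rec["image_path"]).convert("RGB"))
        caption = random.choice(rec["captions"])  # CLIP_S: draw one of the ~5 COCO captions
        return image, self.tokenizer(caption)     # vanilla CLIP would always use the same fixed caption
```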
We use COCO as our starting point for two reasons. (Footnote 2: Other image-based methods (He et al., 2020; Chen et al., 2020b; Chen & He, 2021; Caron et al., 2021) have optimizations that are not present in CLIP.) First, we can assess the utility of language supervision in the ideal setting where the captions are of fairly high quality due to the careful curation process. Second, we can approximate CLIPS3 by sampling from the available set of five captions per image. Captions (often) help on COCO. In Table 1, we compare various COCO pre-trained models (supervised, SimCLR, CLIP/CLIPS) in terms of the accuracy of a linear probe on: (i) COCO classification (in distribution), and (ii) transfer tasks. Note that to contrast image-only and image-language supervision, the "right" comparison is between SimCLR and CLIPS: they are matched (to the best of our abilities) in terms of dataset, architecture, augmentations, and stochasticity. We find that: 3 THE IMPACT OF PRE-TRAINING DATA Our analysis of COCO shows that language supervision can be beneficial over using images alone. That being said, the datasets that CLIP is typically trained on differ, both in scale and quality, from COCO. For instance, COCO captions were collected post-hoc under controlled settings, which is markedly different from the automatic scraping procedure used to gather data at scale. Thus, we shift our focus to two frequently-used (Ilharco et al., 2021) CLIP training datasets: ConceptualCaptions (Sharma et al., 2018) (CC) contains ∼3.3M images harvested from the web, with their ALT-text attributes as captions. The data was filtered for text quality—e.g., well-formed captions that mention at least one object found via the Google Cloud Vision API. Furthermore, all proper nouns in the captions were hypernymized (e.g., "Justin Timberlake" becomes "pop artist"). Yahoo Flickr Creative Commons (Thomee et al., 2016) (YFCC): This dataset has ∼99.2M images from Flickr, along with their posted titles as captions with no post-processing. (Footnote 3: We henceforth overload notation and use CLIPS to denote: (i) the idealized stochastic version of CLIP, which samples from infinite captions per image, and (ii) our approximation of it with a finite set of captions.) Do captions still help? We start by comparing the transfer performance of CLIP and SimCLR on 100K subsets of COCO/CC/YFCC in Figure 2(left). We observe that SimCLR's transfer capabilities do not vary much across pre-training datasets, while CLIP's performance is highly sensitive to them. With 100K samples from CC/YFCC, using CLIP is worse than image-only pre-training via SimCLR—unlike what we see for COCO. The sensitivity of CLIP to pre-training data. Inspecting dataset samples (Figure 3) yields a possible explanation for this sensitivity. The three datasets differ not just in scale and image diversity, but also the extent to which captions: (i) describe visually salient aspects of the image, and (ii) vary across images (e.g., in style and wording). For instance, COCO captions are homogeneous and descriptive, while YFCC ones vary and are often complementary to the image. We now study the effect these dataset properties—scale, descriptiveness, and variability—have on CLIP's performance. 3.1 SCALE MATTERS A major appeal of contrastive learning methods is that they can leverage the vast amounts of unlabeled data available on the Internet. Thus, it is natural to ask how different forms of contrastive supervision benefit from added pre-training data.
We may expect image-only methods to perform worse for smaller datasets as they are less likely to encounter (augmented) images which are similar. We might further expect image-language models to perform more favorably in this setting since they receive richer supervision. To test whether this is the case, we compare CLIP and SimCLR models trained on datasets of varying sizes: 10-100K samples for COCO, and 100K-2M for CC/YFCC. Our results in Figure 2(left) deviate from our earlier expectations. First, beyond a certain point, SimCLR's transfer performance improves only marginally with additional data. While surprising, similar effects have been noted previously (Tian et al., 2021; Cole et al., 2022), especially when the data is uncurated (e.g., YFCC) (Tian et al., 2021). Second, in the low-data regime (<50K/200K/500K for COCO/CC/YFCC), training with language actually hurts the models' transfer performance. In fact, (data) scale seems to be essential to benefit from language supervision. With sufficient data, CLIP outperforms SimCLR on all three datasets. This gap remains even if we train SimCLR with extra data, indicating that captions can be worth more than any number of images. 3.2 THE IMPORTANCE OF DESCRIPTIVE CAPTIONS Prior work in linguistics and accessibility has drawn a distinction between image "descriptions" and "captions" (Berger & Dibb, 2003; Chandler, 2007; Hodosh et al., 2013; Bernardi et al., 2016; van Miltenburg, 2020; Kreiss et al., 2021; Dognin et al., 2022; Hutchinson et al., 2022). In particular, Bernardi et al. (2016) define descriptions as texts that "verbalize what can be seen in the image, i.e., they refer to the objects, actions, and attributes depicted, mention the scene type, etc.". In contrast, Panofsky (1939) suggests that a typical caption "provides personal, cultural, or historical context for the image." This line of work suggests that COCO captions are more descriptive due to the decontextualization of the image and strict instructions provided to the annotators during the caption generation process (Kreiss et al., 2021). In contrast, Flickr captions (e.g., in CC/YFCC) tend to contain information that is complementary to the image (Alikhani et al., 2020), since people tend not to restate what can already be observed in the photographs they post (Hodosh et al., 2013). Now, to perform well on downstream classification tasks, we ideally want model representations that encode salient image objects. Recall that in contrastively-trained models, the learned representations are determined by the transformation T (x) (captions for CLIP). This suggests a hypothesis: pretraining CLIP with descriptive captions will yield more transferable (vision) representations. To test this, we need to quantify the descriptiveness of a caption. Since doing so precisely is infeasible, we approximate descriptiveness using a pre-trained caption-scoring model. Specifically, we leverage the BLIP model (Li et al., 2022), which has shown state-of-the-art performance on image-based text retrieval. We then measure the average score assigned by BLIP to dataset captions matching their corresponding images—see Figure 2(right). As expected based on our earlier subjective assessment as well as prior work (Hodosh et al., 2013; Kreiss et al., 2021), we indeed find that the caption descriptiveness of COCO > CC > YFCC (see Appendix B.1 for a discussion of relevant vs. noisy captions).
Furthermore, we see that the descriptiveness of captions in the pre-training data directly correlates with CLIP's transfer performance. In fact, a CLIP model trained on 100K descriptive image-caption pairs from COCO attains performance comparable to one trained on 2x and 5x more samples from CC and YFCC, respectively. To further corroborate our hypothesis, we train CLIP on CC and YFCC with "more descriptive" captions by re-captioning the images using BLIP (Li et al., 2022). Indeed, we find that CLIP trained on 100K CC/YFCC samples with BLIP captions no longer performs worse than its COCO counterpart (see Figure 2(right)). This indicates that CLIP's sensitivity to the pre-training corpus is not just an artifact of differing image distributions, but due to the presence (or absence) of descriptive captions. 3.3 THE EFFECT OF INTRA-DATASET VARIATIONS IN CAPTIONS Image captions (Figure 1) seem to vary in how they describe an object (e.g., "duffel van" or "car") and the parts of the image they focus on (e.g., discussing the "street" or "brick"). We now study how these lexical and focus variations in captions affect CLIP's ability to learn meaningful representations. A simple setting. As a starting point, we investigate this effect on the COCO dataset using synthetic captions—constructed using the available multi-object labels—whereby we can precisely control the intra-dataset caption variations. In an attempt to simulate the variations we observe in Figure 1, we design the captions to (not) be: (i) consistent: use a fixed term or random synonyms to describe an object across the dataset (lexical variations); and (ii) complete: mention all or a random subset of image objects (focus variations). (See Appendix A.6 for details and Appendix Figure 8 for examples.) Surprisingly, we find that a CLIP model trained with complete and consistent synthetic COCO captions outperforms a model trained on human-written captions (cf. row 1 in Figure 4(left) to row 3 in Table 1). However, dropping these two conditions causes the transfer performance of the model to drop significantly (cf. rows 1, 2, and 4 in Figure 4(left)). These findings suggest that variability in dataset captions can have an adverse effect on the resulting CLIP models. The effect of stochasticity. We now revisit our stochastic CLIP variant, CLIPS, in this simple setting. Intuitively, we might expect that sampling from a set of diverse captions per image—which cover possible lexical and stylistic variations—during training might alleviate the adverse effects of caption variability. Indeed, we find that for synthetic COCO captions, CLIPS is not as affected by caption inconsistency and/or incompleteness. The ∼2% improvement of CLIPS over CLIP here mirrors the 3.6% gain seen for human-provided captions (cf. Table 1). These findings suggest that one of the reasons why stochasticity significantly boosts CLIP's performance is its role in (caption) variance reduction. We also find that CLIPS transfers 2% better when trained on human-provided captions as opposed to synthetic ones (unlike CLIP). This indicates that human-written captions do contain useful information that is not present in object labels alone. However, extracting this signal is not straightforward, and may require incorporating multiple captions into CLIP training. Datasets in practice. We now attempt to characterize caption variability in real-world datasets.
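Before turning to the real datasets, here is a minimal sketch of how the consistency and completeness axes from the synthetic-caption setup above could be toggled when building captions from COCO-style object labels; the template and synonym table are illustrative assumptions, not the exact procedure of Appendix A.6.

```python
import random

# Illustrative synonym table; the real experiments would use a larger, curated one.
SYNONYMS = {"car": ["car", "automobile", "vehicle"],
            "person": ["person", "human", "pedestrian"]}


def synthetic_caption(object_labels, consistent=True, complete=True, rng=random):
    """Build a caption from multi-object labels, toggling lexical (consistent) and focus (complete) variation."""
    labels = list(object_labels)
    if not complete:                          # focus variation: mention only a random subset of objects
        k = rng.randint(1, len(labels))
        labels = rng.sample(labels, k)
    words = []
    for lab in labels:
        choices = SYNONYMS.get(lab, [lab])
        words.append(choices[0] if consistent else rng.choice(choices))  # lexical variation via synonyms
    return "A photo of " + ", ".join(words) + "."


print(synthetic_caption(["person", "car"], consistent=True, complete=True))
print(synthetic_caption(["person", "car"], consistent=False, complete=False))
```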
Inspired by prior work in natural language processing and linguistics (Appendix A.8), for a set of dataset captions, we measure: (i) the total number of unique n-grams (N=1-3) (Li et al., 2015; Fung et al., 2020) and (ii) the measure of textual lexical diversity (MTLD) (McCarthy & Jarvis, 2010). Along both these axes of variability, we see that COCO < CC < YFCC (Figure 4(right)). Thus, aside from the lower descriptiveness of YFCC (and to a lesser extent CC) captions, their variability could be the reason why the resulting CLIP models have worse transfer performance (Figure 2(left)). This also explains why scale is essential to benefit from language supervision on CC and YFCC. After all, CLIP would need to be trained on more captions to even encounter the same words twice. How many captions are enough? We saw above that "text data augmentations" via CLIPS could reduce the adverse impacts of caption variability. We now analyze how this effect scales with the number of available captions per image on the CC and YFCC datasets. Here, we use the BLIP captioning model to generate multiple captions per image via nucleus sampling (Holtzman et al., 2020). This procedure is intended to serve as a proxy for the manual caption annotation or automated scraping procedures that might be used for data collection in practice. We observe in Figure 5(left) that CLIPS improves as the number of available captions per image increases (plateauing around 10). However, scaling up the overall number of image-caption pairs appears to be far more effective than incorporating more captions per image (at least those obtained via BLIP) from the perspective of improving transfer performance (see Figure 5(right)). Note that the exact trade-offs and their costs are context-dependent and vary based on the exact procedure used for caption collection. 4 MAKING EXISTING CAPTIONS WORK So far, we identified three properties of the pre-training data that influence CLIP's transfer performance: (i) scale, (ii) caption descriptiveness, and (iii) caption variability. Our analysis shows that one way to improve CLIP's performance, especially on uncurated data sources, is to simply pre-train with more data. Alternatively, for a fixed data scale, we may be able to obtain better CLIP models if we improve what captions describe and how they describe an image. We now focus on the latter and put forth simple dataset interventions to improve the transfer capabilities of CLIP-style models. Data pre-processing: Given the importance of caption descriptiveness, we might consider preprocessing scraped data to select for samples with this property. The CC data collection procedure (Sharma et al., 2018) partially demonstrates the effectiveness of this approach, as pre-training CLIP on CC samples leads to better transfer performance than a comparable number of "raw" YFCC ones. However, due to its reliance on the Google Vision API, this procedure can be quite expensive, with costs scaling with the size of the scraped data. Recent works have taken a different approach, using pre-trained image-language models (like CLIP) to filter data (Schuhmann et al., 2021). However, since we are interested in building such models in the first place, we avoid taking this route. Instead, we focus on understanding how far we can get by simply discarding low-quality captions, agnostic to the images. We take inspiration from the filtering pipelines used to build large language models (Brown et al., 2020).
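As an aside before the data-filtering discussion continues, here is a minimal sketch of the unique n-gram count used above as one caption-variability measure (naive whitespace tokenization; MTLD is omitted); the function name is ours.

```python
def unique_ngrams(captions, max_n=3):
    """Count distinct 1- to max_n-grams across a list of captions (whitespace tokenization)."""
    counts = {n: set() for n in range(1, max_n + 1)}
    for caption in captions:
        tokens = caption.lower().split()
        for n in range(1, max_n + 1):
            for i in range(len(tokens) - n + 1):
                counts[n].add(tuple(tokens[i:i + n]))
    return {n: len(grams) for n, grams in counts.items()}


# Comparing corpora with this count reproduces the qualitative ordering COCO < CC < YFCC.
print(unique_ngrams(["a dog on a couch", "a dog sits on the couch"]))
```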
Here, raw Internet data is cleaned by selecting samples that are “similar” to known high-quality datasets (e.g., Wikipedia). Taking a similar approach, we train a linear classifier on a bag-of-n-grams sentence embeddings (Joulin et al., 2017) to distinguish validation set CC/YFCC captions from COCO ones. This classifier is then used to filter CC/YFCC, only retaining samples that are predicted as being COCO-like. This simple procedure does end up selecting for captions that are more focused on objects and their descriptions, as opposed to describing contextual properties such as dates or urls—see Appendix A.9. For a given pre-training data budget, we see moderate gains (∼ 2%) from using this heuristic to filter datasets—see Table 2 (left). Mitigating caption variability: As we saw in Section 3.3, models trained with CLIPS are less impacted by caption variability. However, typical image-captioning datasets (such as CC and YFCC) only have one caption per image. We thus devise a methodology to augment these captions by leveraging recent open-source large language models (Wang & Komatsuzaki, 2021). Concretely, we provide GPT-J with 4 (caption, paraphrase) pairs as in-context (Brown et al., 2020) examples. We then prompt it to paraphrase a given target caption. By sampling from GPT-J, we can obtain multiple (in our case, five) paraphrases for every such caption (examples in Appendix Figure 10). In Table 2 (right), we see that feeding these captions into CLIPS results in a considerable performance boost over CLIP (trained with a single caption/image). For instance, for COCO, CLIPS trained on our generated captions bridges more than half of the performance gap between vanilla CLIP and CLIPS trained with five human-provided captions. 5 RELATED WORK Representation learning. Building models with general representations that transfer to downstream tasks has been a long-standing goal in ML (Donahue et al., 2014; Razavian et al., 2014; Chatfield et al., 2014; Agrawal et al., 2014; Yosinski et al., 2014). Our work is in line with prior studies aimed at characterizing the effect of design choices made during training (Azizpour et al., 2015; Huh et al., 2016; Chu et al., 2016; Kornblith et al., 2019; Zhai et al., 2019; Locatello et al., 2020), e.g. model architecture, datasets and loss functions, on learned representations. The utility of language in vision. There is a long line of work on leveraging language to improve vision models (Quattoni et al., 2007; Srivastava & Salakhutdinov, 2012; Frome et al., 2013; Baltrušaitis et al., 2018; Guo et al., 2019). Recent studies have sought to investigate how integral language is to the performance of such multi-modal models. Fang et al. (2022) study a different property of CLIP—zero-shot robustness, rather than transfer learning—and show that it is comparable to that of a supervised classifier trained on the same YFCC images. Therefore, they conclude that data distribution is more important than language supervision. In concurrent work, (Nguyen et al., 2022) study the sensitivity of CLIP’s zero-shot robustness to the pre-training dataset. However, unlike our work, they do not: (i) contrast CLIP against image-only methods trained on the same corpora, and (ii) attempt to explain what properties of the data are responsible for CLIP’s sensitivity. Ruan et al. (2022) argue theoretically that the robustness of linear probes on CLIP’s representations stems from pretraining with a large and diverse set of images and domain-agnostic augmentations T (x). 
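Looping back to the caption filter described in Section 4 above, here is a minimal sketch of a COCO-likeness classifier; it uses scikit-learn bag-of-n-grams features in place of the fastText-style sentence embeddings of Joulin et al. (2017), and the toy captions and choice of logistic regression are our illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Train on held-out captions: label 1 = COCO-like (descriptive), 0 = CC/YFCC-like (toy examples).
coco_val = ["a man riding a horse on a beach", "two cats sleeping on a red sofa"]
yfcc_val = ["IMG_2034.JPG", "trip to portugal, day 3"]
texts = coco_val + yfcc_val
labels = [1] * len(coco_val) + [0] * len(yfcc_val)

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)

# Keep only scraped captions that the classifier predicts as COCO-like.
scraped = ["a dog catching a frisbee in a park", "dsc_0041 edited in lightroom"]
kept = [c for c in scraped if clf.predict([c])[0] == 1]
print(kept)
```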
Most similar to our work are the studies by (Desai & Johnson, 2021) and Devillers et al. (2021), which study the role of language supervision on transfer performance in the context of VirTex (a CLIP precursor) and CLIP respectively. Notably, the two works draw opposite conclusions as to the utility of language compared to purely image-based approaches. This difference stems from the fact that neither of the works attempt to directly control for algorithmic, architectural, and data-related confounders. Our work performs a substantially more controlled study on the effect of language supervision, allowing us to make more direct claims than these works. 6 DISCUSSION Our work takes a step towards resolving the debate as to whether multi-modality, and language in particular, can improve visual representation learning. A comparison of CLIP with a matched imageonly SimCLR model reveals that neither form of supervision (using images alone or coupled with language) is strictly better than the other. Indeed, there are practical regimes where CLIP’s performance cannot be matched using SimCLR with any amount of image data and others where language supervision is harmful. This is a direct consequence of CLIP’s sensitivity to its pre-training data, especially its scale, descriptiveness, and variability of the captions. Through our analysis, we also discovered algorithmic improvements (CLIPS) and dataset modifications (filtering and augmenting captions) to better take advantage of language supervision. Limitations. Our exploration allows us to quantify the utility of language supervision (over using images alone) in a specific setting: transfer learning via probing on certain object recognition tasks (Kornblith et al., 2019). We view expanding the scope of our analysis as a direction for future work. Further, despite the significant steps we took to control the differences between CLIP and SimCLR, there are still some inconsistencies that have not been accounted for (discussed in Section 2). Nevertheless, the differences between our and previous results (e.g, Desai & Johnson, 2021; Devillers et al., 2021) suggest that we successfully pinned down some crucial confounders (architecture, augmentations, stochasticity, datasets, hyperparameters). ETHICS STATEMENT Below, we discuss certain ethical concerns pertaining to our work: • Although we rely on existing open source vision/multi-modal datasets for our analysis, prior work has raised concerns about some of these (or other similarly-sourced ones) being biased (Stock & Cisse, 2017; Yang et al., 2020; Birhane et al., 2021; Paullada et al., 2021) and violating privacy (Prabhu & Birhane, 2020; Yang et al., 2022). • Our focus is on understanding the extent to which CLIP’s representations are influenced by what the captions they are trained on describe. However, we sidestep whether or not this is always desirable. After all, recent studies (Birhane et al., 2021) show that vision-linguistic datasets have various biases and stereotypes, which we might not want our models to learn. • In Section 4, we use large language models (in particular, GPT-J) to augment dataset captions via in-context learning. These models however are known to have their own limitations that might percolate into the generated captions. REPRODUCIBILITY STATEMENT Datasets: All the pre-training/transfer learning datasets we use are open-source. 
In the supplementary material, we include certain variants of the COCO/CC/YFCC datasets we created as CSV files: namely synthetic COCO captions, filtered CC/YFCC samples, and GPT-J paraphrased captions. Code and hyperparameters: We discuss implementation details including hyperparameter settings in Appendix A. We also include the code for training models in the supplementary material. ACKNOWLEDGEMENTS We are grateful to Niladri Chatterji, Elisa Kreiss, Nimit Sohoni and Dimitris Tsipras for helpful discussions. SS is supported by Open Philanthropy, YD by a Knights-Hennessy Scholarship, and RT by the NSF GRFP under Grant No. DGE 1656518. We also thank Stanford HAI for a Google Cloud credits grant.
1. What is the main contribution of the paper regarding representational transfer between unsupervised image-only models and unsupervised vision and language models? 2. What are the strengths and weaknesses of the paper's findings and claims, particularly regarding its comparison between single modality and multimodal models? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding its empirical rigor and lack of citations from relevant research areas? 4. Do you have any questions or concerns about the paper's methodology, experimental settings, and results, such as the choice of training data, architectures, transformations, and evaluation measures?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper presents a controlled empirical study that compares the capability of representational transfer between unsupervised image-only model (mostly SimCLR) and unsupervised vision and language models (mostly CLIP). The central question that the paper tries to answer is whether unsupervised vision-and-language models that exploit “language information” are richer than image only models. Controls in the paper are related to experimental settings such as training data, training architectures, transformations. The primary observations emphasise that vision-and-language models are richer with caveats on the type of vision and language parallel data and scale. Strengths And Weaknesses Strength: The paper contains an interesting exposition between single modality model v/s multimodal model and the results confirm that multimodal models are indeed better at representational transfer (in general). Weakness: The claim in the paper about “transfer of representations” seems general (and tad too strong) however this is only evaluated over vision heavy transfer learning benchmarks. It is not clear if the behaviour would be similar for transfer learning on vision and language related benchmarks. The salient observations (relating to language - especially descriptiveness and variability) in the paper are perhaps fairly trivial considering the vast amount of similar work from the area of vision and language and semiotics [1, 2, 3, 4, 5 inter alia]. In general, the paper lacks empirical rigour, the paper contains a list of experimental interventions, but none are clear (expanding as the next point). [1] Automatic description generation from images: A survey of models, datasets, and evaluation measures. Bernardi et al. 2016 [2] On the use of human reference data for evaluating automatic image descriptions. van Miltenburg. 2020 [3] Ways of seeing. Berger. 2008 [4] Semiotics: the basics. Chandler. 2007 [5] Underspecification in Scene Description-to-Depiction Tasks. Hutchinson et al. 2022 Clarity, Quality, Novelty And Reproducibility Section 3.2 lacks relevant citations from language and vision research where similar observations have been repeatedly made. Flickr captions are varied (I am also unable to relate the section to Grice, 1975, perhaps it should be to [6]). The experimental settings and details in the paper are insufficient - such as: Section 3.3, it is not clear how the captions are designed and further: How should one quantify consistency? How should one measure completeness - is this based on ground truth COCO objects for a particular image? Without these details, it just seems arbitrary. Section 3.3 on how many captions are enough - what is “(high-quality)” captions - are the captions always factual? Nucleus sampling doesn’t always generate factual captions. Perhaps the observations due to plateauing is due to irrelevant captions? [6] The construction of social reality. Searl 1995
ICLR
Title Is a Caption Worth a Thousand Images? A Study on Representation Learning Abstract The development of CLIP (Radford et al., 2021) has sparked a debate on whether adding language supervision can yield vision models with more transferable representations than traditional image-only methods. Our work studies this question through a carefully controlled comparison of two approaches, in terms of their ability to learn representations that generalize to downstream classification tasks. We find that when the pre-training data meets certain criteria—it is sufficiently large and contains descriptive captions with low variability—-image-only methods do not match CLIP’s performance even when they are trained with more image data. However, contrary to what one might expect, there are practical settings in which these criteria are not met, wherein added supervision through captions is actually detrimental. Motivated by our findings, we devise simple data and algorithmic interventions to improve the transfer performance of CLIP-style models. 1 INTRODUCTION Image-based contrastive learning approaches have shown promise in building models that generalize beyond the data distributions they are trained on (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Caron et al., 2021). By leveraging large (unlabelled) data sources via self-supervised training, these models learn representations that transfer to diverse image classification tasks—more so than their supervised counterparts (Ericsson et al., 2021). Recently, Radford et al. (2021) showed that a different approach—contrastive learning with language supervision—can yield models (CLIP) with remarkable transfer capabilities. This development has garnered significant interest in the vision and natural language processing communities alike, leading to a debate on the utility of multi-modality in visual representation learning (Zhai et al., 2022; Devillers et al., 2021; Fang et al., 2022). Our work focuses on a specific question within this debate: Does added language supervision lead to more transferable visual representations than using images alone? It might seem like the answer to this question is obvious. After all, CLIP utilized caption information unavailable to traditional image-based approaches and showed substantial gains over them (Radford et al., 2021). However, CLIP is drastically different from these approaches in many ways, from training data to fine-grained implementation choices, which makes it difficult to isolate the contribution of language supervision (see Section 5). Further, recent studies on CLIP’s zero-shot classification and robustness properties cast doubt on whether adding language supervision is always beneficial (Fang et al., 2022). Resolving the aforementioned debate thus requires a carefully controlled comparison of the two approaches in which the only difference is the form of supervision. Our contributions. We devise a methodology to assess the utility of language supervision in CLIP1 from a visual representation learning standpoint. To do so, we recognize that CLIP pretraining and popular image-based methods share the same underlying primitive of contrastive learning. Specifically, Radford et al. (2021)’s approach is strikingly similar to SimCLR (Chen et al., 2020a). The only irreducible difference between them is whether supervision is provided to the 1We use CLIP to refer to models trained with Radford et al. (2021)’s approach, not their pre-trained model. 
model via image augmentations or image-caption matching (see Figure 1)—which is precisely the quantity we want to study. Thus, we can disentangle the effect of language supervision on visual representations by comparing matched versions of SimCLR and CLIP (trained from scratch). Our focus, in particular, is on how well the learned representations transfer to varied image classification tasks. We find that the picture is nuanced and depends on three properties of the pre-training data: 1. When the scale of the dataset is sufficiently large, CLIP’s visual representations indeed transfer better than their matched image-only SimCLR counterparts. In fact, this gap is not bridged by training SimCLR with more (image) data, suggesting that a caption can be worth more than any number of images. However, in the low-data regime, language supervision actually hurts model performance both in and out-of-distribution. 2. The descriptiveness (Kreiss et al., 2021) of captions—i.e., the extent to which they refer to what is contained in an image—directly determines how well CLIP models transfer. In fact, we find that a single descriptive image-caption pair (e.g., from COCO (Lin et al., 2014)) is worth five less descriptive, uncurated captions (e.g., from YFCC (Thomee et al., 2016)). 3. The variability of captions (e.g. stylistic or lexical) within a dataset can impair CLIP’s performance. We find that a modification to standard CLIP training—performing text augmentations by sampling from a pool of captions for each image—can alleviate this drop. These properties have inter-twined effects on CLIP’s performance: e.g., dataset scale can, to some extent, compensate for less-descriptive and/or varied captions. Guided by our findings, we devise simple datasets interventions that can lead to more-transferrable CLIP models: (i) filtering out lowquality captions with a text-based classifier, and (ii) applying data augmentation to captions by paraphrasing them using pre-trained language models. 2 AN APPLES-TO-APPLES COMPARISON Prior works have studied image-only and image-language pre-training methods in isolation (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b;b; Chen & He, 2021; Caron et al., 2021; Radford et al., 2021) and side-by-side (Desai & Johnson, 2021; Devillers et al., 2021; Fang et al., 2022). Yet, they provide incomplete (and often contradictory) answers to our motivating question of the value of language supervision relative to using images alone (Section 5). Crucially, this is due to various confounders such as: (i) bespoke algorithmic optimizations within the two methods, and (ii) differing pre-training datasets. In this section, we outline a series of steps that we take to mitigate these confounders and compare the two methods on equal footing. 2.1 FINDING COMMON GROUND Our approach for studying the value of language supervision is guided by the following insight: CLIP pre-training is strikingly similar to the popular image-only SimCLR method (Chen et al., 2020a)2. Both methods rely on the same algorithmic primitive of contrastive learning, which we illustrate in Figure 1. Specifically, the (CLIP/SimCLR) model is trained cross-entropy based objective, which for a given pair (x, x+) of positive examples with with associated negatives N is: ℓ = − log exp(sim(z, z+)/τ)∑ n∈N∪{z+} exp(sim(z, zn)/τ) , where z = g(ϕ(x)) and z+/n = g′(ϕ′(x+/n)), (1) sim is cosine similarity, ϕ/ϕ′ are encoders, and g/g′ are projection heads. 
Positive examples x+ are obtained through a transformation of the image x, i.e., x+ ∼ T (x)—such as image augmentations (e.g., rotations or crops) in SimCLR and captions in CLIP. Observe that this difference in T (·) between CLIP and SimCLR corresponds exactly to whether the model is trained with language, which is the quantity we want to study. Thus, to isolate the role of added language supervision, we can compare the downstream performance of matched CLIP and SimCLR models. To this end, we must take some steps to make their implementatations consistent: • Datasets: Typically, CLIP and SimCLR are trained on different datasets, as the former requires image-caption pairs, while the latter can leverage any image data. To control for the effect of the data distribution, we pre-train both models from scratch on the same data. • Architecture: We use the ResNet-50 (He et al., 2016) architecture as the image encoder for both methods, and a Transformer (Vaswani et al., 2017) as the text encoder in CLIP. We also extensively tune hyperparameters for both methods (Appendix A.3). • Augmentations: Both methods apply data augmentations to the image x itself at each training step. However, the augmentations used in SimCLR (resize, crop, flip, jitter, blur, grayscale) are far more sophisticated than those in CLIP (resize and crop). We remove this confounder by using SimCLR augmentations unless otherwise specified. • Transformation stochasticity: The two methods differ in how they obtain x+, not just due to the choice of T (x) but also the generative process itself. In SimCLR , x+ is a new random draw from T (x) in every batch, while for CLIP, it is a single fixed caption. Perfectly matching them requires training CLIP by sampling a fresh caption x+ for each image at each iteration. We will refer to this stochastic version of CLIP as CLIPS. Mismatches. Despite our efforts to match CLIP with SimCLR, some inconsistencies remain– partly due to their differing input modalities. In particular, CLIP (and CLIPS): (i) Processes T (x) using a text transformer rather than SimCLR’s ResNet-50. (ii) Does not share weights between the encoders processing x and T (x) because they corre- spond to different modalities, unlike SimCLR. (iii) Uses a linear projection head g/g′ instead of SimCLR’s MLP, which we allow as Radford et al. (2021) showed that this choice does not affect CLIP’s performance. (iv) Only uses other examples in the batch from the same modality as negatives. Thus CLIP has half the number of negatives compared to SimCLR, which also uses transformed versions of other examples in the batch (i.e. both x̂ and x̂+) as negatives. We now assess how the representations learned by our matched CLIP and SimCLR models compare. In particular, we measure how well their representations transfer to the downstream tasks from Kornblith et al. (2019). Akin to (Radford et al., 2021), we focus on the fixed-feature setting, where we freeze the weights of a given model and then train a linear probe using task data (see Appendix A). 2.2 A CASE STUDY We begin by comparing CLIP and SimCLR models trained on the MS-COCO dataset (Lin et al., 2014) (henceforth referred to as COCO), which contains ∼120K images with multi-object labels. Each image has five human-provided captions, collected post-hoc by Chen et al. (2015) using Mechanical Turk. Annotators were given detailed instructions on how to caption an image such as to describe only the important parts of the image and not to use proper names. 
We use COCO as our 2Other image-based methods (He et al., 2020; Chen et al., 2020b; Chen & He, 2021; Caron et al., 2021) have optimizations that are not present in CLIP. starting point for two reasons. First, we can assess the utility of language supervision in the ideal setting where the captions are of fairly high quality due to the careful curation process. Second, we can approximate CLIPS3 by sampling from the available set of five captions per image. Captions (often) help on COCO. In Table 1, we compare various COCO pre-trained models (supervised, SimCLR, CLIP/CLIPS) in terms of the accuracy of a linear probe on: (i) COCO classification (in distribution), and (ii) transfer tasks. Note that to contrast image-only and image-language supervision, the “right” comparison is between SimCLR and CLIPS: they are matched (to the best of our abilities) in terms of dataset, architecture, augmentations and stochasticity. We find that: 3 THE IMPACT OF PRE-TRAINING DATA Our analysis of COCO shows that language supervision can be beneficial over using images alone. That being said, the datasets that CLIP is typically trained on differ, both in scale and quality, from COCO. For instance, COCO captions were collected post-hoc under controlled settings, which is markedly different from the automatic scraping procedure used to gather data at scale. Thus, we shift our focus to two frequently-used (Ilharco et al., 2021) CLIP training datasets: ConceptualCaptions (Sharma et al., 2018) (CC) contains ∼3.3M images harvested from web, with their ALT-text attributes as captions. The data was filtered for text quality—e.g., well-formed captions that mention at least one object found via the Google Cloud Vision API. Furthermore, all proper nouns in the captions were hypernymized (e.g., ”Justin Timberlake” becomes ”pop artist”). Yahoo Flickr Creative Commons (Thomee et al., 2016) (YFCC): This dataset has ∼ 99.2M images from Flickr, along with their posted titles as captions with no post-processing. 3We henceforth overload notation and use CLIPS to denote: (i) the idealized stochastic version of CLIP, which samples from infinite captions per image, and (ii) our approximation of it with a finite set of captions. Do captions still help? We start by comparing the transfer performance of CLIP and SimCLR on 100K subsets of COCO/CC/YFCC in Figure 2(left). We observe that SimCLR’s transfer capabilities do not vary much across pre-training datasets, while CLIP’s performance is highly sensitive to them. With 100K samples from CC/YFCC, using CLIP is worse than image-only pre-training via SimCLR—unlike what we see for COCO. The sensitivity of CLIP to pre-training data. Inspecting dataset samples (Figure 3) yields a possible explanation for this sensitivity. The three datasets differ not just in scale and image diversity, but also the extent to which captions: (i) describe visually salient aspects of the image, and (ii) vary across images (e.g., in style and wording). For instance, COCO captions are homogenous and descriptive, while YFCC ones vary and are often complementary to the image. We now study the effect these dataset properties—scale, descriptiveness, and variability—have on CLIP’s performance. 3.1 SCALE MATTERS A major appeal of contrastive learning methods is that they can leverage the vast amounts of unlabeled data available on the Internet. Thus, it is natural to ask how different forms of contrastive supervision benefit from added pre-training data. 
We may expect image-only methods to perform worse for smaller datasets as they are less likely to encounter (augmented) images which are similar. We might further expect image-language models to perform more favorably in this setting since they receive richer supervision. To test whether this is the case, we compare CLIP and SimCLR models trained on datasets of varying sizes: 10-100K samples for COCO, and 100K-2M for CC/YFCC. Our results in Figure 2(left) deviate from our earlier expectations. First, beyond a certain point, SimCLR’s transfer performance improves only marginally with additional data. While surprising, similar effects have been noted previously (Tian et al., 2021; Cole et al., 2022), especially when the data is uncurated (e.g., YFCC) (Tian et al., 2021). Second, in the low-data regime (<50K/200K/500K for COCO/CC/YFCC), training with language actually hurts the models’ transfer performance. In fact, (data) scale seems to be essential to benefit from language supervision. With sufficient data, CLIP outperforms SimCLR on all three datasets. This gap remains even if we train SimCLR with extra data, indicating that captions can be worth more than any number of images. 3.2 THE IMPORTANCE OF DESCRIPTIVE CAPTIONS Prior work in linguistics and accessibility has drawn a distinction between image “descriptions” and “captions” (Berger & Dibb, 2003; Chandler, 2007; Hodosh et al., 2013; Bernardi et al., 2016; van Miltenburg, 2020; Kreiss et al., 2021; Dognin et al., 2022; Hutchinson et al., 2022). In particular, Bernardi et al. (2016) define descriptions as texts that “verbalize what can be seen in the image, i.e., they refer to the objects, actions, and attributes depicted, mention the scene type, etc.”. In contrast, Panofsky (1939) suggest that a typical caption “provides personal, cultural, or historical context for the image.” This line of work suggests that COCO captions are more descriptive due to the decontextualization of the image and strict instructions provided to the annotators during the caption generation process (Kreiss et al., 2021). In contrast, Flickr captions (e.g., in CC/YFCC) tend to contain information that is complementary to the image Alikhani et al. (2020) since people tend not to not restate what can already be observed in the photographs they post (Hodosh et al., 2013). Now to perform well on downstream classification tasks, we ideally want model representations that encode salient image objects. Recall that in contrastively-trained models, the learned representations are determined by the transformation T (x) (captions for CLIP). This suggests a hypothesis: pretraining CLIP with descriptive captions will yield more transferrable (vision) representations. To test this, we need to quantify the descriptiveness of a caption. Since doing so precisely is infeasible, we approximate descriptiveness using a pre-trained caption-scoring model. Specifically, we leverage the BLIP model (Li et al., 2022) which has shown state-of-the-art performance on image-based text retrieval. We then measure the average score assigned by BLIP to dataset captions matching their corresponding images—see Figure 2(right). As expected based on our earlier subjective assessment as well as prior work (Hodosh et al., 2013; Kreiss et al., 2021), we indeed find that the caption descriptiveness of COCO > CC > YFCC (see Appendix B.1 for a discussion of relevant vs. noisy captions.) 
Furthermore, we see that the descriptiveness of captions in the pre-training data directly correlates with CLIP’s transfer performance. In fact, a CLIP model trained on 100K descriptive image-caption pairs from COCO attains performance comparable to one trained on 2x and 5x more samples from CC and YFCC, respectively. To further corroborate our hypothesis, we train CLIP on CC and YFCC with “more descriptive” captions by re-captioning the images using BLIP (Li et al., 2022). Indeed, we find that CLIP trained on 100K CC/YFCC samples with BLIP captions no longer performs worse than its COCO counterpart (see Figure 2(right)). This indicates that CLIP’s sensitivity to the pre-training corpus is not just an artifact of differing image distributions, but due to the presence (or absence) of descriptive captions. 3.3 THE EFFECT OF INTRA-DATASET VARIATIONS IN CAPTIONS Image captions (Figure 1) seem to vary in how they describe an object (e.g., “duffel van” or “car”) and the parts of the image they focus on (e.g., discussing the “street” or “brick”). We now study how these lexical and focus variations in captions affect CLIP’s ability to learn meaningful representations. A simple setting. As a starting point, we investigate this effect on the COCO dataset using synthetic captions—constructed using the available multi-object labels—whereby we can precisely control the intra-dataset caption variations. In an attempt to simulate the variations we observe in Figure 1, we design the captions to (not) be: (i) consistent: use a fixed term or random synonyms to describe an object across the dataset (lexical variations); and (ii) complete: mention all or a random subset of image objects (focus variations). (See Appendix A.6 for details and Appendix Figure 8 for examples.) Surprisingly, we find that a CLIP model trained with complete and consistent synthetic COCO captions outperforms a model trained on human-written captions (cf. row 1 in Figure 4(left) to row 3 in Table 1). However, dropping these two conditions causes the transfer performance of the model to drop significantly (cf. rows 1, 2, and 4 in Figure 4(left)). These findings suggest that variability in dataset captions can have an adverse effect on the resulting CLIP models. The effect of stochasticity. We now revisit our stochastic CLIP variant, CLIPS, in this simple setting. Intuitively, we might expect that sampling from a set of diverse captions per image—which covers possible lexical and stylistic variations—during training might alleviate the adverse effects of caption variability. Indeed, we find that for synthetic COCO captions, CLIPS is not as affected by caption inconsistency and/or incompleteness. The ∼2% improvement of CLIPS over CLIP here mirrors the 3.6% gain seen for human-provided captions (cf. Table 1). These findings suggest that one of the reasons why stochasticity significantly boosts CLIP’s performance is its role in (caption) variance reduction. We also find that CLIPS transfers 2% better when trained on human-provided captions as opposed to synthetic ones (unlike CLIP). This indicates that human-written captions do contain useful information that is not present in object labels alone. However, extracting this signal is not straightforward, and may require incorporating multiple captions into CLIP training. Datasets in practice. We now attempt to characterize caption variability in real-world datasets.
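Before turning to real datasets, the synthetic-caption construction used in the controlled COCO setting above can be sketched as follows. The caption template and synonym sets here are illustrative stand-ins (the paper's exact construction is in its Appendix A.6), but the consistent/complete switches mirror the two conditions described above.

```python
import random

# Illustrative synonym sets; the paper derives its synonyms differently (Appendix A.6).
SYNONYMS = {
    "car": ["car", "automobile", "vehicle"],
    "person": ["person", "man", "pedestrian"],
    "dog": ["dog", "puppy", "canine"],
}


def synthetic_caption(objects, consistent=True, complete=True, rng=random):
    """Build a caption from an image's multi-object labels.

    consistent=False -> pick a random synonym per object (lexical variation)
    complete=False   -> mention only a random subset of objects (focus variation)
    """
    mentioned = list(objects) if complete else rng.sample(objects, rng.randint(1, len(objects)))
    terms = [obj if consistent else rng.choice(SYNONYMS.get(obj, [obj])) for obj in mentioned]
    return "A photo of " + ", ".join(terms) + "."


print(synthetic_caption(["person", "dog", "car"]))                      # consistent & complete
print(synthetic_caption(["person", "dog", "car"], consistent=False, complete=False))
```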
Inspired by prior work in natural language processing and linguistics (Appendix A.8), for a set of dataset captions, we measure: (i) the total number of unique n-grams (N=1-3) (Li et al., 2015; Fung et al., 2020) and (ii) the measure of textual lexical diversity (MTLD) (McCarthy & Jarvis, 2010). Along both these axes of variability, we see that COCO < CC < YFCC (Figure 4(right)). Thus, aside from the lower descriptiveness of YFCC (and to a lesser extent CC) captions, their variability could be the reason why the resulting CLIP models have worse transfer performance (Figure 2(left)). This also explains why scale is essential to benefit from language supervision on CC and YFCC. After all, CLIP would need to be trained on more captions to even encounter the same words twice. How many captions are enough? We saw above that “text data augmentations” via CLIPS could reduce the adverse impacts of caption variability. We now analyze how this effect scales with the number of available captions per image on the CC and YFCC datasets. Here, we use the BLIP captioning model to generate multiple captions per image via nucleus sampling (Holtzman et al., 2020). This procedure is intended to serve as a proxy for the manual caption annotation or automated scraping procedures that might be used for data collection in practice. We observe in Figure 5(left) that CLIPS improves as the number of available captions per image increases (plateauing around 10). However, scaling up the overall number of image-caption pairs appears to be far more effective than incorporating more captions per image (at least those obtained via BLIP) from the perspective of improving transfer performance (see Figure 5(right)). Note that the exact trade-offs and their costs are context-dependent and vary based on the exact procedure used for caption collection. 4 MAKING EXISTING CAPTIONS WORK So far, we have identified three properties of the pre-training data that influence CLIP’s transfer performance: (i) scale, (ii) caption descriptiveness, and (iii) caption variability. Our analysis shows that one way to improve CLIP’s performance, especially on uncurated data sources, is to simply pre-train with more data. Alternatively, for a fixed data scale, we may be able to obtain better CLIP models if we improve what captions describe and how they describe an image. We now focus on the latter and put forth simple dataset interventions to improve the transfer capabilities of CLIP-style models. Data pre-processing: Given the importance of caption descriptiveness, we might consider preprocessing scraped data to select for samples with this property. The CC data collection procedure (Sharma et al., 2018) partially demonstrates the effectiveness of this approach, as pre-training CLIP on CC samples leads to better transfer performance than a comparable number of “raw” YFCC ones. However, due to its reliance on the Google Vision API, this procedure can be quite expensive, with costs scaling with the size of the scraped data. Recent works have taken a different approach, using pre-trained image-language models (like CLIP) to filter data (Schuhmann et al., 2021). However, since we are interested in building such models in the first place, we avoid taking this route. Instead, we focus on understanding how far we can get by simply discarding low-quality captions, agnostic to the images. We take inspiration from the filtering pipelines used to build large language models (Brown et al., 2020).
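As an aside, the two caption-variability measures used above can be sketched in a few lines. `unique_ngrams` counts distinct 1- to 3-grams across a caption set, and `mtld` is a simplified single-pass implementation of the measure of textual lexical diversity (the published MTLD averages forward and backward passes; the 0.72 type-token-ratio threshold follows McCarthy & Jarvis, 2010). Whitespace tokenization is an assumption of this sketch.

```python
def unique_ngrams(captions, max_n=3):
    """Count distinct n-grams (n = 1..max_n) over a set of captions."""
    grams = set()
    for cap in captions:
        toks = cap.lower().split()
        for n in range(1, max_n + 1):
            grams.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return len(grams)


def mtld(tokens, ttr_threshold=0.72):
    """Simplified single-pass MTLD: the longer the text sustains a high type-token
    ratio before hitting the threshold, the fewer the factors and the higher the score
    (i.e., higher lexical diversity)."""
    factors, types, count = 0.0, set(), 0
    for tok in tokens:
        types.add(tok)
        count += 1
        if len(types) / count <= ttr_threshold:
            factors += 1.0
            types, count = set(), 0
    if count:  # partial credit for the unfinished final segment
        ttr = len(types) / count
        factors += (1.0 - ttr) / (1.0 - ttr_threshold)
    return len(tokens) / factors if factors else float(len(tokens))


captions = ["a dog sits on a couch", "two dogs play fetch in a sunny park"]
print(unique_ngrams(captions), mtld(" ".join(captions).split()))
```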
In these pipelines, raw Internet data is cleaned by selecting samples that are “similar” to known high-quality datasets (e.g., Wikipedia). Taking a similar approach, we train a linear classifier on bag-of-n-grams sentence embeddings (Joulin et al., 2017) to distinguish validation-set CC/YFCC captions from COCO ones. This classifier is then used to filter CC/YFCC, only retaining samples that are predicted as being COCO-like (sketched in code below). This simple procedure does end up selecting for captions that are more focused on objects and their descriptions, as opposed to describing contextual properties such as dates or URLs—see Appendix A.9. For a given pre-training data budget, we see moderate gains (∼ 2%) from using this heuristic to filter datasets—see Table 2 (left). Mitigating caption variability: As we saw in Section 3.3, models trained with CLIPS are less impacted by caption variability. However, typical image-captioning datasets (such as CC and YFCC) only have one caption per image. We thus devise a methodology to augment these captions by leveraging recent open-source large language models (Wang & Komatsuzaki, 2021). Concretely, we provide GPT-J with 4 (caption, paraphrase) pairs as in-context (Brown et al., 2020) examples. We then prompt it to paraphrase a given target caption. By sampling from GPT-J, we can obtain multiple (in our case, five) paraphrases for every such caption (examples in Appendix Figure 10). In Table 2 (right), we see that feeding these captions into CLIPS results in a considerable performance boost over CLIP (trained with a single caption/image). For instance, for COCO, CLIPS trained on our generated captions bridges more than half of the performance gap between vanilla CLIP and CLIPS trained with five human-provided captions. 5 RELATED WORK Representation learning. Building models with general representations that transfer to downstream tasks has been a long-standing goal in ML (Donahue et al., 2014; Razavian et al., 2014; Chatfield et al., 2014; Agrawal et al., 2014; Yosinski et al., 2014). Our work is in line with prior studies aimed at characterizing the effect of design choices made during training (Azizpour et al., 2015; Huh et al., 2016; Chu et al., 2016; Kornblith et al., 2019; Zhai et al., 2019; Locatello et al., 2020)—e.g., model architecture, datasets, and loss functions—on learned representations. The utility of language in vision. There is a long line of work on leveraging language to improve vision models (Quattoni et al., 2007; Srivastava & Salakhutdinov, 2012; Frome et al., 2013; Baltrušaitis et al., 2018; Guo et al., 2019). Recent studies have sought to investigate how integral language is to the performance of such multi-modal models. Fang et al. (2022) study a different property of CLIP—zero-shot robustness, rather than transfer learning—and show that it is comparable to that of a supervised classifier trained on the same YFCC images. Therefore, they conclude that data distribution is more important than language supervision. In concurrent work, Nguyen et al. (2022) study the sensitivity of CLIP’s zero-shot robustness to the pre-training dataset. However, unlike our work, they do not: (i) contrast CLIP against image-only methods trained on the same corpora, and (ii) attempt to explain what properties of the data are responsible for CLIP’s sensitivity. Ruan et al. (2022) argue theoretically that the robustness of linear probes on CLIP’s representations stems from pretraining with a large and diverse set of images and domain-agnostic augmentations T (x).
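Returning to the data pre-processing intervention above, here is a minimal sketch of the caption-only filter. The paper trains a fastText-style linear classifier on bag-of-n-grams sentence embeddings (Joulin et al., 2017); a TF-IDF bag of 1-2-grams with scikit-learn's logistic regression is used below as a stand-in, and the 0.5 decision threshold is an assumption of this sketch.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


def fit_coco_like_filter(coco_captions, web_captions):
    """Caption-only classifier: label 1 = COCO-like, label 0 = CC/YFCC-like."""
    texts = list(coco_captions) + list(web_captions)
    labels = [1] * len(coco_captions) + [0] * len(web_captions)
    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # bag of 1-2-grams over captions
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)
    return clf


def filter_pairs(pairs, clf, threshold=0.5):
    """Keep only (image, caption) pairs whose caption is predicted to be COCO-like."""
    probs = clf.predict_proba([cap for _, cap in pairs])[:, 1]
    return [pair for pair, p in zip(pairs, probs) if p >= threshold]
```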
Most similar to our work are the studies by Desai & Johnson (2021) and Devillers et al. (2021), which study the role of language supervision on transfer performance in the context of VirTex (a CLIP precursor) and CLIP, respectively. Notably, the two works draw opposite conclusions as to the utility of language compared to purely image-based approaches. This difference stems from the fact that neither of the works attempts to directly control for algorithmic, architectural, and data-related confounders. Our work performs a substantially more controlled study on the effect of language supervision, allowing us to make more direct claims than these works. 6 DISCUSSION Our work takes a step towards resolving the debate as to whether multi-modality, and language in particular, can improve visual representation learning. A comparison of CLIP with a matched image-only SimCLR model reveals that neither form of supervision (using images alone or coupled with language) is strictly better than the other. Indeed, there are practical regimes where CLIP’s performance cannot be matched using SimCLR with any amount of image data and others where language supervision is harmful. This is a direct consequence of CLIP’s sensitivity to its pre-training data, especially its scale and the descriptiveness and variability of its captions. Through our analysis, we also discovered algorithmic improvements (CLIPS) and dataset modifications (filtering and augmenting captions) to better take advantage of language supervision. Limitations. Our exploration allows us to quantify the utility of language supervision (over using images alone) in a specific setting: transfer learning via probing on certain object recognition tasks (Kornblith et al., 2019). We view expanding the scope of our analysis as a direction for future work. Further, despite the significant steps we took to control the differences between CLIP and SimCLR, there are still some inconsistencies that have not been accounted for (discussed in Section 2). Nevertheless, the differences between our and previous results (e.g., Desai & Johnson, 2021; Devillers et al., 2021) suggest that we successfully pinned down some crucial confounders (architecture, augmentations, stochasticity, datasets, hyperparameters). ETHICS STATEMENT Below, we discuss certain ethical concerns pertaining to our work: • Although we rely on existing open-source vision/multi-modal datasets for our analysis, prior work has raised concerns about some of these (or other similarly-sourced ones) being biased (Stock & Cisse, 2017; Yang et al., 2020; Birhane et al., 2021; Paullada et al., 2021) and violating privacy (Prabhu & Birhane, 2020; Yang et al., 2022). • Our focus is on understanding the extent to which CLIP’s representations are influenced by what the captions they are trained on describe. However, we sidestep whether or not this is always desirable. After all, recent studies (Birhane et al., 2021) show that vision-linguistic datasets have various biases and stereotypes, which we might not want our models to learn. • In Section 4, we use large language models (in particular, GPT-J) to augment dataset captions via in-context learning. These models, however, are known to have their own limitations that might percolate into the generated captions. REPRODUCIBILITY STATEMENT Datasets: All the pre-training/transfer learning datasets we use are open-source.
In the supplementary material, we include certain variants of the COCO/CC/YFCC datasets we created as CSV files, namely synthetic COCO captions, filtered CC/YFCC samples, and GPT-J-paraphrased captions. Code and hyperparameters: We discuss implementation details, including hyperparameter settings, in Appendix A. We also include the code for training models in the supplementary material. ACKNOWLEDGEMENTS We are grateful to Niladri Chatterji, Elisa Kreiss, Nimit Sohoni and Dimitris Tsipras for helpful discussions. SS is supported by Open Philanthropy, YD by a Knight-Hennessy Scholarship, and RT by the NSF GRFP under Grant No. DGE 1656518. We also thank Stanford HAI for a Google Cloud credits grant.
1. What is the main contribution of the paper regarding CLIP-like image representation learning? 2. What are the strengths and weaknesses of the proposed approach in comparison to prior works? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Can you identify any potential confounders that the authors did not consider when comparing image-text contrastive and image-only contrastive losses? 5. How do the findings of this paper inspire future research in visual representation learning with language supervision?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper performs a thorough investigation of CLIP-like image representation learning. First, they compare the CLIP-like image-language contrastive loss and the SimCLR-like image-only contrastive loss in a setting where many confounders are controlled. They conclude that adding language supervision can enable the model to learn more transferable representations than an image-only self-supervision method. Then, they investigate the effect of data properties on CLIP by training CLIP on three different datasets. They find that language supervision can hurt the model transfer performance in low-data regimes but can be helpful in large-scale settings. Also, "descriptive" captions and low variability of captions can improve the model transfer performance. Finally, based on their findings, they propose to filter non-descriptive captions by training a classifier to detect if a caption is COCO-like and to mitigate caption variability by using GPT-J and COCO captions to generate new captions for existing image-caption datasets. They demonstrate the effectiveness of their proposed approaches on CLIP. Strengths And Weaknesses Strengths: The paper performs a relatively systematic study of how the image-text contrastive loss compares with an image-only contrastive loss and what properties of the training data can affect the performance of the image-text contrastive loss. By drawing insights from their experiments, they can further demonstrate improvements. When comparing image-text contrastive and image-only contrastive losses, they do a good job of identifying potential confounders and carefully controlling them as much as they can, making their conclusions more convincing compared with several previous papers. Whether and to what extent language supervision can improve visual representation learning is an interesting and important topic. The conclusions of this paper can be helpful and inspire researchers in the future. Weaknesses: While image-text and image-only contrastive losses have some similarities, they are not mutually exclusive but can in fact be combined together [1]. Therefore, it would be better to see what the fine-grained differences between the two paradigms are and whether combining them can capture the best of both, instead of merely showing that one is better than the other. They find that the "descriptiveness" of caption data can affect the model performance and define the term "descriptiveness" as the extent to which captions refer to what is contained in an image, which is somewhat vague; it seems that "descriptiveness" is essentially an antonym of the commonly used term "noise". They use a retrieval system to quantify the "descriptiveness" of a caption and find that COCO>CC>YFCC, which suggests that they are just measuring the quality of the datasets. Because CC and YFCC are scraped from the web and the image-caption pairs are noisy and not well-aligned, it is unsurprising and well known that noisy data can lead to poor performance. A more precise definition and a more suitable quantification method are required so that the difference between their conclusion and the well-known fact that "noisy data can be harmful" is clearer. They choose to study the effect of caption variability on COCO, where the human-written captions are not very diverse, as noted in the paper. The Visual Genome dataset [2] could be used, because it contains many more captions per image and each caption focuses on a specific part of its image, which could make their conclusions more convincing.
[1] Mu, Norman, Alexander Kirillov, David Wagner, and Saining Xie. "Slip: Self-supervision meets language-image pre-training." ECCV 2022. [2] Krishna, R., Zhu, Y., Groth, O., Johnson, J., Hata, K., Kravitz, J., Chen, S., Kalantidis, Y., Li, L.J., Shamma, D.A. and Bernstein, M.S., 2017. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV. Clarity, Quality, Novelty And Reproducibility The paper is well-written and easy to follow. It is unclear which specific BLIP checkpoints they use for measuring the descriptiveness of captions and image captioning. The paper mainly presents some novel empirical findings and two practical ways of improving CLIP. The code has been submitted and the hyper-parameters have been well-documented.
ICLR
Title Is a Caption Worth a Thousand Images? A Study on Representation Learning Abstract The development of CLIP (Radford et al., 2021) has sparked a debate on whether adding language supervision can yield vision models with more transferable representations than traditional image-only methods. Our work studies this question through a carefully controlled comparison of two approaches, in terms of their ability to learn representations that generalize to downstream classification tasks. We find that when the pre-training data meets certain criteria—it is sufficiently large and contains descriptive captions with low variability—image-only methods do not match CLIP’s performance even when they are trained with more image data. However, contrary to what one might expect, there are practical settings in which these criteria are not met, wherein added supervision through captions is actually detrimental. Motivated by our findings, we devise simple data and algorithmic interventions to improve the transfer performance of CLIP-style models. 1 INTRODUCTION Image-based contrastive learning approaches have shown promise in building models that generalize beyond the data distributions they are trained on (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Caron et al., 2021). By leveraging large (unlabelled) data sources via self-supervised training, these models learn representations that transfer to diverse image classification tasks—more so than their supervised counterparts (Ericsson et al., 2021). Recently, Radford et al. (2021) showed that a different approach—contrastive learning with language supervision—can yield models (CLIP) with remarkable transfer capabilities. This development has garnered significant interest in the vision and natural language processing communities alike, leading to a debate on the utility of multi-modality in visual representation learning (Zhai et al., 2022; Devillers et al., 2021; Fang et al., 2022). Our work focuses on a specific question within this debate: Does added language supervision lead to more transferable visual representations than using images alone? It might seem like the answer to this question is obvious. After all, CLIP utilized caption information unavailable to traditional image-based approaches and showed substantial gains over them (Radford et al., 2021). However, CLIP is drastically different from these approaches in many ways, from training data to fine-grained implementation choices, which makes it difficult to isolate the contribution of language supervision (see Section 5). Further, recent studies on CLIP’s zero-shot classification and robustness properties cast doubt on whether adding language supervision is always beneficial (Fang et al., 2022). Resolving the aforementioned debate thus requires a carefully controlled comparison of the two approaches in which the only difference is the form of supervision. Our contributions. We devise a methodology to assess the utility of language supervision in CLIP1 from a visual representation learning standpoint. 1We use CLIP to refer to models trained with Radford et al. (2021)’s approach, not their pre-trained model. To do so, we recognize that CLIP pretraining and popular image-based methods share the same underlying primitive of contrastive learning. Specifically, Radford et al. (2021)’s approach is strikingly similar to SimCLR (Chen et al., 2020a). The only irreducible difference between them is whether supervision is provided to the
model via image augmentations or image-caption matching (see Figure 1)—which is precisely the quantity we want to study. Thus, we can disentangle the effect of language supervision on visual representations by comparing matched versions of SimCLR and CLIP (trained from scratch). Our focus, in particular, is on how well the learned representations transfer to varied image classification tasks. We find that the picture is nuanced and depends on three properties of the pre-training data: 1. When the scale of the dataset is sufficiently large, CLIP’s visual representations indeed transfer better than their matched image-only SimCLR counterparts. In fact, this gap is not bridged by training SimCLR with more (image) data, suggesting that a caption can be worth more than any number of images. However, in the low-data regime, language supervision actually hurts model performance both in- and out-of-distribution. 2. The descriptiveness (Kreiss et al., 2021) of captions—i.e., the extent to which they refer to what is contained in an image—directly determines how well CLIP models transfer. In fact, we find that a single descriptive image-caption pair (e.g., from COCO (Lin et al., 2014)) is worth five less descriptive, uncurated captions (e.g., from YFCC (Thomee et al., 2016)). 3. The variability of captions (e.g., stylistic or lexical) within a dataset can impair CLIP’s performance. We find that a modification to standard CLIP training—performing text augmentations by sampling from a pool of captions for each image—can alleviate this drop. These properties have intertwined effects on CLIP’s performance: e.g., dataset scale can, to some extent, compensate for less-descriptive and/or varied captions. Guided by our findings, we devise simple dataset interventions that can lead to more-transferrable CLIP models: (i) filtering out low-quality captions with a text-based classifier, and (ii) applying data augmentation to captions by paraphrasing them using pre-trained language models. 2 AN APPLES-TO-APPLES COMPARISON Prior works have studied image-only and image-language pre-training methods in isolation (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Chen & He, 2021; Caron et al., 2021; Radford et al., 2021) and side-by-side (Desai & Johnson, 2021; Devillers et al., 2021; Fang et al., 2022). Yet, they provide incomplete (and often contradictory) answers to our motivating question of the value of language supervision relative to using images alone (Section 5). Crucially, this is due to various confounders such as: (i) bespoke algorithmic optimizations within the two methods, and (ii) differing pre-training datasets. In this section, we outline a series of steps that we take to mitigate these confounders and compare the two methods on equal footing. 2.1 FINDING COMMON GROUND Our approach for studying the value of language supervision is guided by the following insight: CLIP pre-training is strikingly similar to the popular image-only SimCLR method (Chen et al., 2020a)2. Both methods rely on the same algorithmic primitive of contrastive learning, which we illustrate in Figure 1. Specifically, the (CLIP/SimCLR) model is trained with a cross-entropy-based objective, which for a given pair (x, x+) of positive examples with associated negatives N is: $\ell = -\log \frac{\exp(\mathrm{sim}(z, z^{+})/\tau)}{\sum_{n \in N \cup \{z^{+}\}} \exp(\mathrm{sim}(z, z_{n})/\tau)}$, where $z = g(\phi(x))$ and $z^{+/n} = g'(\phi'(x^{+/n}))$, (1) sim is cosine similarity, ϕ/ϕ′ are encoders, and g/g′ are projection heads.
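Equation (1) translates directly into a few lines of code. The sketch below is a batch-level PyTorch version for the CLIP direction (each image's caption is its positive and the other captions in the batch are its negatives), with the symmetric image-to-text and text-to-image terms averaged as in Radford et al. (2021); it is illustrative rather than the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def clip_style_contrastive_loss(z_img, z_txt, temperature=0.07):
    """Batch form of Eq. (1): each image's positive is its own caption, and the
    other captions in the batch act as negatives (and vice versa for captions).

    z_img, z_txt: [B, D] outputs of the two projection heads.
    """
    z_img = F.normalize(z_img, dim=-1)
    z_txt = F.normalize(z_txt, dim=-1)
    logits = z_img @ z_txt.t() / temperature            # [B, B] pairwise cosine similarities
    targets = torch.arange(z_img.size(0), device=z_img.device)
    loss_i = F.cross_entropy(logits, targets)           # image -> caption direction
    loss_t = F.cross_entropy(logits.t(), targets)       # caption -> image direction
    return 0.5 * (loss_i + loss_t)
```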
Positive examples x+ are obtained through a transformation of the image x, i.e., x+ ∼ T (x)—such as image augmentations (e.g., rotations or crops) in SimCLR and captions in CLIP. Observe that this difference in T (·) between CLIP and SimCLR corresponds exactly to whether the model is trained with language, which is the quantity we want to study. Thus, to isolate the role of added language supervision, we can compare the downstream performance of matched CLIP and SimCLR models. To this end, we must take some steps to make their implementations consistent: • Datasets: Typically, CLIP and SimCLR are trained on different datasets, as the former requires image-caption pairs, while the latter can leverage any image data. To control for the effect of the data distribution, we pre-train both models from scratch on the same data. • Architecture: We use the ResNet-50 (He et al., 2016) architecture as the image encoder for both methods, and a Transformer (Vaswani et al., 2017) as the text encoder in CLIP. We also extensively tune hyperparameters for both methods (Appendix A.3). • Augmentations: Both methods apply data augmentations to the image x itself at each training step. However, the augmentations used in SimCLR (resize, crop, flip, jitter, blur, grayscale) are far more sophisticated than those in CLIP (resize and crop). We remove this confounder by using SimCLR augmentations unless otherwise specified. • Transformation stochasticity: The two methods differ in how they obtain x+, not just due to the choice of T (x) but also the generative process itself. In SimCLR, x+ is a new random draw from T (x) in every batch, while for CLIP, it is a single fixed caption. Perfectly matching them requires training CLIP by sampling a fresh caption x+ for each image at each iteration. We will refer to this stochastic version of CLIP as CLIPS. Mismatches. Despite our efforts to match CLIP with SimCLR, some inconsistencies remain—partly due to their differing input modalities. In particular, CLIP (and CLIPS): (i) Processes T (x) using a text transformer rather than SimCLR’s ResNet-50. (ii) Does not share weights between the encoders processing x and T (x) because they correspond to different modalities, unlike SimCLR. (iii) Uses a linear projection head g/g′ instead of SimCLR’s MLP, which we allow as Radford et al. (2021) showed that this choice does not affect CLIP’s performance. (iv) Only uses other examples in the batch from the same modality as negatives. Thus, CLIP has half the number of negatives compared to SimCLR, which also uses transformed versions of other examples in the batch (i.e., both x̂ and x̂+) as negatives. We now assess how the representations learned by our matched CLIP and SimCLR models compare. In particular, we measure how well their representations transfer to the downstream tasks from Kornblith et al. (2019). Akin to Radford et al. (2021), we focus on the fixed-feature setting, where we freeze the weights of a given model and then train a linear probe using task data (see Appendix A). 2.2 A CASE STUDY We begin by comparing CLIP and SimCLR models trained on the MS-COCO dataset (Lin et al., 2014) (henceforth referred to as COCO), which contains ∼120K images with multi-object labels. Each image has five human-provided captions, collected post-hoc by Chen et al. (2015) using Mechanical Turk. Annotators were given detailed instructions on how to caption an image, such as describing only the important parts of the image and not using proper names.
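The transformation-stochasticity point above amounts to a one-line change in the data pipeline: draw a caption from the image's caption pool at every access instead of always using a fixed one. Below is a minimal PyTorch sketch; the record format, class name, and tokenizer hook are assumptions made for illustration, not the paper's released code.

```python
import random
from typing import Callable, Dict, List

from PIL import Image
from torch.utils.data import Dataset


class CaptionedImages(Dataset):
    """Yields (augmented image, tokenized caption) pairs; with stochastic=True the
    caption is re-drawn from the image's caption pool at every access (CLIP_S)."""

    def __init__(self, records: List[Dict], image_transform: Callable,
                 tokenize: Callable, stochastic: bool = True):
        # records: [{"image_path": ..., "captions": [c1, ..., c5]}, ...]
        self.records = records
        self.image_transform = image_transform   # e.g. SimCLR-style augmentations
        self.tokenize = tokenize                 # tokenizer feeding the text encoder
        self.stochastic = stochastic             # False -> plain CLIP (fixed first caption)

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        rec = self.records[idx]
        image = self.image_transform(Image.open(rec["image_path"]).convert("RGB"))
        caption = random.choice(rec["captions"]) if self.stochastic else rec["captions"][0]
        return image, self.tokenize(caption)
```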
1. What is the main contribution of the paper regarding vision-language representation pretraining? 2. What are the strengths and weaknesses of the paper's approach, particularly in its experimental design and evaluation metrics? 3. Do you have any concerns or doubts about the proposed data augmentation methods, especially regarding potential knowledge leakage from the BLIP model? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper In this paper, the authors investigate several popular datasets to explore the influence of the style and scale of a dataset on the vision-language representation pretraining task. The authors find that a sufficiently large dataset with descriptive captions is helpful for vision-language representation learning, while representation learning can be negatively influenced by the variability of captions. Based on these findings, the authors propose data augmentation methods based on BLIP to boost the pretraining performance of methods trained with different variants of the datasets. Strengths And Weaknesses Strengths: Raises an interesting question: how many and what kind of captions are helpful for vision-language pretraining. Gives some ideas about what kind of vision-language dataset may help a model learn better representations. Sufficient experiments on different kinds of settings and scales of image-caption training sets to show the influence of different aspects of captions on representation learning. Weaknesses: Some phenomena could be further explained for the reader to gain a better understanding. For example, why can the “more descriptive” BLIP-generated captions not beat the performance of the COCO dataset? The paper evaluates the quality of representation learning only with classification-based transfer performance, while some other tasks can also be used to evaluate the capability of representations, e.g., image-text matching and image textual grounding. Experiments on multiple different kinds of tasks could better support the claims. Some experiments could be more complete, e.g., the CLIP_s entries for 2 and 10 captions in the left table of Fig. 5. The proposed data augmentation methods are based on the BLIP model, which is itself trained with supervision on large-scale datasets. I am concerned that learned knowledge may be leaked during the data augmentation process, especially in the image captioning process mentioned in Section 3.3. This can be viewed as a kind of knowledge distillation of the BLIP model, which makes the results less supportive of the claim. Clarity, Quality, Novelty And Reproducibility The problem raised in this paper—exploring the influence of the composition and scale of vision-language datasets on representation learning—is somewhat novel and interesting. The paper is clear and easy to read. The experiments appear to be easy to reproduce.
ICLR
Title Is a Caption Worth a Thousand Images? A Study on Representation Learning Abstract The development of CLIP (Radford et al., 2021) has sparked a debate on whether adding language supervision can yield vision models with more transferable representations than traditional image-only methods. Our work studies this question through a carefully controlled comparison of two approaches, in terms of their ability to learn representations that generalize to downstream classification tasks. We find that when the pre-training data meets certain criteria—it is sufficiently large and contains descriptive captions with low variability—-image-only methods do not match CLIP’s performance even when they are trained with more image data. However, contrary to what one might expect, there are practical settings in which these criteria are not met, wherein added supervision through captions is actually detrimental. Motivated by our findings, we devise simple data and algorithmic interventions to improve the transfer performance of CLIP-style models. 1 INTRODUCTION Image-based contrastive learning approaches have shown promise in building models that generalize beyond the data distributions they are trained on (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Caron et al., 2021). By leveraging large (unlabelled) data sources via self-supervised training, these models learn representations that transfer to diverse image classification tasks—more so than their supervised counterparts (Ericsson et al., 2021). Recently, Radford et al. (2021) showed that a different approach—contrastive learning with language supervision—can yield models (CLIP) with remarkable transfer capabilities. This development has garnered significant interest in the vision and natural language processing communities alike, leading to a debate on the utility of multi-modality in visual representation learning (Zhai et al., 2022; Devillers et al., 2021; Fang et al., 2022). Our work focuses on a specific question within this debate: Does added language supervision lead to more transferable visual representations than using images alone? It might seem like the answer to this question is obvious. After all, CLIP utilized caption information unavailable to traditional image-based approaches and showed substantial gains over them (Radford et al., 2021). However, CLIP is drastically different from these approaches in many ways, from training data to fine-grained implementation choices, which makes it difficult to isolate the contribution of language supervision (see Section 5). Further, recent studies on CLIP’s zero-shot classification and robustness properties cast doubt on whether adding language supervision is always beneficial (Fang et al., 2022). Resolving the aforementioned debate thus requires a carefully controlled comparison of the two approaches in which the only difference is the form of supervision. Our contributions. We devise a methodology to assess the utility of language supervision in CLIP1 from a visual representation learning standpoint. To do so, we recognize that CLIP pretraining and popular image-based methods share the same underlying primitive of contrastive learning. Specifically, Radford et al. (2021)’s approach is strikingly similar to SimCLR (Chen et al., 2020a). The only irreducible difference between them is whether supervision is provided to the 1We use CLIP to refer to models trained with Radford et al. (2021)’s approach, not their pre-trained model. 
model via image augmentations or image-caption matching (see Figure 1)—which is precisely the quantity we want to study. Thus, we can disentangle the effect of language supervision on visual representations by comparing matched versions of SimCLR and CLIP (trained from scratch). Our focus, in particular, is on how well the learned representations transfer to varied image classification tasks. We find that the picture is nuanced and depends on three properties of the pre-training data: 1. When the scale of the dataset is sufficiently large, CLIP’s visual representations indeed transfer better than their matched image-only SimCLR counterparts. In fact, this gap is not bridged by training SimCLR with more (image) data, suggesting that a caption can be worth more than any number of images. However, in the low-data regime, language supervision actually hurts model performance both in and out-of-distribution. 2. The descriptiveness (Kreiss et al., 2021) of captions—i.e., the extent to which they refer to what is contained in an image—directly determines how well CLIP models transfer. In fact, we find that a single descriptive image-caption pair (e.g., from COCO (Lin et al., 2014)) is worth five less descriptive, uncurated captions (e.g., from YFCC (Thomee et al., 2016)). 3. The variability of captions (e.g., stylistic or lexical) within a dataset can impair CLIP’s performance. We find that a modification to standard CLIP training—performing text augmentations by sampling from a pool of captions for each image—can alleviate this drop. These properties have intertwined effects on CLIP’s performance: e.g., dataset scale can, to some extent, compensate for less-descriptive and/or varied captions. Guided by our findings, we devise simple dataset interventions that can lead to more-transferable CLIP models: (i) filtering out low-quality captions with a text-based classifier, and (ii) applying data augmentation to captions by paraphrasing them using pre-trained language models. 2 AN APPLES-TO-APPLES COMPARISON Prior works have studied image-only and image-language pre-training methods in isolation (Wu et al., 2018; He et al., 2020; Chen et al., 2020a; Caron et al., 2020; Chen et al., 2020b; Chen & He, 2021; Caron et al., 2021; Radford et al., 2021) and side-by-side (Desai & Johnson, 2021; Devillers et al., 2021; Fang et al., 2022). Yet, they provide incomplete (and often contradictory) answers to our motivating question of the value of language supervision relative to using images alone (Section 5). Crucially, this is due to various confounders such as: (i) bespoke algorithmic optimizations within the two methods, and (ii) differing pre-training datasets. In this section, we outline a series of steps that we take to mitigate these confounders and compare the two methods on equal footing. 2.1 FINDING COMMON GROUND Our approach for studying the value of language supervision is guided by the following insight: CLIP pre-training is strikingly similar to the popular image-only SimCLR method (Chen et al., 2020a); other image-based methods (He et al., 2020; Chen et al., 2020b; Chen & He, 2021; Caron et al., 2021) have optimizations that are not present in CLIP. Both methods rely on the same algorithmic primitive of contrastive learning, which we illustrate in Figure 1. Specifically, the (CLIP/SimCLR) model is trained with a cross-entropy-based objective, which for a given pair $(x, x^+)$ of positive examples with associated negatives $N$ is: $\ell = -\log \frac{\exp(\mathrm{sim}(z, z^+)/\tau)}{\sum_{n \in N \cup \{z^+\}} \exp(\mathrm{sim}(z, z_n)/\tau)}$, where $z = g(\phi(x))$ and $z^{+/n} = g'(\phi'(x^{+/n}))$, (1) where $\mathrm{sim}$ is cosine similarity, $\phi/\phi'$ are encoders, and $g/g'$ are projection heads.
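To make Eq. (1) concrete, the following is a minimal sketch (assuming PyTorch; the variable names are illustrative, not the authors' code) of this shared contrastive objective with in-batch negatives. The only conceptual difference between the two methods is what the paired positive is: a second augmented view of the image for SimCLR, or the caption embedding for CLIP. As written, each example is contrasted only against the other set's batch items, which matches the CLIP variant described below; SimCLR additionally uses the other same-view batch items as negatives.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z: torch.Tensor, z_pos: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """z, z_pos: (batch, dim) projected embeddings of paired positives."""
    z = F.normalize(z, dim=-1)
    z_pos = F.normalize(z_pos, dim=-1)
    logits = z @ z_pos.t() / temperature                 # pairwise sim(z_i, z_j) / tau
    targets = torch.arange(z.size(0), device=z.device)   # the true positive sits on the diagonal
    return F.cross_entropy(logits, targets)              # -log softmax over in-batch negatives

# CLIP-style usage (encoders/heads are placeholders): symmetrize over both directions, e.g.
# loss = 0.5 * (contrastive_loss(g(phi(images)), g_txt(phi_txt(captions)))
#               + contrastive_loss(g_txt(phi_txt(captions)), g(phi(images))))
# SimCLR-style usage: z and z_pos are two augmented views passed through the same encoder and head.
```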
Positive examples x+ are obtained through a transformation of the image x, i.e., x+ ∼ T (x)—such as image augmentations (e.g., rotations or crops) in SimCLR and captions in CLIP. Observe that this difference in T (·) between CLIP and SimCLR corresponds exactly to whether the model is trained with language, which is the quantity we want to study. Thus, to isolate the role of added language supervision, we can compare the downstream performance of matched CLIP and SimCLR models. To this end, we must take some steps to make their implementatations consistent: • Datasets: Typically, CLIP and SimCLR are trained on different datasets, as the former requires image-caption pairs, while the latter can leverage any image data. To control for the effect of the data distribution, we pre-train both models from scratch on the same data. • Architecture: We use the ResNet-50 (He et al., 2016) architecture as the image encoder for both methods, and a Transformer (Vaswani et al., 2017) as the text encoder in CLIP. We also extensively tune hyperparameters for both methods (Appendix A.3). • Augmentations: Both methods apply data augmentations to the image x itself at each training step. However, the augmentations used in SimCLR (resize, crop, flip, jitter, blur, grayscale) are far more sophisticated than those in CLIP (resize and crop). We remove this confounder by using SimCLR augmentations unless otherwise specified. • Transformation stochasticity: The two methods differ in how they obtain x+, not just due to the choice of T (x) but also the generative process itself. In SimCLR , x+ is a new random draw from T (x) in every batch, while for CLIP, it is a single fixed caption. Perfectly matching them requires training CLIP by sampling a fresh caption x+ for each image at each iteration. We will refer to this stochastic version of CLIP as CLIPS. Mismatches. Despite our efforts to match CLIP with SimCLR, some inconsistencies remain– partly due to their differing input modalities. In particular, CLIP (and CLIPS): (i) Processes T (x) using a text transformer rather than SimCLR’s ResNet-50. (ii) Does not share weights between the encoders processing x and T (x) because they corre- spond to different modalities, unlike SimCLR. (iii) Uses a linear projection head g/g′ instead of SimCLR’s MLP, which we allow as Radford et al. (2021) showed that this choice does not affect CLIP’s performance. (iv) Only uses other examples in the batch from the same modality as negatives. Thus CLIP has half the number of negatives compared to SimCLR, which also uses transformed versions of other examples in the batch (i.e. both x̂ and x̂+) as negatives. We now assess how the representations learned by our matched CLIP and SimCLR models compare. In particular, we measure how well their representations transfer to the downstream tasks from Kornblith et al. (2019). Akin to (Radford et al., 2021), we focus on the fixed-feature setting, where we freeze the weights of a given model and then train a linear probe using task data (see Appendix A). 2.2 A CASE STUDY We begin by comparing CLIP and SimCLR models trained on the MS-COCO dataset (Lin et al., 2014) (henceforth referred to as COCO), which contains ∼120K images with multi-object labels. Each image has five human-provided captions, collected post-hoc by Chen et al. (2015) using Mechanical Turk. Annotators were given detailed instructions on how to caption an image such as to describe only the important parts of the image and not to use proper names. 
We use COCO as our starting point for two reasons. First, we can assess the utility of language supervision in the ideal setting where the captions are of fairly high quality due to the careful curation process. Second, we can approximate CLIPS by sampling from the available set of five captions per image (we henceforth overload notation and use CLIPS to denote both the idealized stochastic version of CLIP, which samples from infinitely many captions per image, and our approximation of it with a finite set of captions). Captions (often) help on COCO. In Table 1, we compare various COCO pre-trained models (supervised, SimCLR, CLIP/CLIPS) in terms of the accuracy of a linear probe on: (i) COCO classification (in distribution), and (ii) transfer tasks. Note that to contrast image-only and image-language supervision, the “right” comparison is between SimCLR and CLIPS: they are matched (to the best of our abilities) in terms of dataset, architecture, augmentations and stochasticity. We find that language supervision is indeed beneficial in this setting: CLIPS transfers better than its matched SimCLR counterpart (Table 1). 3 THE IMPACT OF PRE-TRAINING DATA Our analysis of COCO shows that language supervision can be beneficial over using images alone. That being said, the datasets that CLIP is typically trained on differ, both in scale and quality, from COCO. For instance, COCO captions were collected post-hoc under controlled settings, which is markedly different from the automatic scraping procedure used to gather data at scale. Thus, we shift our focus to two frequently-used (Ilharco et al., 2021) CLIP training datasets: ConceptualCaptions (Sharma et al., 2018) (CC) contains ∼3.3M images harvested from the web, with their ALT-text attributes as captions. The data was filtered for text quality—e.g., well-formed captions that mention at least one object found via the Google Cloud Vision API. Furthermore, all proper nouns in the captions were hypernymized (e.g., “Justin Timberlake” becomes “pop artist”). Yahoo Flickr Creative Commons (Thomee et al., 2016) (YFCC): This dataset has ∼99.2M images from Flickr, along with their posted titles as captions with no post-processing. Do captions still help? We start by comparing the transfer performance of CLIP and SimCLR on 100K subsets of COCO/CC/YFCC in Figure 2(left). We observe that SimCLR’s transfer capabilities do not vary much across pre-training datasets, while CLIP’s performance is highly sensitive to them. With 100K samples from CC/YFCC, using CLIP is worse than image-only pre-training via SimCLR—unlike what we see for COCO. The sensitivity of CLIP to pre-training data. Inspecting dataset samples (Figure 3) yields a possible explanation for this sensitivity. The three datasets differ not just in scale and image diversity, but also in the extent to which captions: (i) describe visually salient aspects of the image, and (ii) vary across images (e.g., in style and wording). For instance, COCO captions are homogeneous and descriptive, while YFCC ones vary and are often complementary to the image. We now study the effect these dataset properties—scale, descriptiveness, and variability—have on CLIP’s performance. 3.1 SCALE MATTERS A major appeal of contrastive learning methods is that they can leverage the vast amounts of unlabeled data available on the Internet. Thus, it is natural to ask how different forms of contrastive supervision benefit from added pre-training data.
We may expect image-only methods to perform worse for smaller datasets as they are less likely to encounter (augmented) images that are similar. We might further expect image-language models to perform more favorably in this setting since they receive richer supervision. To test whether this is the case, we compare CLIP and SimCLR models trained on datasets of varying sizes: 10-100K samples for COCO, and 100K-2M for CC/YFCC. Our results in Figure 2(left) deviate from our earlier expectations. First, beyond a certain point, SimCLR’s transfer performance improves only marginally with additional data. While surprising, similar effects have been noted previously (Tian et al., 2021; Cole et al., 2022), especially when the data is uncurated (e.g., YFCC) (Tian et al., 2021). Second, in the low-data regime (<50K/200K/500K for COCO/CC/YFCC), training with language actually hurts the models’ transfer performance. In fact, (data) scale seems to be essential to benefit from language supervision. With sufficient data, CLIP outperforms SimCLR on all three datasets. This gap remains even if we train SimCLR with extra data, indicating that captions can be worth more than any number of images. 3.2 THE IMPORTANCE OF DESCRIPTIVE CAPTIONS Prior work in linguistics and accessibility has drawn a distinction between image “descriptions” and “captions” (Berger & Dibb, 2003; Chandler, 2007; Hodosh et al., 2013; Bernardi et al., 2016; van Miltenburg, 2020; Kreiss et al., 2021; Dognin et al., 2022; Hutchinson et al., 2022). In particular, Bernardi et al. (2016) define descriptions as texts that “verbalize what can be seen in the image, i.e., they refer to the objects, actions, and attributes depicted, mention the scene type, etc.”. In contrast, Panofsky (1939) suggests that a typical caption “provides personal, cultural, or historical context for the image.” This line of work suggests that COCO captions are more descriptive due to the decontextualization of the image and the strict instructions provided to the annotators during the caption generation process (Kreiss et al., 2021). In contrast, Flickr captions (e.g., in CC/YFCC) tend to contain information that is complementary to the image (Alikhani et al., 2020), since people tend not to restate what can already be observed in the photographs they post (Hodosh et al., 2013). Now, to perform well on downstream classification tasks, we ideally want model representations that encode salient image objects. Recall that in contrastively-trained models, the learned representations are determined by the transformation T(x) (captions for CLIP). This suggests a hypothesis: pretraining CLIP with descriptive captions will yield more transferable (vision) representations. To test this, we need to quantify the descriptiveness of a caption. Since doing so precisely is infeasible, we approximate descriptiveness using a pre-trained caption-scoring model. Specifically, we leverage the BLIP model (Li et al., 2022), which has shown state-of-the-art performance on image-based text retrieval. We then measure the average score assigned by BLIP to dataset captions matching their corresponding images—see Figure 2(right). As expected based on our earlier subjective assessment as well as prior work (Hodosh et al., 2013; Kreiss et al., 2021), we indeed find that the caption descriptiveness of COCO > CC > YFCC (see Appendix B.1 for a discussion of relevant vs. noisy captions).
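The dataset-level descriptiveness measure just described is essentially an average image-text matching score. The sketch below illustrates only that bookkeeping; `score_match` is a placeholder for whatever BLIP image-text matching interface is used, and it, like the variable names, is an assumption rather than the paper's actual scoring code.

```python
from typing import Callable, Iterable, Tuple

def dataset_descriptiveness(
    pairs: Iterable[Tuple[str, str]],            # (image_path, caption) pairs from one dataset
    score_match: Callable[[str, str], float],    # placeholder for a BLIP-style image-text matching scorer
) -> float:
    """Mean matching score of each caption against its own image; higher = more descriptive captions."""
    scores = [score_match(image_path, caption) for image_path, caption in pairs]
    return sum(scores) / max(len(scores), 1)

# Hypothetical usage: compare datasets by their average caption-image match score.
# coco_score = dataset_descriptiveness(coco_pairs, score_match=blip_itm_score)
# yfcc_score = dataset_descriptiveness(yfcc_pairs, score_match=blip_itm_score)
```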
Furthermore, we see that the descriptiveness of captions in the pre-training data directly correlates with CLIP’s transfer performance. In fact, a CLIP model trained on 100K descriptive image-caption pairs from COCO attains performance comparable to one trained on 2x and 5x more samples from CC and YFCC, respectively. To further corroborate our hypothesis, we train CLIP on CC and YFCC with “more descriptive” captions by re-captioning the images using BLIP (Li et al., 2022). Indeed, we find that CLIP trained on 100K CC/YFCC samples with BLIP captions no longer performs worse than its COCO counterpart (see Figure 2(right)). This indicates that CLIP’s sensitivity to the pre-training corpus is not just an artifact of differing image distributions, but due to the presence (or absence) of descriptive captions. 3.3 THE EFFECT OF INTRA-DATASET VARIATIONS IN CAPTIONS Image captions (Figure 1) seem to vary in how they describe an object (e.g., “duffel van” or “car”) and the parts of the image they focus on (e.g., discussing the “street” or “brick”). We now study how these lexical and focus variations in captions affect CLIP’s ability to learn meaningful representations. A simple setting. As a starting point, we investigate this effect on the COCO dataset using synthetic captions—constructed using the available multi-object labels—whereby we can precisely control the intra-dataset caption variations. In an attempt to simulate the variations we observe in Figure 1, we design the captions to (not) be: (i) consistent: use a fixed term or random synonyms to describe an object across the dataset (lexical variations); and (ii) complete: mention all or a random subset of image objects (focus variations). (See Appendix A.6 for details and Appendix Figure 8 for examples.) Surprisingly, we find that a CLIP model trained with complete and consistent synthetic COCO captions outperforms a model trained on human-written captions (cf. row 1 in Figure 4(left) to row 3 in Table 1). However, dropping these two conditions causes the transfer performance of the model to drop significantly (cf. rows 1, 2, and 4 in Figure 4(left)). These findings suggest that variability in dataset captions can have an adverse effect on the resulting CLIP models. The effect of stochasticity. We now revisit our stochastic CLIP variant, CLIPS, in this simple setting. Intuitively, we might expect that sampling from a set of diverse captions per image—which cover possible lexical and stylistic variations—during training might alleviate the adverse effects of caption variability. Indeed, we find that for synthetic COCO captions, CLIPS is not as affected by caption inconsistency and/or incompleteness. The ∼2% improvement of CLIPS over CLIP here mirrors the 3.6% gain seen for human-provided captions (cf. Table 1). These findings suggest that one of the reasons why stochasticity significantly boosts CLIP’s performance is its role in (caption) variance reduction. We also find that CLIPS transfers 2% better when trained on human-provided captions as opposed to synthetic ones (unlike CLIP). This indicates that human-written captions do contain useful information that is not present in object labels alone. However, extracting this signal is not straightforward, and may require incorporating multiple captions into CLIP training. Datasets in practice. We now attempt to characterize caption variability in real-world datasets.
Inspired by prior work in natural language processing and linguistics (Appendix A.8), for a set of dataset captions, we measure: (i) the total number of unique n-grams (N=1-3) (Li et al., 2015; Fung et al., 2020) and (ii) measure of textual lexical diversity (MTLD) (McCarthy & Jarvis, 2010). Along both these axes of variability, we see that COCO < CC < YFCC (Figure 4(right)). Thus, aside from the lower descriptiveness of YFCC (and to a lesser extent CC) captions, their variability could be the reason why the resulting CLIP models have worse transfer performance (Figure 2(left)). This also explains why scale is essential to benefit from language supervision on CC and YFCC. After all, CLIP would need to be trained on more captions to even encounter the same words twice. How many captions are enough? We saw above that “text data augmentations” via CLIPS could reduce the adverse impacts of caption variability. We now analyze how this effect scales with the number of available captions per image on the CC and YFCC datasets. Here, we use the BLIP captioning model to generate multiple captions per image via nucleus sampling (Holtzman et al., 2020). This procedure is intended to serve as a proxy for the manual caption annotation or automated scraping procedures that might be used for data collection in practice. We observe in Figure 5(left), that CLIPS improves as the number of available captions per image increases (plateauing around 10). However, scaling up the overall number of image-caption pairs appears to be far more effective than incorporating more captions per image (at least those obtained via BLIP) from the perspective of improving transfer performance (see Figure 5(right)). Note that the exact trade-offs and their costs are context dependent and vary based on the exact procedure used for caption collection. 4 MAKING EXISTING CAPTIONS WORK So far, we identified three properties of the pre-training data that influence CLIP’s transfer performance: (i) scale, (ii) caption descriptiveness, and (ii) caption variability. Our analysis shows that one way to improve CLIP’s performance, especially on uncurated data sources, is to simply pre-train with more data. Alternatively, for a fixed data scale, we may be able to obtain better CLIP models if we improve what captions describe and how they describe an image. We now focus on the latter and put forth simple dataset interventions to improve transfer capabilities of CLIP-style models. Data pre-processing: Given the importance of caption descriptiveness, we might consider preprocessing scraped data to select for samples with this property. The CC data collection procedure (Sharma et al., 2018) partially demonstrates the effectiveness of this approach, as pre-training CLIP on CC samples leads to better transfer performance than a comparable number of “raw” YFCC ones. However, due to its reliance on the Google Vision API, this procedure can be quite expensive, with costs scaling with the size of the scraped data. Recent works have taken a different approach, using pre-trained image-language models (like CLIP) to filter data (Schuhmann et al., 2021). However, since we are interested in building such models in the first place, we avoid taking this route. Instead, we focus on understanding how far we can get by simply discarding low quality captions, agnostic to the images. We take inspiration from the filtering pipelines used to build large language models (Brown et al., 2020). 
Here, raw Internet data is cleaned by selecting samples that are “similar” to known high-quality datasets (e.g., Wikipedia). Taking a similar approach, we train a linear classifier on a bag-of-n-grams sentence embeddings (Joulin et al., 2017) to distinguish validation set CC/YFCC captions from COCO ones. This classifier is then used to filter CC/YFCC, only retaining samples that are predicted as being COCO-like. This simple procedure does end up selecting for captions that are more focused on objects and their descriptions, as opposed to describing contextual properties such as dates or urls—see Appendix A.9. For a given pre-training data budget, we see moderate gains (∼ 2%) from using this heuristic to filter datasets—see Table 2 (left). Mitigating caption variability: As we saw in Section 3.3, models trained with CLIPS are less impacted by caption variability. However, typical image-captioning datasets (such as CC and YFCC) only have one caption per image. We thus devise a methodology to augment these captions by leveraging recent open-source large language models (Wang & Komatsuzaki, 2021). Concretely, we provide GPT-J with 4 (caption, paraphrase) pairs as in-context (Brown et al., 2020) examples. We then prompt it to paraphrase a given target caption. By sampling from GPT-J, we can obtain multiple (in our case, five) paraphrases for every such caption (examples in Appendix Figure 10). In Table 2 (right), we see that feeding these captions into CLIPS results in a considerable performance boost over CLIP (trained with a single caption/image). For instance, for COCO, CLIPS trained on our generated captions bridges more than half of the performance gap between vanilla CLIP and CLIPS trained with five human-provided captions. 5 RELATED WORK Representation learning. Building models with general representations that transfer to downstream tasks has been a long-standing goal in ML (Donahue et al., 2014; Razavian et al., 2014; Chatfield et al., 2014; Agrawal et al., 2014; Yosinski et al., 2014). Our work is in line with prior studies aimed at characterizing the effect of design choices made during training (Azizpour et al., 2015; Huh et al., 2016; Chu et al., 2016; Kornblith et al., 2019; Zhai et al., 2019; Locatello et al., 2020), e.g. model architecture, datasets and loss functions, on learned representations. The utility of language in vision. There is a long line of work on leveraging language to improve vision models (Quattoni et al., 2007; Srivastava & Salakhutdinov, 2012; Frome et al., 2013; Baltrušaitis et al., 2018; Guo et al., 2019). Recent studies have sought to investigate how integral language is to the performance of such multi-modal models. Fang et al. (2022) study a different property of CLIP—zero-shot robustness, rather than transfer learning—and show that it is comparable to that of a supervised classifier trained on the same YFCC images. Therefore, they conclude that data distribution is more important than language supervision. In concurrent work, (Nguyen et al., 2022) study the sensitivity of CLIP’s zero-shot robustness to the pre-training dataset. However, unlike our work, they do not: (i) contrast CLIP against image-only methods trained on the same corpora, and (ii) attempt to explain what properties of the data are responsible for CLIP’s sensitivity. Ruan et al. (2022) argue theoretically that the robustness of linear probes on CLIP’s representations stems from pretraining with a large and diverse set of images and domain-agnostic augmentations T (x). 
Most similar to our work are the studies by (Desai & Johnson, 2021) and Devillers et al. (2021), which study the role of language supervision on transfer performance in the context of VirTex (a CLIP precursor) and CLIP respectively. Notably, the two works draw opposite conclusions as to the utility of language compared to purely image-based approaches. This difference stems from the fact that neither of the works attempt to directly control for algorithmic, architectural, and data-related confounders. Our work performs a substantially more controlled study on the effect of language supervision, allowing us to make more direct claims than these works. 6 DISCUSSION Our work takes a step towards resolving the debate as to whether multi-modality, and language in particular, can improve visual representation learning. A comparison of CLIP with a matched imageonly SimCLR model reveals that neither form of supervision (using images alone or coupled with language) is strictly better than the other. Indeed, there are practical regimes where CLIP’s performance cannot be matched using SimCLR with any amount of image data and others where language supervision is harmful. This is a direct consequence of CLIP’s sensitivity to its pre-training data, especially its scale, descriptiveness, and variability of the captions. Through our analysis, we also discovered algorithmic improvements (CLIPS) and dataset modifications (filtering and augmenting captions) to better take advantage of language supervision. Limitations. Our exploration allows us to quantify the utility of language supervision (over using images alone) in a specific setting: transfer learning via probing on certain object recognition tasks (Kornblith et al., 2019). We view expanding the scope of our analysis as a direction for future work. Further, despite the significant steps we took to control the differences between CLIP and SimCLR, there are still some inconsistencies that have not been accounted for (discussed in Section 2). Nevertheless, the differences between our and previous results (e.g, Desai & Johnson, 2021; Devillers et al., 2021) suggest that we successfully pinned down some crucial confounders (architecture, augmentations, stochasticity, datasets, hyperparameters). ETHICS STATEMENT Below, we discuss certain ethical concerns pertaining to our work: • Although we rely on existing open source vision/multi-modal datasets for our analysis, prior work has raised concerns about some of these (or other similarly-sourced ones) being biased (Stock & Cisse, 2017; Yang et al., 2020; Birhane et al., 2021; Paullada et al., 2021) and violating privacy (Prabhu & Birhane, 2020; Yang et al., 2022). • Our focus is on understanding the extent to which CLIP’s representations are influenced by what the captions they are trained on describe. However, we sidestep whether or not this is always desirable. After all, recent studies (Birhane et al., 2021) show that vision-linguistic datasets have various biases and stereotypes, which we might not want our models to learn. • In Section 4, we use large language models (in particular, GPT-J) to augment dataset captions via in-context learning. These models however are known to have their own limitations that might percolate into the generated captions. REPRODUCIBILITY STATEMENT Datasets: All the pre-training/transfer learning datasets we use are open-source. 
In the supplementary material, we include certain variants of the COCO/CC/YFCC datasets we created as CSV files: namely synthetic COCO captions, filtered CC/YFCC samples, and GPT-J paraphrased captions. Code and hyperparameters: We discuss implementation details including hyperparameter settings in Appendix A. We also include the code for training models in the supplementary material. ACKNOWLEDGEMENTS We are grateful to Niladri Chatterji, Elisa Kreiss, Nimit Sohoni and Dimitris Tsipras for helpful discussions. SS is supported by Open Philanthropy, YD by a Knights-Hennessy Scholarship, and RT by the NSF GRFP under Grant No. DGE 1656518. We also thank Stanford HAI for a Google Cloud credits grant.
1. What is the main contribution of the paper regarding vision language models and their performance comparison with SimCLR? 2. What are the strengths of the proposed approaches to improve VL training for more transferable representations? 3. What are the weaknesses and queries raised by the reviewer regarding the paper's claims and experiments? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper This paper studies the transfer learning performance of vision-language models (CLIP) in comparison with SimCLR. The primary motivation here is that SimCLR is essentially an image-image analogue of CLIP, allowing for fair comparisons in performance while controlling for a variety of confounders. The paper makes three primary claims about vision-language (CLIP) training. In terms of scaling, CLIP learns more transferable representations as the data size increases. However, the required captions need to be descriptive. The authors leverage BLIP to score caption quality and show that datasets with higher caption quality lead to CLIP outperforming SimCLR as well as models trained with more low-quality captions. Thirdly, the paper also provides experimental evidence that while variability in captions adversely affects vision-language training, this can be somewhat compensated for by adding more data through text augmentations. Strengths And Weaknesses Strengths The paper is well motivated. It is of great importance to study the exact effects of vision-language training versus pure image-based training. The idea of using SimCLR as a controlled baseline is inspired. The experiments have clear hypotheses and support the conclusions presented. Specifically, the use of multiple captions per image as text augmentations shows improvements over standard CLIP training, highlighting the importance of both scale and the variability of captions. The presented approaches to improve vision-language training for more transferable representations are simple and intuitive. Weaknesses/Queries The overall transfer accuracy reported for SimCLR is much lower than that reported in Chen et al.'s original work. The difference primarily seems to be an effect of the training dataset chosen (ImageNet in the original vs. COCO here). Could the authors comment on this? One suggestion could be to use ImageNet and ImageNet-Captions (Fang et al.) to better understand the difference. Another improvement would be to study the effect of scaling up the architectures with more parameters (ResNet-2x, ResNet-4x) and perhaps with Transformer-based architectures to understand if the conclusions hold across a variety of architectures/parameters. However, I understand that these studies are time-consuming, and the paper in its current form is a valuable study in itself. Clarity, Quality, Novelty And Reproducibility The paper is clearly written and complete with references and experimental details. In terms of quality, as stated above, the experiments are well structured and address several open questions (with previously contradicting results) in a clear, concise manner. The ideas of using SimCLR and BLIP-based captioning to study the various effects of text supervision are novel and provide insight into the behavior of such models. The results are reproducible with the provided code.
ICLR
Title Improved memory in recurrent neural networks with sequential non-normal dynamics Abstract Training recurrent neural networks (RNNs) is a hard problem due to degeneracies in the optimization landscape, a problem also known as vanishing/exploding gradients. Short of designing new RNN architectures, previous methods for dealing with this problem usually boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve (Euclidean) norms and effectively deal with vanishing/exploding gradients. However, this ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective. A more sensible objective is maximizing the signalto-noise ratio (SNR) of the propagated signal instead. Previous work has shown that in the linear case, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure. Motivated by this finding, here we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs outperform their orthogonal counterparts in a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward motifs in trained RNNs initialized with orthogonal recurrent connectivity matrices. 1 INTRODUCTION Modeling long-term dependencies with recurrent neural networks (RNNs) is a hard problem due to degeneracies inherent in the optimization landscapes of these models, a problem also known as the vanishing/exploding gradients problem (Hochreiter, 1991; Bengio et al., 1994). One approach to addressing this problem has been designing new RNN architectures that are less prone to such difficulties, hence are better able to capture long-term dependencies in sequential data (Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Chang et al., 2017; Bai et al., 2018). An alternative approach is to stick with the basic vanilla RNN architecture instead, but to constrain its dynamics in some way so as to eliminate or reduce the degeneracies that otherwise afflict the optimization landscape. Previous proposals belonging to this second category generally boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period (Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016). The basic idea behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve distances and norms, which enables them to deal effectively with the vanishing/exploding gradients problem. However, this idea ignores the crucial effects of non-linearity and noise. Orthogonal transformations no longer preserve distances and norms in the presence of a non-linearity, suggesting that alternative transformations might be better suited to non-linear networks (this point was noted by Pennington et al. (2017) and Chen et al. (2018) before, where isometric initializations that take the non-linearity into account were proposed). 
Similarly, in the presence of noise, norm preservation itself ceases to be the ideal objective. One must instead maximize the signal-to-noise ratio (SNR) of the propagated signal. In neural networks, noise comes in both through the stochasticity of the stochastic gradient descent (SGD) algorithm and sometimes also through direct noise injection for regularization purposes, as in dropout (Srivastava et al., 2014). Previous work has shown that even in a simple linear setting, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure (Ganguli et al., 2008). Motivated by these observations, in this paper, we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Recall that a normal matrix is a matrix with an orthonormal set of eigenvectors, whereas a non-normal matrix does not have an orthonormal set of eigenvectors. This property allows non-normal systems to display interesting transient behaviors that are not available in normal systems. This kind of transient behavior, specifically a particular kind of transient amplification of the signal in certain non-normal systems, underlies their superior memory properties (Ganguli et al., 2008), as will be discussed further below. Our empirical results show that non-normal vanilla RNNs significantly outperform their orthogonal counterparts in a diverse range of benchmarks (code available at: https://github.com/eminorhan/nonnormal-init). 2 BACKGROUND 2.1 MEMORY IN LINEAR RECURRENT NETWORKS WITH NOISE Ganguli et al. (2008) studied memory properties of linear recurrent networks injected with a scalar temporal signal $s_t$ and noise $z_t$: $h_t = W h_{t-1} + v s_t + z_t$ (1) The noise is assumed to be i.i.d. with $z_t \sim \mathcal{N}(0, I)$. Ganguli et al. (2008) then analyzed the Fisher memory matrix (FMM) of this system, defined as: $J_{kl}(s_{\le t}) = \left\langle -\frac{\partial^2}{\partial s_{t-k}\,\partial s_{t-l}} \log p(h_t \mid s_{\le t}) \right\rangle_{p(h_t \mid s_{\le t})}$ (2) For linear networks with Gaussian noise, it is easy to show that $J_{kl}(s_{\le t})$ is, in fact, independent of the past signal history $s_{\le t}$. Ganguli et al. (2008) specifically analyzed the diagonal of the FMM, $J(k) \equiv J_{kk}$, which can be written explicitly as: $J(k) = v^\top W^{k\top} C^{-1} W^{k} v$ (3) where $C = \sum_{k=0}^{\infty} W^{k} W^{k\top}$ is the noise covariance matrix, and the norm of $W^{k} v$ can be roughly thought of as representing the signal strength. The total Fisher memory is the sum of $J(k)$ over all past time steps $k$: $J_{\mathrm{tot}} = \sum_{k=0}^{\infty} J(k)$ (4) Intuitively, $J(k)$ measures the information contained in the current state of the system, $h_t$, about a signal that entered the system $k$ time steps ago, $s_{t-k}$. $J_{\mathrm{tot}}$ is then a measure of the total information contained in the current state of the system about the entire past signal history, $s_{\le t}$. The main result in Ganguli et al. (2008) shows that $J_{\mathrm{tot}} = 1$ for all normal matrices $W$ (including all orthogonal matrices), whereas in general $J_{\mathrm{tot}} \le N$, where $N$ is the network size. Remarkably, the memory upper bound can be achieved by certain highly non-normal systems and several examples are explicitly given in Ganguli et al. (2008). Two of those examples are illustrated in Figure 1a (right): a uni-directional “chain” network and a chain network with feedback. In the chain network, the recurrent connectivity is given by $W_{ij} = \alpha \delta_{j,i-1}$ and in the chain with feedback network, it is given by $W_{ij} = \alpha \delta_{j,i-1} + \beta \delta_{j,i+1}$, where $\alpha$ and $\beta$ are the feedforward and feedback connection weights, respectively (here $\delta$ denotes the Kronecker delta function).
In addition, in order to achieve optimal memory, the signal must be fed at the source neuron in these networks, i.e. $v = [1, 0, 0, \ldots, 0]^\top$. Figure 1b compares the Fisher memory curves, $J(k)$, of these non-normal networks with the Fisher memory curves of two example normal networks, namely recurrent networks with identity or random orthogonal connectivity matrices. The two non-normal networks have extensive memory capacity, i.e. $J_{\mathrm{tot}} \sim O(N)$, whereas for the normal examples, $J_{\mathrm{tot}} = 1$. The crucial property that enables extensive memory in non-normal networks is transient amplification: after the signal enters the network, it is amplified supralinearly for a time of length $O(N)$ before it eventually dies out (Figure 1c). This kind of transient amplification is not possible in normal networks. 2.2 A TOY NON-LINEAR EXAMPLE: NON-LINEARITY AND NOISE INDUCE SIMILAR EFFECTS The preceding analysis by Ganguli et al. (2008) is exact in linear networks. Analysis becomes more difficult in the presence of a non-linearity. However, we now demonstrate that the non-normal networks shown in Figure 1a have advantages that extend beyond the linear case. The advantages in the non-linear case are due to reduced interference in these non-normal networks between signals entering the network at different time points in the past. To demonstrate this with a simple example, we will ignore the effect of noise for now and consider the effect of non-linearity on the linear decodability of past signals from the current network activity. We thus consider deterministic non-linear networks of the form (see Appendix A for additional details): $h_t = f(W h_{t-1} + v s_t)$ (5) and ask how well we can linearly decode a signal that entered the network $k$ time steps ago, $s_{t-k}$, from the current activity of the network, $h_t$. Figure 2c compares the decoding performance in a non-linear orthogonal network with the decoding performance in the non-linear chain network. Just as in the linear case with noise (Figure 2b), the chain network outperforms the orthogonal network. To understand intuitively why this is the case, consider a chain network with $W_{ij} = \delta_{j,i-1}$ and $v = [1, 0, 0, \ldots, 0]^\top$. In this model, the responses of the $N$ neurons after $N$ time steps (at $t = N$) are given by $f(s_N), f(f(s_{N-1})), \ldots, f(f(\cdots f(s_1)\cdots))$, respectively, starting from the source neuron. Although the non-linearity $f(\cdot)$ makes perfect linear decoding of the past signal $s_{t-k}$ impossible, one may still imagine being able to decode the past signal with reasonable accuracy as long as $f(\cdot)$ is not “too non-linear”. A similar intuition holds for the chain network with feedback as well, as long as the feedforward connection weight, $\alpha$, is sufficiently stronger than the feedback connection strength, $\beta$. A condition like this must already be satisfied if the network is to maintain its optimal memory properties and also be dynamically stable at the same time (Ganguli et al., 2008). In normal networks, however, linear decoding is further degraded by interference from signals entering the network at different time points, in addition to the degradation caused by the nonlinearity. This is easiest to see in the identity network (a similar argument holds for the random orthogonal example too), where the responses of the neurons after $N$ time steps are identically given by $f(f(\cdots f(f(s_1) + s_2)\cdots) + s_N)$, if one assumes $v = [1, 1, 1, \ldots, 1]^\top$.
Linear decoding is harder in this case, because a signal st−k is both distorted by multiple steps of non-linearity and also mixed with signals entering at other time points. 3 RESULTS 3.1 EXPERIMENTS Because assuming an a priori fixed non-normal structure for an RNN runs the risk of being too restrictive, in this paper, we instead explore the promise of non-normal networks as initializers for RNNs. Throughout the paper, we will be primarily comparing the four RNN architectures schematically depicted in Figure 1a as initializers: two of them normal networks (identity and random orthogonal) and the other two non-normal networks (chain and chain with feedback), the last two being motivated by their optimal memory properties in the linear case, as reviewed above. 3.1.1 COPY, ADDITION, PERMUTED SEQUENTIAL MNIST Copy, addition, and permuted sequential MNIST tasks were commonly used as benchmarks in previous RNN studies (Arjovsky et al., 2016; Bai et al., 2018; Chang et al., 2017; Hochreiter & Schmidhuber, 1997; Le et al., 2015; Wisdom et al., 2016). We now briefly describe each of these tasks. Copy task: The input is a sequence of integers of length T . The first 10 integers in the sequence define the target subsequence that is to be copied and consist of integers between 1 and 8 (inclusive). The next T − 21 integers are set to 0. The integer after that is set to 9, which acts as the cue indicating that the model should start copying the target subsequence. The final 10 integers are set to 0. The output sequence that the model is trained to reproduce consists of T − 10 0s followed by the target subsequence from the input that is to be copied. To make sure that the task requires a sufficiently long memory capacity, we used a large sequence length, T = 500, comparable to the largest sequence length considered in Arjovsky et al. (2016) for the same task. Addition task: The input consists of two sequences of length T . The first one is a sequence of random numbers drawn uniformly from the interval [0, 1]. The second sequence is an indicator sequence with 1s at exactly two positions and 0s everywhere else. The positions of the two 1s indicate the positions of the numbers to be added in the first sequence. The target output is the sum of the two corresponding numbers. The position of the first 1 is drawn uniformly from the first half of the sequence and the position of the second 1 is drawn uniformly from the second half of the sequence. Again, to ensure that the task requires a sufficiently long memory capacity, we chose T = 750, which is the same as the largest sequence length considered in Arjovsky et al. (2016) for the same task. Permuted sequential MNIST (psMNIST): This is a sequential version of the standard MNIST benchmark where the pixels are fed to the model one pixel at a time. To make the task hard enough, we used the permuted version of the sequential MNIST task where a fixed random permutation is applied to the pixels to eliminate any spatial structure before they are fed into the model. We used vanilla RNNs with N = 25 recurrent units in the psMNIST task and N = 100 recurrent units in the copy and addition tasks. We used the elu nonlinearity for the copy and the psMNIST tasks (Clevert et al., 2016), and the relu nonlinearity for the addition problem (because relu proved to be more natural for remembering positive numbers). Batch size was 16 in all tasks. As mentioned above, the scaled identity and the scaled random orthogonal networks constituted the normal initializers. 
In the scaled identity initializer, the recurrent connectivity matrix was initialized as $W = \lambda I$ and the input matrix $V$ was initialized as $V_{ij} \sim \mathcal{N}(0, 0.9/\sqrt{N})$. In the random orthogonal initializer, the recurrent connectivity matrix was initialized as $W = \lambda Q$, where $Q$ is a random dense orthogonal matrix, and the input matrix $V$ was initialized in the same way as in the identity initializer. The feedforward chain and the chain with feedback networks constituted our non-normal initializers. In the chain initializer, the recurrent connectivity matrix was initialized as $W_{ij} = \alpha \delta_{j,i-1}$ and the input matrix $V$ was initialized as $V = 0.9\, I_{N \times d}$, where $I_{N \times d}$ denotes the $N \times d$-dimensional identity matrix. Note that this choice of $V$ is a natural generalization of the source-injecting input vector that was found to be optimal in the linear case with scalar signals to multi-dimensional inputs (as long as $N \gg d$). In the chain with feedback initializer, the recurrent connectivity matrix was initialized as $W_{ij} = 0.99\,\delta_{j,i-1} + \beta\,\delta_{j,i+1}$ and the input matrix $V$ was initialized in the same way as in the chain initializer. We used the rmsprop optimizer for all models, which we found to be the best method for this set of tasks. The learning rate of the optimizer was a hyperparameter which we tuned separately for each model and each task. The following learning rates were considered in the hyper-parameter search: $8 \times 10^{-4}$, $5 \times 10^{-4}$, $3 \times 10^{-4}$, $10^{-4}$, $8 \times 10^{-5}$, $5 \times 10^{-5}$, $3 \times 10^{-5}$, $10^{-5}$, $8 \times 10^{-6}$, $5 \times 10^{-6}$, $3 \times 10^{-6}$. We ran each model on each task 6 times using the integers from 1 to 6 as random seeds. In addition, the following model-specific hyperparameters were searched over for each task: Chain: feedforward connection weight, $\alpha \in \{0.99, 1.00, 1.01, 1.02, 1.03, 1.04, 1.05\}$. Chain with feedback: feedback connection weight, $\beta \in \{0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07\}$. Scaled identity: scale, $\lambda \in \{0.01, 0.96, 0.99, 1.0, 1.01, 1.02, 1.03, 1.04, 1.05\}$. Random orthogonal: scale, $\lambda \in \{0.01, 0.96, 0.99, 1.0, 1.01, 1.02, 1.03, 1.04, 1.05\}$. This yields a total of $7 \times 11 \times 6 = 462$ different runs for each experiment in the non-normal models and a total of $9 \times 11 \times 6 = 594$ different runs in the normal models. Note that we ran more extensive hyper-parameter searches for the normal models than for the non-normal models in this set of tasks. Figure 3a-c shows the validation losses for each model with the best hyper-parameter settings. The non-normal initializers generally outperform the normal initializers. Figure 3d-f shows for each model the number of “successful” runs that converged to a validation loss below a criterion level (which we set to be 50% of the loss for a baseline random model). The chain model outperformed all other models by this measure (despite having a smaller total number of runs than the normal models). In the copy task, for example, none of the runs for the normal models was able to achieve the criterion level, whereas 46 out of 462 runs for the chain model and 11 out of 462 runs for the feedback chain model reached the criterion loss (see Appendices B & C for further results and discussion).
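For concreteness, the four recurrent initializers and the chain input matrix described above can be written down in a few lines. The following is a sketch (assuming NumPy) whose shapes and scales follow the description in this section rather than the authors' released code.

```python
import numpy as np

def identity_init(n: int, scale: float = 1.0) -> np.ndarray:
    # W = lambda * I
    return scale * np.eye(n)

def orthogonal_init(n: int, scale: float = 1.0, seed: int = 0) -> np.ndarray:
    # W = lambda * Q, with Q a random dense orthogonal matrix (QR of a Gaussian matrix)
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return scale * q

def chain_init(n: int, alpha: float = 1.0) -> np.ndarray:
    # W_ij = alpha * delta_{j, i-1}: a uni-directional feedforward chain (sub-diagonal)
    return alpha * np.eye(n, k=-1)

def chain_with_feedback_init(n: int, alpha: float = 0.99, beta: float = 0.05) -> np.ndarray:
    # W_ij = alpha * delta_{j, i-1} + beta * delta_{j, i+1}
    return alpha * np.eye(n, k=-1) + beta * np.eye(n, k=1)

def chain_input_init(n: int, d: int, scale: float = 0.9) -> np.ndarray:
    # V = 0.9 * I_{N x d}: each input dimension is injected at its own source neuron
    return scale * np.eye(n, d)
```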
For the language modeling experiments in this subsection, we used the code base provided by Salesforce Research (Merity et al., 2018a;b): https://github.com/salesforce/awd-lstm-lm. We refer the reader to Merity et al. (2018a;b) for a more detailed description of the benchmarks. For the experiments in this subsection, we generally preserved the model setup used in Merity et al. (2018a;b), except for the following differences: 1) We replaced the gated RNN architectures (LSTMs and QRNNs) used in Merity et al. (2018a;b) with vanilla RNNs; 2) We observed that vanilla RNNs require weaker regularization than gated RNN architectures. Therefore, in the word-level PTB task, we set all dropout rates to 0.1. In the character-level PTB task, all dropout rates were set to 0.1, except dropoute, which was set to 0. In the enwik8 benchmark, all dropout rates were set to 0; 3) We trained the word-level PTB models for 60 epochs, the character-level PTB models for 500 epochs and the enwik8 models for 35 epochs. We compared the same four models described in the previous subsection. As in Merity et al. (2018a), we used the Adam optimizer and thus only optimized the α, β, λ hyper-parameters for the experiments in this subsection. For the hyper-parameter α in the chain model and the hyper-parameter λ in the scaled identity and random orthogonal models, we searched over 21 values uniformly spaced between 0.05 and 1.05 (inclusive); whereas for the chain with feedback model, we set the feedforward connection weight, α, to the optimal value it had in the chain model and searched over 21 β values uniformly spaced between 0.01 and 0.21 (inclusive). In addition, we repeated each experiment 3 times using different random seeds, yielding a total of 63 runs for each model and each benchmark. The results are shown in Figure 4 and in Table 1. Figure 4 shows the validation loss over the course of training in units of bits per character (bpc). Table 1 reports the test losses at the end of training. The non-normal models outperform the normal models on the word-level and character-level PTB benchmarks. The differences between the models are less clear on the enwik8 benchmark. However, in terms of the test loss, the non-normal feedback chain model outperforms the other models on all three benchmarks (Table 1). We note that the vanilla RNN models perform significantly worse than the gated RNN architectures considered in Merity et al. (2018a;b). We conjecture that this is because gated architectures are generally better at modeling contextual dependencies, hence they have inductive biases better suited to language modeling tasks. The primary benefit of non-normal dynamics, on the other hand, is enabling a longer memory capacity. Below, we will discuss whether non-normal dynamics can be used in gated RNN architectures to improve performance as well. 3.2 HIDDEN FEEDFORWARD STRUCTURES IN TRAINED RNNS We observed that training made vanilla RNNs initialized with orthogonal recurrent connectivity matrices non-normal. We quantified the non-normality of the trained recurrent connectivity matrices using a measure introduced by Henrici (1962): $d(W) \equiv \sqrt{\|W\|_F^2 - \sum_i |\lambda_i|^2}$, where $\|\cdot\|_F$ denotes the Frobenius norm and $\lambda_i$ is the $i$-th eigenvalue of $W$. This measure equals 0 for all normal matrices and is positive for non-normal matrices. We found that $d(W)$ became positive for all successfully trained RNNs initialized with orthogonal recurrent connectivity matrices.
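A minimal sketch (assuming NumPy) of computing the Henrici index defined above; the clipping of small negative values is a numerical safeguard added here, not part of the original definition.

```python
import numpy as np

def henrici_index(W: np.ndarray) -> float:
    """d(W) = sqrt(||W||_F^2 - sum_i |lambda_i|^2); equals 0 iff W is normal."""
    eigvals = np.linalg.eigvals(W)
    gap = np.linalg.norm(W, "fro") ** 2 - np.sum(np.abs(eigvals) ** 2)
    return float(np.sqrt(max(gap, 0.0)))  # clip tiny negative values from floating-point error

# Example: a feedforward chain is strongly non-normal, the identity matrix is normal.
# henrici_index(np.eye(100, k=-1)) > 0.0; henrici_index(np.eye(100)) == 0.0
```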
Table 2 reports the aggregate statistics of $d(W)$ for orthogonally initialized RNNs trained on the toy benchmarks. Although increased non-normality in trained RNNs is an interesting observation, the Henrici index, by itself, does not tell us what structural features in trained RNNs contribute to this increased non-normality. Given the benefits of chain-like feedforward non-normal structures in RNNs for improved memory, we hypothesized that training might have installed hidden chain-like feedforward structures in trained RNNs and that these feedforward structures were responsible for their increased non-normality. To uncover these hidden feedforward structures, we performed an analysis suggested by Rajan et al. (2016). In this analysis, we first injected a unit pulse of input to the network at the beginning of the trial and let the network evolve for 100 time steps afterwards according to its recurrent dynamics with no direct input. We then ordered the recurrent units by the time of their peak activity (using a small amount of jitter to break potential ties between units) and plotted the mean recurrent connection weights, $W_{ij}$, as a function of the order difference between two units, $i - j$. Positive $i - j$ values correspond to connections from earlier-peaking units to later-peaking units, and vice versa for negative $i - j$ values. In trained RNNs, the mean recurrent weight profile as a function of $i - j$ had an asymmetric peak, with connections in the “forward” direction being, on average, stronger than those in the opposite direction. Figure 5 shows examples with orthogonally initialized RNNs trained on the addition and the permuted sequential MNIST tasks. Note that for a purely feedforward chain, the weight profile would have a single peak at $i - j = 1$ and would be zero elsewhere. Although the weight profiles for trained RNNs are not this extreme, the prominent asymmetric bump with a peak at a positive $i - j$ value indicates a hidden chain-like feedforward structure in these networks. 3.3 DO BENEFITS OF NON-NORMAL DYNAMICS EXTEND TO GATED RNN ARCHITECTURES? So far, we have only considered vanilla RNNs. An important question is whether the benefits of non-normal dynamics demonstrated above for vanilla RNNs also extend to gated RNN architectures like LSTMs or GRUs (Hochreiter & Schmidhuber, 1997; Cho et al., 2014). Gated RNN architectures have better inductive biases than vanilla RNNs in many practical tasks of interest such as language modeling (e.g. see Table 1 for a comparison of vanilla RNN architectures with an LSTM architecture of similar size in the language modeling benchmarks), thus it would be practically very useful if their performance could be improved through an inductive bias for non-normal dynamics. To address this question, we treated the input, forget, output, and update gates of the LSTM architecture as analogous to vanilla RNNs and initialized the recurrent and input matrices inside these gates in the same way as in the chain or the orthogonal initialization of vanilla RNNs above. We also compared these with a more standard initialization scheme where all the weights were drawn from a uniform distribution $\mathcal{U}(-\sqrt{k}, \sqrt{k})$, where $k$ is the reciprocal of the hidden layer size (labeled plain in Table 3). This is the default initializer for the LSTM weight matrices in PyTorch: https://pytorch.org/docs/stable/nn.html#lstm. We compared these initializers in the language modeling benchmarks.
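As an illustration of the gate-wise initialization just described, the sketch below overwrites the recurrent matrix of each LSTM gate with a chain matrix. It assumes PyTorch's layout, in which weight_hh stacks the four gate matrices along the first dimension; this layout assumption and the helper names are ours, not the authors' exact code.

```python
import torch
import torch.nn as nn

def chain_matrix(n: int, alpha: float = 1.0) -> torch.Tensor:
    # W_ij = alpha * delta_{j, i-1}: weight on the sub-diagonal only
    return alpha * torch.diag(torch.ones(n - 1), -1)

def init_lstm_recurrent_with_chain(lstm: nn.LSTM, alpha: float = 1.0) -> None:
    h = lstm.hidden_size
    with torch.no_grad():
        for name, param in lstm.named_parameters():
            if "weight_hh" in name:
                for g in range(4):  # one (h x h) block per gate
                    param[g * h:(g + 1) * h, :] = chain_matrix(h, alpha)

lstm = nn.LSTM(input_size=64, hidden_size=128)
init_lstm_recurrent_with_chain(lstm, alpha=1.0)
```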
The chain initializer did not perform better than the orthogonal initializer (Table 3), suggesting that non-normal dynamics in gated RNN architectures may not be as helpful as it is in vanilla RNNs. In hindsight, this is not too surprising, because our initial motivation for introducing non-normal dynamics heavily relied on the vanilla RNN architecture and gated RNNs can be dynamically very different from vanilla RNNs. When we looked at the trained LSTM weight matrices more closely, we found that, although still non-normal, the recurrent weight matrices inside the input, forget, and output gates (i.e. the sigmoid gates) did not have the same signatures of hidden chain-like feedforward structures observed in vanilla RNNs. Specifically, the weight profiles in the LSTM recurrent weight matrices inside these three gates did not display the asymmetric bump characteristic of a prominent chain-like feedforward structure, but were instead approximately monotonic functions of i − j (Figure 6a-c), suggesting a qualitatively different kind of dynamics where the individual units are more persistent over time. The recurrent weight matrix inside the update gate (the tanh gate), on the other hand, did display the signature of a hidden chain-like feedforward structure (Figure 6d). When we incorporated these two structures in different gates of the LSTMs, by using a chain initializer for the update gate and a monotonically increasing recurrent weight profile for the other gates (labeled mixed in Table 3), the resulting initializer outperformed the other initializers on character-level PTB and enwik8 tasks. 4 DISCUSSION Motivated by their optimal memory properties in a simplified linear setting (Ganguli et al., 2008), in this paper, we investigated the potential benefits of certain highly non-normal chain-like RNN architectures in capturing long-term dependencies in sequential tasks. Our results demonstrate an advantage for such non-normal architectures as initializers for vanilla RNNs, compared to the commonly used orthogonal initializers. We further found evidence for the induction of such chainlike feedforward structures in trained vanilla RNNs even when these RNNs were initialized with orthogonal recurrent connectivity matrices. The benefits of these chain-like non-normal initializers do not directly carry over to more complex, gated RNN architectures such as LSTMs and GRUs. In some important practical problems such as language modeling, the gains from using these kinds of gated architectures seem to far outweigh the gains obtained from the non-normal initializers in vanilla RNNs (see Table 1). However, we also uncovered important regularities in trained LSTM weight matrices, namely that the recurrent weight profiles of the input, forget, and output gates (the sigmoid gates) in trained LSTMs display a monotonically increasing pattern, whereas the recurrent matrix inside the update gate (the tanh gate) displays a chain-like feedforward structure similar to that observed in vanilla RNNs (Figure 6). We showed that these regularities can be exploited to improve the training and/or generalization performance of gated RNN architectures by introducing them as useful inductive biases. A concurrent work to ours also emphasized the importance of non-normal dynamics in RNNs (Kerg et al., 2019). The main difference between Kerg et al. 
(2019) and our work is that we explicitly introduce sequential motifs in RNNs at initialization as a useful inductive bias for improved long-term memory (motivated by the optimal memory properties of these motifs in simpler cases), whereas their approach does not constrain the shape of the non-normal part of the recurrent connectivity matrix, hence does not utilize sequential non-normal dynamics as an inductive bias. In some of their tasks, Kerg et al. (2019) also uncovered a feedforward, chain-like motif in trained vanilla RNNs similar to the one reported in this paper (Figure 5). There is a close connection between the identity initialization of RNNs (Le et al., 2015) and the widely used identity skip connections (or residual connections) in deep feedforward networks (He et al., 2016). Given the superior performance of chain-like non-normal initializers over the identity initialization demonstrated in the context of vanilla RNNs in this paper, it could be interesting to look for similar chain-like non-normal architectural motifs that could be used in deep feedforward networks in place of the identity skip connections. A DETAILS AND EXTENSIONS OF THE LINEAR DECODING EXPERIMENTS This appendix contains the details of the linear decoding experiments in section 2.2 and reports the results of additional linear decoding experiments. The experiments in section 2.2 compare the signal propagation properties of vanilla RNNs with either random orthogonal or chain connectivity matrices. In both cases, the overall scale of the recurrent connectivity matrices is set to 1.01. The input weight vector is v = [1, 0, 0, . . . , 0]> for the chain model and vi ∼ N (0, 1/ √ n) for the random orthogonal model (thus the overall scales of both the feedforward and the recurrent inputs are identical in the two models). The RNNs themselves are not trained in these experiments. At each time point, an i.i.d. random scalar signal st ∼ N (0, 1) is fed into the network as input (Equation 5). We simulate 250 trials for each model and ask how well we can linearly decode the signal at the first time step, s1, from the recurrent activities at time step 100, h100. We do this by linearly regressing s1 on h100 (using the 250 simulated samples) and report the R2 value for the linear regression in Figure 2. In simulations with noise (Figure 2b), an additional i.i.d. random noise term, zit ∼ N (0, σ), is added to each recurrent neuron i at each time step t. The standard deviation of the noise, σ, is set to 0.1 in the experiments shown in Figure 2b. To show that the results are not sensitive to the noise scale, we ran additional experiments with lower (σ = 0.01) and higher (σ = 1) levels of noise (Figure 7). In both cases, the chain network still outperforms the orthogonal network. Note that these “linear + noise” experiments satisfy the conditions of the analytical theory in Ganguli et al. (2008), so these results are as expected from the theory. As mentioned in the main text, the “non-linear + no noise” experiments reported in Figure 2c used the elu non-linearity. To show that the results are not sensitive to the choice of the non-linearity, we also ran additional experiments with tanh and relu non-linearities (Figure 8). As with the elu non-linearity, the chain network outperforms the orthogonal network with the tanh and relu non-linearities as well, suggesting that the results are not sensitive to the choice of the non-linearity. 
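For reference, the decoding experiment just described can be reproduced with a short NumPy sketch along the following lines. This is our own implementation; the elu definition, the least-squares details, and the random seeds are our choices, so the exact R^2 values will differ from those in Figure 2.

import numpy as np

def elu(x):
    return np.where(x > 0, x, np.expm1(x))

def simulate(W, v, T=100, trials=250, f=elu, noise_std=0.0, rng=None):
    # Run h_t = f(W h_{t-1} + v s_t + z_t) and return (s_1, h_T) across trials.
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    s1 = np.empty(trials)
    hT = np.empty((trials, n))
    for trial in range(trials):
        h = np.zeros(n)
        for t in range(T):
            s = rng.standard_normal()
            if t == 0:
                s1[trial] = s
            z = noise_std * rng.standard_normal(n)
            h = f(W @ h + v * s + z)
        hT[trial] = h
    return s1, hT

def decode_r2(s1, hT):
    # R^2 of linearly regressing the first input s_1 on the final state h_100.
    X = np.column_stack([hT, np.ones(len(s1))])
    coef, *_ = np.linalg.lstsq(X, s1, rcond=None)
    resid = s1 - X @ coef
    return 1.0 - resid.var() / s1.var()

n = 100
rng = np.random.default_rng(0)
W_chain = 1.01 * np.diag(np.ones(n - 1), k=-1)
v_chain = np.zeros(n)
v_chain[0] = 1.0
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
W_orth = 1.01 * Q
v_orth = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)

for name, W, v in [("chain", W_chain, v_chain), ("orthogonal", W_orth, v_orth)]:
    print(name, decode_r2(*simulate(W, v, rng=np.random.default_rng(1))))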
B THE EFFECT OF THE FEEDBACK STRENGTH PARAMETER (β) IN THE CHAIN WITH FEEDBACK MODEL In this appendix, we consider the effect of the feedback strength parameter, β, for the chain with feedback model in the context of the experiments reported in section 3.1.1. We focus on the psMNIST task specifically, because this is the only task where the feedback chain model converges to a low loss solution for a sufficiently large number of hyper-parameter configurations. For the addition and copy tasks, there are not enough successful hyper-parameter configurations to draw reliable inferences about the effect of β (see Figure 3d-f). Figure 9 shows the validation loss at the end of training as a function of β in the psMNIST task. In this figure, we considered all networks that achieved a validation loss lower than the random baseline model (i.e. < log(10) ≈ 2.3) at the end of training (an overwhelming majority of the networks satisfied this criterion). Figure 9 shows that the final validation loss is a monotonically increasing function of β in this task, suggesting that large feedback strengths are harmful for the model performance. C COMPARISON WITH PREVIOUS MODELS In this appendix, we compare our results with those obtained by previous models, focusing specifically on the experiments in section 3.1.1 (because the tasks in this section are commonly used as RNN benchmarks). uRNN: We first note that our copy and addition tasks use the largest sequence lengths considered in Arjovsky et al. (2016) for the same tasks (T = 500 for the copy task and T = 750 for the addition task). Hence our results are directly comparable to those reported in Arjovsky et al. (2016) (the random baselines shown by the dashed lines in Figure 3a-b are identical to those in Arjovsky et al. (2016) for the same conditions). The unitary evolution RNN (uRNN) model proposed in Arjovsky et al. (2016) comfortably learns the copy-500 task (with 128 recurrent units), quickly reaching a near-zero loss (see their Figure 1, bottom right); however, it struggles with the addition task, barely reaching the half-baseline criterion even with 512 recurrent units (see their Figure 2, bottom right). This difference in the behavior of the uRNN model in the copy and addition tasks is predicted by Henaff et al. (2016), where it is shown that random orthogonal and near-identity recurrent connectivity matrices have much better inductive biases in the copy and addition tasks, respectively. Because of its parametrization, uRNN behaves more similarly to a random orthogonal RNN than a near-identity RNN. In contrast, our non-normal RNNs, especially the chain model, comfortably clear the half-baseline criterion both in copy-500 and addition-750 tasks (with 100 recurrent units), quickly achieving very small loss values in both tasks with the optimal hyper-parameter configurations (Figure 3a-b). Note that this is despite the fact that our models use fewer recurrent units than the uRNN model in Arjovsky et al. (2016) (100 vs. 128 or 512 recurrent units). nnRNN: Kerg et al. (2019) report results for the copy (T = 200) and psMNIST tasks only. They have not reported training success for longer variants of the copy task (specifically for T = 500). Kerg et al. (2019) also have not reported successful training in the addition task, whereas our non-normal RNNs showed training success both in copy-500 and addition-750 tasks (Figure 3a-b). 
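As a small numerical companion to the beta sweep in Appendix B above (our own side calculation, not an analysis from the paper), one can check how the searched feedback strengths affect the linear dynamics of the chain-with-feedback initializer. For a tridiagonal matrix with subdiagonal alpha and superdiagonal beta, the eigenvalues have the closed form 2*sqrt(alpha*beta)*cos(k*pi/(N+1)), so all searched beta values keep the spectral radius well below 1; this is only a quick stability check, not an explanation of the Figure 9 trend.

import numpy as np

def feedback_chain(n, alpha=0.99, beta=0.0):
    # W_ij = alpha * delta_{j,i-1} + beta * delta_{j,i+1}, as in Figure 1a and Appendix B.
    return alpha * np.diag(np.ones(n - 1), -1) + beta * np.diag(np.ones(n - 1), 1)

n, alpha = 100, 0.99
for beta in [0.01, 0.03, 0.07, 0.21]:   # values taken from the searched grids
    W = feedback_chain(n, alpha, beta)
    # Closed form for tridiagonal Toeplitz matrices: eigenvalues 2*sqrt(alpha*beta)*cos(k*pi/(n+1)).
    radius = 2 * np.sqrt(alpha * beta) * np.cos(np.pi / (n + 1))
    print(f"beta = {beta:.2f}  spectral radius ~ {radius:.3f}")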
We conclude that our non-normal initializers for vanilla RNNs perform comparably to, or better than, the uRNN and nnRNN models in standard long-term memory benchmarks. One of the biggest strengths of our proposal compared to these previous models is its much greater simplicity. Both uRNN and nnRNN require a complete re-parametrization of the vanilla RNN model (nnRNN even requires a novel optimization method). Our method, on the other hand, proposes much simpler, easy-to-implement, plug-and-play type sequential initializers that keep the standard parametrization of RNNs intact. critical RNN: Chen et al. (2018) note that the conditions for dynamical isometry in vanilla RNNs are identical to those in fully-connected feed-forward networks studied in Pennington et al. (2017). Pennington et al. (2017), in turn, note that dynamical isometry is not achievable exactly in networks with relu activation, but it is achievable in networks with tanh activation, where it essentially boils down to initializing the weights to small values. Pennington et al. (2017) give a specific example of a dynamically isometric tanh network (with n = 400, σw = 1.05, and σb = 2.01× 10−5). We set up a similar tanh RNN model, but were not able to train it successfully in the copy or addition tasks. Again, as with the nnRNN results, this shows the challenging nature of these two tasks and suggests that dynamical isometry may not be enough for successful training in these tasks. A possible reason for this is that although critical initialization takes the non-linearity into account, it still does not take the noise into account (i.e. it is not guaranteed to maximize the SNR). LSTM, tanh RNN: Consistent with the results in Arjovsky et al. (2016), we were not able to successfully train LSTMs or vanilla RNNs with tanh non-linearity in the challenging copy-500 and addition-750 tasks. Therefore, these models were not included as baselines in section 3.1.1.
1. What is the main contribution of the paper regarding nonnormal matrix initialization in RNNs? 2. What are the strengths and weaknesses of the proposed approach compared to prior works? 3. How does the reviewer assess the clarity and organization of the paper's structure? 4. What are the questions raised by the reviewer regarding the experimental results and comparisons with other works? 5. How does the reviewer evaluate the significance and novelty of the proposed method in the context of existing research?
Review
Review Contributions: This paper proposes to explore non-normal matrix initialization in RNNs. The authors demonstrate on various tasks (copy/addition, permuted sequential MNIST, PTB, enwik8) that chain-like non-normal matrix initializations can outperform orthogonal or identity initialization in vanilla RNNs. However, non-normal RNNs underperform their gated counterparts such as LSTMs. The authors also show results where they use their initialization scheme in the update gate of an LSTM. Comments: The paper is well written and pleasant to read. The paper structure could be a bit improved. For instance, section 2 is named “Results” while 2.1, which takes up a significant part of the section, is about prior results from (Ganguli et al. 2008). It would be better to have it under an explicit prior work section. The description of the experiments reported in Figure 2 is a bit vague: what is the training/evaluation data? Do you train all the model parameters or only the linear layer? What is the type of noise used? It is unclear to me how robust the observation made in Figure 2 is. Do you see similar behavior with different noise scales and other non-linearities such as tanh? The experimental section provides convincing data showing that non-normal initialization schemes outperform orthogonal and identity initialization in vanilla RNNs. However, it would be nice to add some comparisons with prior works. It is unclear how the current method compares with the nnRNN of (Kerg et al. 2019) and the unitary RNNs. Why does the score reported for the 3-layer LSTM in Table 3 underperform the 3-layer LSTM baseline used in (Merity et al., 2018), reported in Table 1? In addition, did you try saturating non-linearities for the RNN experiments? Overall, I think the method is promising, but comparison with prior work is missing. I would encourage the authors to compare their approach with the unitary RNN and nnRNN to better demonstrate the significance of their work. Additional remarks: - SNR could be defined more precisely in the introduction. In particular, the introduction states that the stochasticity of SGD is a source of noise, which is true. But the model presented in section 2 seems to focus mostly on input noise?
ICLR
Title Improved memory in recurrent neural networks with sequential non-normal dynamics Abstract Training recurrent neural networks (RNNs) is a hard problem due to degeneracies in the optimization landscape, a problem also known as vanishing/exploding gradients. Short of designing new RNN architectures, previous methods for dealing with this problem usually boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve (Euclidean) norms and effectively deal with vanishing/exploding gradients. However, this ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective. A more sensible objective is maximizing the signalto-noise ratio (SNR) of the propagated signal instead. Previous work has shown that in the linear case, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure. Motivated by this finding, here we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs outperform their orthogonal counterparts in a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward motifs in trained RNNs initialized with orthogonal recurrent connectivity matrices. 1 INTRODUCTION Modeling long-term dependencies with recurrent neural networks (RNNs) is a hard problem due to degeneracies inherent in the optimization landscapes of these models, a problem also known as the vanishing/exploding gradients problem (Hochreiter, 1991; Bengio et al., 1994). One approach to addressing this problem has been designing new RNN architectures that are less prone to such difficulties, hence are better able to capture long-term dependencies in sequential data (Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Chang et al., 2017; Bai et al., 2018). An alternative approach is to stick with the basic vanilla RNN architecture instead, but to constrain its dynamics in some way so as to eliminate or reduce the degeneracies that otherwise afflict the optimization landscape. Previous proposals belonging to this second category generally boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period (Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016). The basic idea behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve distances and norms, which enables them to deal effectively with the vanishing/exploding gradients problem. However, this idea ignores the crucial effects of non-linearity and noise. Orthogonal transformations no longer preserve distances and norms in the presence of a non-linearity, suggesting that alternative transformations might be better suited to non-linear networks (this point was noted by Pennington et al. (2017) and Chen et al. (2018) before, where isometric initializations that take the non-linearity into account were proposed). 
Similarly, in the presence of noise, norm preservation itself ceases to be the ideal objective. One must instead maximize the signal-to-noise ratio (SNR) of the propagated signal. In neural networks, noise comes in both through the stochasticity of the stochastic gradient descent (SGD) algorithm and sometimes also through direct noise injection for regularization purposes, as in dropout (Srivastava et al., 2014). Previous work has shown that even in a simple linear setting, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure (Ganguli et al., 2008). Motivated by these observations, in this paper, we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Recall that a normal matrix is a matrix with an orthonormal set of eigenvectors, whereas a non-normal matrix does not have an orthonormal set of eigenvectors. This property allows non-normal systems to display interesting transient behaviors that are not available in normal systems. This kind of transient behavior, specifically a particular kind of transient amplification of the signal in certain non-normal systems, underlies their superior memory properties (Ganguli et al., 2008), as will be discussed further below. Our empirical results show that non-normal vanilla RNNs significantly outperform their orthogonal counterparts in a diverse range of benchmarks.1 2 BACKGROUND 2.1 MEMORY IN LINEAR RECURRENT NETWORKS WITH NOISE Ganguli et al. (2008) studied memory properties of linear recurrent networks injected with a scalar temporal signal $s_t$ and noise $z_t$: $h_t = W h_{t-1} + v s_t + z_t$ (1). The noise is assumed to be i.i.d. with $z_t \sim \mathcal{N}(0, I)$. Ganguli et al. (2008) then analyzed the Fisher memory matrix (FMM) of this system, defined as: $J_{kl}(s_{\leq t}) = \big\langle -\partial^2 \log p(h_t \mid s_{\leq t}) / \partial s_{t-k}\, \partial s_{t-l} \big\rangle_{p(h_t \mid s_{\leq t})}$ (2). For linear networks with Gaussian noise, it is easy to show that $J_{kl}(s_{\leq t})$ is, in fact, independent of the past signal history $s_{\leq t}$. Ganguli et al. (2008) specifically analyzed the diagonal of the FMM, $J(k) \equiv J_{kk}$, which can be written explicitly as: $J(k) = v^\top (W^k)^\top C^{-1} W^k v$ (3), where $C = \sum_{k=0}^{\infty} W^k (W^k)^\top$ is the noise covariance matrix, and the norm of $W^k v$ can be roughly thought of as representing the signal strength. The total Fisher memory is the sum of $J(k)$ over all past time steps $k$: $J_{\mathrm{tot}} = \sum_{k=0}^{\infty} J(k)$ (4). Intuitively, $J(k)$ measures the information contained in the current state of the system, $h_t$, about a signal that entered the system $k$ time steps ago, $s_{t-k}$. $J_{\mathrm{tot}}$ is then a measure of the total information contained in the current state of the system about the entire past signal history, $s_{\leq t}$. The main result in Ganguli et al. (2008) shows that $J_{\mathrm{tot}} = 1$ for all normal matrices $W$ (including all orthogonal matrices), whereas in general $J_{\mathrm{tot}} \leq N$, where $N$ is the network size. Remarkably, the memory upper bound can be achieved by certain highly non-normal systems and several examples are explicitly given in Ganguli et al. (2008). Two of those examples are illustrated in Figure 1a (right): a uni-directional “chain” network and a chain network with feedback. In the chain network, the recurrent connectivity is given by $W_{ij} = \alpha\,\delta_{j,i-1}$ and in the chain with feedback network, it is given by $W_{ij} = \alpha\,\delta_{j,i-1} + \beta\,\delta_{j,i+1}$, where $\alpha$ and $\beta$ are the feedforward and feedback connection weights, respectively (here $\delta$ denotes the Kronecker delta function).
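The quantities in Equations (2)-(4) are easy to evaluate numerically. The sketch below (our own code; the network size, matrix scales, and truncation horizon are illustrative choices, not taken from the paper) estimates J(k) and J_tot for a scaled random orthogonal network and for a chain with the signal fed at the source neuron, reproducing the qualitative gap between J_tot ~ 1 and J_tot growing with N.

import numpy as np

def fisher_memory(W, v, horizon=2000):
    # J(k) = (W^k v)^T C^{-1} (W^k v), with C = sum_k W^k (W^k)^T truncated at `horizon` terms.
    n = W.shape[0]
    C = np.zeros((n, n))
    Wk = np.eye(n)
    for _ in range(horizon):
        C += Wk @ Wk.T
        Wk = W @ Wk
    Cinv = np.linalg.inv(C)
    J = np.empty(horizon)
    u = v.astype(float).copy()
    for k in range(horizon):
        J[k] = u @ Cinv @ u
        u = W @ u
    return J, J.sum()

n = 100
rng = np.random.default_rng(0)

Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
W_orth = 0.99 * Q                        # scaled random orthogonal (a normal matrix)
v_orth = rng.standard_normal(n)
v_orth /= np.linalg.norm(v_orth)

alpha = 1.05                             # the chain is nilpotent (W^N = 0), so alpha > 1 is still stable
W_chain = alpha * np.diag(np.ones(n - 1), k=-1)
v_chain = np.zeros(n)
v_chain[0] = 1.0                         # feed the signal at the source neuron

_, Jtot_orth = fisher_memory(W_orth, v_orth)
_, Jtot_chain = fisher_memory(W_chain, v_chain)
print(f"J_tot, scaled orthogonal: {Jtot_orth:6.2f}")   # close to 1, as for any normal matrix
print(f"J_tot, chain:             {Jtot_chain:6.2f}")  # grows with the network size N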
In addition, in order to achieve optimal memory, the signal must be fed at the source neuron in these networks, i.e. v = [1, 0, 0, . . . , 0]>. Figure 1b compares the Fisher memory curves, J(k), of these non-normal networks with the Fisher memory curves of two example normal networks, namely recurrent networks with identity or random orthogonal connectivity matrices. The two non-normal networks have extensive memory capacity, i.e. Jtot ∼ O(N), whereas for the normal examples, Jtot = 1. The crucial property that enables extensive memory in non-normal networks is transient amplification: after the signal enters the network, it is amplified supralinearly for a time of length O(N) before it eventually dies out (Figure 1c). This kind of transient amplification is not possible in normal networks. 1Code available at: https://github.com/eminorhan/nonnormal-init 2.2 A TOY NON-LINEAR EXAMPLE: NON-LINEARITY AND NOISE INDUCE SIMILAR EFFECTS The preceding analysis by Ganguli et al. (2008) is exact in linear networks. Analysis becomes more difficult in the presence of a non-linearity. However, we now demonstrate that the non-normal networks shown in Figure 1a have advantages that extend beyond the linear case. The advantages in the non-linear case are due to reduced interference in these non-normal networks between signals entering the network at different time points in the past. To demonstrate this with a simple example, we will ignore the effect of noise for now and consider the effect of non-linearity on the linear decodability of past signals from the current network activity. We thus consider deterministic non-linear networks of the form (see Appendix A for additional details): ht = f(Wht−1 + vst) (5) and ask how well we can linearly decode a signal that entered the network k time steps ago, st−k, from the current activity of the network, ht. Figure 2c compares the decoding performance in a non-linear orthogonal network with the decoding performance in the non-linear chain network. Just as in the linear case with noise (Figure 2b), the chain network outperforms the orthogonal network. To understand intuitively why this is the case, consider a chain network with Wij = δj,i−1 and v = [1, 0, 0, . . . , 0]>. In this model, the responses of the N neurons after N time steps (at t = N ) are given by f(sN ), f(f(sN−1)), ..., f(f(. . . f(s1) . . .)), respectively, starting from the source neuron. Although the non-linearity f(·) makes perfect linear decoding of the past signal st−k impossible, one may still imagine being able to decode the past signal with reasonable accuracy as long as f(·) is not “too non-linear”. A similar intuition holds for the chain network with feedback as well, as long as the feedforward connection weight, α, is sufficiently stronger than the feedback connection strength, β. A condition like this must already be satisfied if the network is to maintain its optimal memory properties and also be dynamically stable at the same time (Ganguli et al., 2008). In normal networks, however, linear decoding is further degraded by interference from signals entering the network at different time points, in addition to the degradation caused by the nonlinearity. This is easiest to see in the identity network (a similar argument holds for the random orthogonal example too), where the responses of the neurons after N time steps are identically given by f(f(. . . f(f(s1)+s2) . . .)+sN ), if one assumes v = [1, 1, 1, . . . , 1]>. 
Linear decoding is harder in this case, because a signal st−k is both distorted by multiple steps of non-linearity and also mixed with signals entering at other time points. 3 RESULTS 3.1 EXPERIMENTS Because assuming an a priori fixed non-normal structure for an RNN runs the risk of being too restrictive, in this paper, we instead explore the promise of non-normal networks as initializers for RNNs. Throughout the paper, we will be primarily comparing the four RNN architectures schematically depicted in Figure 1a as initializers: two of them normal networks (identity and random orthogonal) and the other two non-normal networks (chain and chain with feedback), the last two being motivated by their optimal memory properties in the linear case, as reviewed above. 3.1.1 COPY, ADDITION, PERMUTED SEQUENTIAL MNIST Copy, addition, and permuted sequential MNIST tasks were commonly used as benchmarks in previous RNN studies (Arjovsky et al., 2016; Bai et al., 2018; Chang et al., 2017; Hochreiter & Schmidhuber, 1997; Le et al., 2015; Wisdom et al., 2016). We now briefly describe each of these tasks. Copy task: The input is a sequence of integers of length T . The first 10 integers in the sequence define the target subsequence that is to be copied and consist of integers between 1 and 8 (inclusive). The next T − 21 integers are set to 0. The integer after that is set to 9, which acts as the cue indicating that the model should start copying the target subsequence. The final 10 integers are set to 0. The output sequence that the model is trained to reproduce consists of T − 10 0s followed by the target subsequence from the input that is to be copied. To make sure that the task requires a sufficiently long memory capacity, we used a large sequence length, T = 500, comparable to the largest sequence length considered in Arjovsky et al. (2016) for the same task. Addition task: The input consists of two sequences of length T . The first one is a sequence of random numbers drawn uniformly from the interval [0, 1]. The second sequence is an indicator sequence with 1s at exactly two positions and 0s everywhere else. The positions of the two 1s indicate the positions of the numbers to be added in the first sequence. The target output is the sum of the two corresponding numbers. The position of the first 1 is drawn uniformly from the first half of the sequence and the position of the second 1 is drawn uniformly from the second half of the sequence. Again, to ensure that the task requires a sufficiently long memory capacity, we chose T = 750, which is the same as the largest sequence length considered in Arjovsky et al. (2016) for the same task. Permuted sequential MNIST (psMNIST): This is a sequential version of the standard MNIST benchmark where the pixels are fed to the model one pixel at a time. To make the task hard enough, we used the permuted version of the sequential MNIST task where a fixed random permutation is applied to the pixels to eliminate any spatial structure before they are fed into the model. We used vanilla RNNs with N = 25 recurrent units in the psMNIST task and N = 100 recurrent units in the copy and addition tasks. We used the elu nonlinearity for the copy and the psMNIST tasks (Clevert et al., 2016), and the relu nonlinearity for the addition problem (because relu proved to be more natural for remembering positive numbers). Batch size was 16 in all tasks. As mentioned above, the scaled identity and the scaled random orthogonal networks constituted the normal initializers. 
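For concreteness, here is a sketch of data generators matching the copy and addition task descriptions above. This is our own code; details such as whether the copy symbols are one-hot encoded are left to the reader, so it is not necessarily identical to the authors' pipeline.

import numpy as np

def copy_task(batch, T=500, rng=None):
    # Copy task: remember a length-10 symbol sequence across roughly T time steps.
    rng = rng or np.random.default_rng()
    x = np.zeros((batch, T), dtype=np.int64)
    y = np.zeros((batch, T), dtype=np.int64)
    target = rng.integers(1, 9, size=(batch, 10))   # symbols 1..8
    x[:, :10] = target
    x[:, T - 11] = 9                                # cue to start copying
    y[:, -10:] = target                             # first T - 10 outputs are 0
    return x, y

def addition_task(batch, T=750, rng=None):
    # Addition task: add the two numbers marked by the indicator sequence.
    rng = rng or np.random.default_rng()
    values = rng.uniform(0.0, 1.0, size=(batch, T))
    indicator = np.zeros((batch, T))
    i1 = rng.integers(0, T // 2, size=batch)        # first marker in the first half
    i2 = rng.integers(T // 2, T, size=batch)        # second marker in the second half
    indicator[np.arange(batch), i1] = 1.0
    indicator[np.arange(batch), i2] = 1.0
    x = np.stack([values, indicator], axis=-1)      # shape (batch, T, 2)
    y = values[np.arange(batch), i1] + values[np.arange(batch), i2]
    return x, y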
In the scaled identity initializer, the recurrent connectivity matrix was initialized as $W = \lambda I$ and the input matrix $V$ was initialized as $V_{ij} \sim \mathcal{N}(0, 0.9/\sqrt{N})$. In the random orthogonal initializer, the recurrent connectivity matrix was initialized as $W = \lambda Q$, where $Q$ is a random dense orthogonal matrix, and the input matrix $V$ was initialized in the same way as in the identity initializer. The feedforward chain and the chain with feedback networks constituted our non-normal initializers. In the chain initializer, the recurrent connectivity matrix was initialized as $W_{ij} = \alpha\,\delta_{j,i-1}$ and the input matrix $V$ was initialized as $V = 0.9\, I_{N \times d}$, where $I_{N \times d}$ denotes the $N \times d$-dimensional identity matrix. Note that this choice of $V$ is a natural generalization of the source-injecting input vector that was found to be optimal in the linear case with scalar signals to multi-dimensional inputs (as long as $N \gg d$). In the chain with feedback initializer, the recurrent connectivity matrix was initialized as $W_{ij} = 0.99\,\delta_{j,i-1} + \beta\,\delta_{j,i+1}$ and the input matrix $V$ was initialized in the same way as in the chain initializer. We used the rmsprop optimizer for all models, which we found to be the best method for this set of tasks. The learning rate of the optimizer was a hyper-parameter which we tuned separately for each model and each task. The following learning rates were considered in the hyper-parameter search: $8 \times 10^{-4}$, $5 \times 10^{-4}$, $3 \times 10^{-4}$, $10^{-4}$, $8 \times 10^{-5}$, $5 \times 10^{-5}$, $3 \times 10^{-5}$, $10^{-5}$, $8 \times 10^{-6}$, $5 \times 10^{-6}$, $3 \times 10^{-6}$. We ran each model on each task 6 times using the integers from 1 to 6 as random seeds. In addition, the following model-specific hyper-parameters were searched over for each task: Chain: feedforward connection weight, $\alpha \in \{0.99, 1.00, 1.01, 1.02, 1.03, 1.04, 1.05\}$. Chain with feedback: feedback connection weight, $\beta \in \{0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07\}$. Scaled identity: scale, $\lambda \in \{0.01, 0.96, 0.99, 1.0, 1.01, 1.02, 1.03, 1.04, 1.05\}$. Random orthogonal: scale, $\lambda \in \{0.01, 0.96, 0.99, 1.0, 1.01, 1.02, 1.03, 1.04, 1.05\}$. This yields a total of $7 \times 11 \times 6 = 462$ different runs for each experiment in the non-normal models and a total of $9 \times 11 \times 6 = 594$ different runs in the normal models. Note that we ran more extensive hyper-parameter searches for the normal models than for the non-normal models in this set of tasks. Figure 3a-c shows the validation losses for each model with the best hyper-parameter settings. The non-normal initializers generally outperform the normal initializers. Figure 3d-f shows for each model the number of “successful” runs that converged to a validation loss below a criterion level (which we set to be 50% of the loss for a baseline random model). The chain model outperformed all other models by this measure (despite having a smaller total number of runs than the normal models). In the copy task, for example, none of the runs for the normal models was able to achieve the criterion level, whereas 46 out of 462 runs for the chain model and 11 out of 462 runs for the feedback chain model reached the criterion loss (see Appendices B & C for further results and discussion).
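The four initializers just described can be written compactly as follows (a sketch with our own function names; the lambda, alpha, and beta values are meant to be drawn from the grids listed above).

import numpy as np

def identity_init(n_hid, n_in, lam=1.01, rng=None):
    # W = lambda * I,  V_ij ~ N(0, 0.9 / sqrt(N))
    rng = rng or np.random.default_rng()
    return lam * np.eye(n_hid), rng.normal(0.0, 0.9 / np.sqrt(n_hid), (n_hid, n_in))

def orthogonal_init(n_hid, n_in, lam=1.01, rng=None):
    # W = lambda * Q with Q a random dense orthogonal matrix; same V as above.
    rng = rng or np.random.default_rng()
    Q, _ = np.linalg.qr(rng.standard_normal((n_hid, n_hid)))
    return lam * Q, rng.normal(0.0, 0.9 / np.sqrt(n_hid), (n_hid, n_in))

def chain_init(n_hid, n_in, alpha=1.01, beta=0.0):
    # W_ij = alpha * delta_{j,i-1} (+ beta * delta_{j,i+1}),  V = 0.9 * I_{N x d}
    W = alpha * np.diag(np.ones(n_hid - 1), -1)
    if beta:
        W = W + beta * np.diag(np.ones(n_hid - 1), 1)
    return W, 0.9 * np.eye(n_hid, n_in)

# e.g. a feedback-chain initialization of the kind searched over in Section 3.1.1:
W, V = chain_init(100, 2, alpha=0.99, beta=0.03)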
For the language modeling experiments in this subsection, we used the code base provided by Salesforce Research (Merity et al., 2018a;b): https://github.com/salesforce/awd-lstm-lm. We refer the reader to Merity et al. (2018a;b) for a more detailed description of the benchmarks. For the experiments in this subsection, we generally preserved the model setup used in Merity et al. (2018a;b), except for the following differences: 1) We replaced the gated RNN architectures (LSTMs and QRNNs) used in Merity et al. (2018a;b) with vanilla RNNs; 2) We observed that vanilla RNNs require weaker regularization than gated RNN architectures. Therefore, in the word-level PTB task, we set all dropout rates to 0.1. In the character-level PTB task, all dropout rates were set to 0.1, except for dropoute, which was set to 0. In the enwik8 benchmark, all dropout rates were set to 0; 3) We trained the word-level PTB models for 60 epochs, the character-level PTB models for 500 epochs, and the enwik8 models for 35 epochs. We compared the same four models described in the previous subsection. As in Merity et al. (2018a), we used the Adam optimizer and thus only optimized the $\alpha$, $\beta$, $\lambda$ hyper-parameters for the experiments in this subsection. For the hyper-parameter $\alpha$ in the chain model and the hyper-parameter $\lambda$ in the scaled identity and random orthogonal models, we searched over 21 values uniformly spaced between 0.05 and 1.05 (inclusive); whereas for the chain with feedback model, we set the feedforward connection weight, $\alpha$, to the optimal value it had in the chain model and searched over 21 $\beta$ values uniformly spaced between 0.01 and 0.21 (inclusive). In addition, we repeated each experiment 3 times using different random seeds, yielding a total of 63 runs for each model and each benchmark. The results are shown in Figure 4 and in Table 1. Figure 4 shows the validation loss over the course of training in units of bits per character (bpc). Table 1 reports the test losses at the end of training. The non-normal models outperform the normal models on the word-level and character-level PTB benchmarks. The differences between the models are less clear on the enwik8 benchmark. However, in terms of the test loss, the non-normal feedback chain model outperforms the other models on all three benchmarks (Table 1). We note that the vanilla RNN models perform significantly worse than the gated RNN architectures considered in Merity et al. (2018a;b). We conjecture that this is because gated architectures are generally better at modeling contextual dependencies, hence they have inductive biases better suited to language modeling tasks. The primary benefit of non-normal dynamics, on the other hand, is enabling a longer memory capacity. Below, we will discuss whether non-normal dynamics can be used in gated RNN architectures to improve performance as well. 3.2 HIDDEN FEEDFORWARD STRUCTURES IN TRAINED RNNS We observed that training made vanilla RNNs initialized with orthogonal recurrent connectivity matrices non-normal. We quantified the non-normality of the trained recurrent connectivity matrices using a measure introduced by Henrici (1962): $d(W) \equiv \sqrt{\|W\|_F^2 - \sum_i |\lambda_i|^2}$, where $\|\cdot\|_F$ denotes the Frobenius norm and $\lambda_i$ is the $i$-th eigenvalue of $W$. This measure equals 0 for all normal matrices and is positive for non-normal matrices. We found that $d(W)$ became positive for all successfully trained RNNs initialized with orthogonal recurrent connectivity matrices.
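The Henrici measure is straightforward to compute. The sketch below (our own helper) also evaluates it on the two kinds of matrices considered in this paper; in exact arithmetic the pure chain, whose eigenvalues are all zero, has d(W) equal to its Frobenius norm, making it an extreme example of non-normality.

import numpy as np

def henrici_index(W):
    # d(W) = sqrt(||W||_F^2 - sum_i |lambda_i|^2); 0 for normal matrices, > 0 otherwise.
    eigvals = np.linalg.eigvals(W)
    d2 = np.linalg.norm(W, "fro") ** 2 - np.sum(np.abs(eigvals) ** 2)
    return np.sqrt(max(d2, 0.0))        # clip tiny negative values caused by round-off

n = 25
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))
chain = np.diag(np.ones(n - 1), -1)
print(henrici_index(Q))        # ~0: orthogonal matrices are normal
print(henrici_index(chain))    # large (close to sqrt(n - 1)): the chain is strongly non-normal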
Table 2 reports the aggregate statistics of d(W) for orthogonally initialized RNNs trained on the toy benchmarks. Although increased non-normality in trained RNNs is an interesting observation, the Henrici index, by itself, does not tell us what structural features in trained RNNs contribute to this increased non-normality. Given the benefits of chain-like feedforward non-normal structures in RNNs for improved memory, we hypothesized that training might have installed hidden chain-like feedforward structures in trained RNNs and that these feedforward structures were responsible for their increased non-normality. To uncover these hidden feedforward structures, we performed an analysis suggested by Rajan et al. (2016). In this analysis, we first injected a unit pulse of input to the network at the beginning of the trial and let the network evolve for 100 time steps afterwards according to its recurrent dynamics with no direct input. We then ordered the recurrent units by the time of their peak activity (using a small amount of jitter to break potential ties between units) and plotted the mean recurrent connection weights, Wij , as a function of the order difference between two units, i− j. Positive i− j values correspond to connections from earlier peaking units to later peaking units, and vice versa for negative i− j values. In trained RNNs, the mean recurrent weight profile as a function of i− j had an asymmetric peak, with connections in the “forward” direction being, on average, stronger than those in the opposite direction. Figure 5 shows examples with orthogonally initialized RNNs trained on the addition and the permuted sequential MNIST tasks. Note that for a purely feedforward chain, the weight profile would have a single peak at i− j = 1 and would be zero elsewhere. Although the weight profiles for trained RNNs are not this extreme, the prominent asymmetric bump with a peak at a positive i− j value indicates a hidden chain-like feedforward structure in these networks. 3.3 DO BENEFITS OF NON-NORMAL DYNAMICS EXTEND TO GATED RNN ARCHITECTURES? So far, we have only considered vanilla RNNs. An important question is whether the benefits of non-normal dynamics demonstrated above for vanilla RNNs also extend to gated RNN architectures like LSTMs or GRUs (Hochreiter & Schmidhuber, 1997; Cho et al., 2014). Gated RNN architectures have better inductive biases than vanilla RNNs in many practical tasks of interest such as language modeling (e.g. see Table 1 for a comparison of vanilla RNN architectures with an LSTM architecture of similar size in the language modeling benchmarks), thus it would be practically very useful if their performance could be improved through an inductive bias for non-normal dynamics. To address this question, we treated the input, forget, output, and update gates of the LSTM architecture as analogous to vanilla RNNs and initialized the recurrent and input matrices inside these gates in the same way as in the chain or the orthogonal initialization of vanilla RNNs above. We also compared these with a more standard initialization scheme where all the weights were drawn from a uniform distribution U(− √ k, √ k) where k is the reciprocal of the hidden layer size (labeled plain in Table 3). This is the default initializer for the LSTM weight matrices in PyTorch: https://pytorch.org/docs/stable/nn.html#lstm. We compared these initializers in the language modeling benchmarks. 
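For readers who want to reproduce the pulse-injection analysis of Section 3.2 above, the following sketch (our own implementation; the choice of non-linearity, the use of the absolute peak response, and the way the unit pulse is routed through the input weights are our simplifications) computes the mean recurrent weight as a function of the order difference i - j.

import numpy as np

def weight_profile(W, V, f=np.tanh, T=100, rng=None):
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    # Drive the network with a unit input pulse at t = 0, then let it evolve with no input.
    h = f(V.sum(axis=1))
    acts = np.empty((T, n))
    for t in range(T):
        h = f(W @ h)
        acts[t] = h
    # Order units by peak time (a small jitter breaks ties), then relabel W accordingly.
    peak_t = np.argmax(np.abs(acts), axis=0) + 1e-6 * rng.standard_normal(n)
    order = np.argsort(peak_t)
    W_ord = W[np.ix_(order, order)]
    # Mean recurrent weight as a function of i - j in the new ordering.
    diffs = np.subtract.outer(np.arange(n), np.arange(n))
    return {d: W_ord[diffs == d].mean() for d in range(-(n - 1), n)}

# profile = weight_profile(W_trained, V_trained)   # W_trained, V_trained: hypothetical trained weights
# A bump peaking at a small positive i - j indicates a hidden chain-like feedforward motif.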
The chain initializer did not perform better than the orthogonal initializer (Table 3), suggesting that non-normal dynamics in gated RNN architectures may not be as helpful as it is in vanilla RNNs. In hindsight, this is not too surprising, because our initial motivation for introducing non-normal dynamics heavily relied on the vanilla RNN architecture and gated RNNs can be dynamically very different from vanilla RNNs. When we looked at the trained LSTM weight matrices more closely, we found that, although still non-normal, the recurrent weight matrices inside the input, forget, and output gates (i.e. the sigmoid gates) did not have the same signatures of hidden chain-like feedforward structures observed in vanilla RNNs. Specifically, the weight profiles in the LSTM recurrent weight matrices inside these three gates did not display the asymmetric bump characteristic of a prominent chain-like feedforward structure, but were instead approximately monotonic functions of i − j (Figure 6a-c), suggesting a qualitatively different kind of dynamics where the individual units are more persistent over time. The recurrent weight matrix inside the update gate (the tanh gate), on the other hand, did display the signature of a hidden chain-like feedforward structure (Figure 6d). When we incorporated these two structures in different gates of the LSTMs, by using a chain initializer for the update gate and a monotonically increasing recurrent weight profile for the other gates (labeled mixed in Table 3), the resulting initializer outperformed the other initializers on character-level PTB and enwik8 tasks. 4 DISCUSSION Motivated by their optimal memory properties in a simplified linear setting (Ganguli et al., 2008), in this paper, we investigated the potential benefits of certain highly non-normal chain-like RNN architectures in capturing long-term dependencies in sequential tasks. Our results demonstrate an advantage for such non-normal architectures as initializers for vanilla RNNs, compared to the commonly used orthogonal initializers. We further found evidence for the induction of such chainlike feedforward structures in trained vanilla RNNs even when these RNNs were initialized with orthogonal recurrent connectivity matrices. The benefits of these chain-like non-normal initializers do not directly carry over to more complex, gated RNN architectures such as LSTMs and GRUs. In some important practical problems such as language modeling, the gains from using these kinds of gated architectures seem to far outweigh the gains obtained from the non-normal initializers in vanilla RNNs (see Table 1). However, we also uncovered important regularities in trained LSTM weight matrices, namely that the recurrent weight profiles of the input, forget, and output gates (the sigmoid gates) in trained LSTMs display a monotonically increasing pattern, whereas the recurrent matrix inside the update gate (the tanh gate) displays a chain-like feedforward structure similar to that observed in vanilla RNNs (Figure 6). We showed that these regularities can be exploited to improve the training and/or generalization performance of gated RNN architectures by introducing them as useful inductive biases. A concurrent work to ours also emphasized the importance of non-normal dynamics in RNNs (Kerg et al., 2019). The main difference between Kerg et al. 
(2019) and our work is that we explicitly introduce sequential motifs in RNNs at initialization as a useful inductive bias for improved long-term memory (motivated by the optimal memory properties of these motifs in simpler cases), whereas their approach does not constrain the shape of the non-normal part of the recurrent connectivity matrix, hence does not utilize sequential non-normal dynamics as an inductive bias. In some of their tasks, Kerg et al. (2019) also uncovered a feedforward, chain-like motif in trained vanilla RNNs similar to the one reported in this paper (Figure 5). There is a close connection between the identity initialization of RNNs (Le et al., 2015) and the widely used identity skip connections (or residual connections) in deep feedforward networks (He et al., 2016). Given the superior performance of chain-like non-normal initializers over the identity initialization demonstrated in the context of vanilla RNNs in this paper, it could be interesting to look for similar chain-like non-normal architectural motifs that could be used in deep feedforward networks in place of the identity skip connections. A DETAILS AND EXTENSIONS OF THE LINEAR DECODING EXPERIMENTS This appendix contains the details of the linear decoding experiments in section 2.2 and reports the results of additional linear decoding experiments. The experiments in section 2.2 compare the signal propagation properties of vanilla RNNs with either random orthogonal or chain connectivity matrices. In both cases, the overall scale of the recurrent connectivity matrices is set to 1.01. The input weight vector is v = [1, 0, 0, . . . , 0]> for the chain model and vi ∼ N (0, 1/ √ n) for the random orthogonal model (thus the overall scales of both the feedforward and the recurrent inputs are identical in the two models). The RNNs themselves are not trained in these experiments. At each time point, an i.i.d. random scalar signal st ∼ N (0, 1) is fed into the network as input (Equation 5). We simulate 250 trials for each model and ask how well we can linearly decode the signal at the first time step, s1, from the recurrent activities at time step 100, h100. We do this by linearly regressing s1 on h100 (using the 250 simulated samples) and report the R2 value for the linear regression in Figure 2. In simulations with noise (Figure 2b), an additional i.i.d. random noise term, zit ∼ N (0, σ), is added to each recurrent neuron i at each time step t. The standard deviation of the noise, σ, is set to 0.1 in the experiments shown in Figure 2b. To show that the results are not sensitive to the noise scale, we ran additional experiments with lower (σ = 0.01) and higher (σ = 1) levels of noise (Figure 7). In both cases, the chain network still outperforms the orthogonal network. Note that these “linear + noise” experiments satisfy the conditions of the analytical theory in Ganguli et al. (2008), so these results are as expected from the theory. As mentioned in the main text, the “non-linear + no noise” experiments reported in Figure 2c used the elu non-linearity. To show that the results are not sensitive to the choice of the non-linearity, we also ran additional experiments with tanh and relu non-linearities (Figure 8). As with the elu non-linearity, the chain network outperforms the orthogonal network with the tanh and relu non-linearities as well, suggesting that the results are not sensitive to the choice of the non-linearity. 
B THE EFFECT OF THE FEEDBACK STRENGTH PARAMETER (β) IN THE CHAIN WITH FEEDBACK MODEL In this appendix, we consider the effect of the feedback strength parameter, β, for the chain with feedback model in the context of the experiments reported in section 3.1.1. We focus on the psMNIST task specifically, because this is the only task where the feedback chain model converges to a low loss solution for a sufficiently large number of hyper-parameter configurations. For the addition and copy tasks, there are not enough successful hyper-parameter configurations to draw reliable inferences about the effect of β (see Figure 3d-f). Figure 9 shows the validation loss at the end of training as a function of β in the psMNIST task. In this figure, we considered all networks that achieved a validation loss lower than the random baseline model (i.e. < log(10) ≈ 2.3) at the end of training (an overwhelming majority of the networks satisfied this criterion). Figure 9 shows that the final validation loss is a monotonically increasing function of β in this task, suggesting that large feedback strengths are harmful for the model performance. C COMPARISON WITH PREVIOUS MODELS In this appendix, we compare our results with those obtained by previous models, focusing specifically on the experiments in section 3.1.1 (because the tasks in this section are commonly used as RNN benchmarks). uRNN: We first note that our copy and addition tasks use the largest sequence lengths considered in Arjovsky et al. (2016) for the same tasks (T = 500 for the copy task and T = 750 for the addition task). Hence our results are directly comparable to those reported in Arjovsky et al. (2016) (the random baselines shown by the dashed lines in Figure 3a-b are identical to those in Arjovsky et al. (2016) for the same conditions). The unitary evolution RNN (uRNN) model proposed in Arjovsky et al. (2016) comfortably learns the copy-500 task (with 128 recurrent units), quickly reaching a near-zero loss (see their Figure 1, bottom right); however, it struggles with the addition task, barely reaching the half-baseline criterion even with 512 recurrent units (see their Figure 2, bottom right). This difference in the behavior of the uRNN model in the copy and addition tasks is predicted by Henaff et al. (2016), where it is shown that random orthogonal and near-identity recurrent connectivity matrices have much better inductive biases in the copy and addition tasks, respectively. Because of its parametrization, uRNN behaves more similarly to a random orthogonal RNN than a near-identity RNN. In contrast, our non-normal RNNs, especially the chain model, comfortably clear the half-baseline criterion both in copy-500 and addition-750 tasks (with 100 recurrent units), quickly achieving very small loss values in both tasks with the optimal hyper-parameter configurations (Figure 3a-b). Note that this is despite the fact that our models use fewer recurrent units than the uRNN model in Arjovsky et al. (2016) (100 vs. 128 or 512 recurrent units). nnRNN: Kerg et al. (2019) report results for the copy (T = 200) and psMNIST tasks only. They have not reported training success for longer variants of the copy task (specifically for T = 500). Kerg et al. (2019) also have not reported successful training in the addition task, whereas our non-normal RNNs showed training success both in copy-500 and addition-750 tasks (Figure 3a-b). 
We conclude that our non-normal initializers for vanilla RNNs perform comparably to, or better than, the uRNN and nnRNN models in standard long-term memory benchmarks. One of the biggest strengths of our proposal compared to these previous models is its much greater simplicity. Both uRNN and nnRNN require a complete re-parametrization of the vanilla RNN model (nnRNN even requires a novel optimization method). Our method, on the other hand, proposes much simpler, easy-to-implement, plug-and-play type sequential initializers that keep the standard parametrization of RNNs intact. critical RNN: Chen et al. (2018) note that the conditions for dynamical isometry in vanilla RNNs are identical to those in fully-connected feed-forward networks studied in Pennington et al. (2017). Pennington et al. (2017), in turn, note that dynamical isometry is not achievable exactly in networks with relu activation, but it is achievable in networks with tanh activation, where it essentially boils down to initializing the weights to small values. Pennington et al. (2017) give a specific example of a dynamically isometric tanh network (with n = 400, σw = 1.05, and σb = 2.01× 10−5). We set up a similar tanh RNN model, but were not able to train it successfully in the copy or addition tasks. Again, as with the nnRNN results, this shows the challenging nature of these two tasks and suggests that dynamical isometry may not be enough for successful training in these tasks. A possible reason for this is that although critical initialization takes the non-linearity into account, it still does not take the noise into account (i.e. it is not guaranteed to maximize the SNR). LSTM, tanh RNN: Consistent with the results in Arjovsky et al. (2016), we were not able to successfully train LSTMs or vanilla RNNs with tanh non-linearity in the challenging copy-500 and addition-750 tasks. Therefore, these models were not included as baselines in section 3.1.1.
1. What is the main contribution of the paper regarding non-normal initializations for training vanilla RNNs? 2. What are the strengths of the proposed approach, particularly in its exploration of non-normal RNNs and its demonstration of outperformance over orthogonal counterparts on certain tasks? 3. What are some concerns or limitations of the paper, such as the small size of the plots and the lack of comparison against other architectures like LSTMs and Transformers? 4. Are there any questions or areas for further exploration that the reviewer has identified, such as the stability of the non-normal RNNs or the potential for more interesting initialization methods for gated architectures?
Review
Review The focus of this paper is on exploring non-normal initializations for training vanilla RNNs on sequential tasks. The authors show on 3 different synthetic tasks, and a real-world LM task, that non-normal initializations of vanilla RNNs outperform their orthogonal counterparts when particular forms of initialization are considered. Although the results do not outperform the gated counterparts on the sequence tasks, the authors present an interesting exploration of initializing non-normal RNNs that outperform their orthogonal counterparts. It is good to see this line of work being explored as an alternative to exploring more complex architectures with many more parameters than necessary for the task. Strengths: 1. The paper explores non-normal RNNs and demonstrates on 3 synthetic tasks - copy, addition and pMNIST - how, with careful initialization, the proposed approach outperforms its orthogonal initialization counterpart. This line of experimentation is interesting as it potentially opens the door for more expressive modeling of sequential tasks by expanding the solution space of the weight matrices being learnt, i.e. orthogonal matrices are a special case. 2. The authors do a great job of motivating the paper, and the explanation is clear and easily understandable. The toy simulations in Section 2.2 really help drive the reasoning behind why chain initialization improves over orthogonal initialization. 3. Based on the insight from trained RNNs, where the trained weights exhibit a chain-like structure, the authors attempt to modify the LSTM gate initializations as well. However, they do not see any specific gain by doing so, and moreover they show analyses that demonstrate that the LSTM gates do not learn these chain-like structures. However, they do have insight into the regularities of these learnt weights, which could potentially open the door for more interesting initialization methods for training such gated architectures. Issues to be addressed in the paper: 1. The plots are quite small and hard to follow. Can the authors enlarge these so they span the entire page? Also, for pMNIST it would be good to provide accuracy scores as well as a function of the training epochs. Finally, it would be good to include a comparison against LSTMs (and even Transformer networks) so it is easier for the reader to see where these approaches stack up against architecture changes. 2. The authors are missing a reference to this work - http://proceedings.mlr.press/v48/henaff16.pdf - which provides empirical analysis for the 3 synthetic tasks to test the ability of vanilla RNNs to solve long-span sequential tasks. 3. What about the stability of these non-normal RNNs? For example, if we perturb the inputs to the training for the LM task, how much variance do we see in the performance of these models?
ICLR
Title Improved memory in recurrent neural networks with sequential non-normal dynamics Abstract Training recurrent neural networks (RNNs) is a hard problem due to degeneracies in the optimization landscape, a problem also known as vanishing/exploding gradients. Short of designing new RNN architectures, previous methods for dealing with this problem usually boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period. The basic motivation behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve (Euclidean) norms and effectively deal with vanishing/exploding gradients. However, this ignores the crucial effects of non-linearity and noise. In the presence of a non-linearity, orthogonal transformations no longer preserve norms, suggesting that alternative transformations might be better suited to non-linear networks. Moreover, in the presence of noise, norm preservation itself ceases to be the ideal objective. A more sensible objective is maximizing the signalto-noise ratio (SNR) of the propagated signal instead. Previous work has shown that in the linear case, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure. Motivated by this finding, here we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Our experimental results show that non-normal RNNs outperform their orthogonal counterparts in a diverse range of benchmarks. We also find evidence for increased non-normality and hidden chain-like feedforward motifs in trained RNNs initialized with orthogonal recurrent connectivity matrices. 1 INTRODUCTION Modeling long-term dependencies with recurrent neural networks (RNNs) is a hard problem due to degeneracies inherent in the optimization landscapes of these models, a problem also known as the vanishing/exploding gradients problem (Hochreiter, 1991; Bengio et al., 1994). One approach to addressing this problem has been designing new RNN architectures that are less prone to such difficulties, hence are better able to capture long-term dependencies in sequential data (Hochreiter & Schmidhuber, 1997; Cho et al., 2014; Chang et al., 2017; Bai et al., 2018). An alternative approach is to stick with the basic vanilla RNN architecture instead, but to constrain its dynamics in some way so as to eliminate or reduce the degeneracies that otherwise afflict the optimization landscape. Previous proposals belonging to this second category generally boil down to orthogonalization of the recurrent dynamics, either at initialization or during the entire training period (Le et al., 2015; Arjovsky et al., 2016; Wisdom et al., 2016). The basic idea behind these methods is that orthogonal transformations are isometries of the Euclidean space, hence they preserve distances and norms, which enables them to deal effectively with the vanishing/exploding gradients problem. However, this idea ignores the crucial effects of non-linearity and noise. Orthogonal transformations no longer preserve distances and norms in the presence of a non-linearity, suggesting that alternative transformations might be better suited to non-linear networks (this point was noted by Pennington et al. (2017) and Chen et al. (2018) before, where isometric initializations that take the non-linearity into account were proposed). 
Similarly, in the presence of noise, norm preservation itself ceases to be the ideal objective. One must instead maximize the signal-to-noise ratio (SNR) of the propagated signal. In neural networks, noise comes in both through the stochasticity of the stochastic gradient descent (SGD) algorithm and sometimes also through direct noise injection for regularization purposes, as in dropout (Srivastava et al., 2014). Previous work has shown that even in a simple linear setting, recurrent networks that maximize the SNR display strongly non-normal, sequential dynamics and orthogonal networks are highly suboptimal by this measure (Ganguli et al., 2008). Motivated by these observations, in this paper, we investigate the potential of non-normal RNNs, i.e. RNNs with a non-normal recurrent connectivity matrix, in sequential processing tasks. Recall that a normal matrix is a matrix with an orthonormal set of eigenvectors, whereas a non-normal matrix does not have an orthonormal set of eigenvectors. This property allows non-normal systems to display interesting transient behaviors that are not available in normal systems. This kind of transient behavior, specifically a particular kind of transient amplification of the signal in certain non-normal systems, underlies their superior memory properties (Ganguli et al., 2008), as will be discussed further below. Our empirical results show that non-normal vanilla RNNs significantly outperform their orthogonal counterparts in a diverse range of benchmarks.1 2 BACKGROUND 2.1 MEMORY IN LINEAR RECURRENT NETWORKS WITH NOISE Ganguli et al. (2008) studied memory properties of linear recurrent networks injected with a scalar temporal signal $s_t$ and noise $z_t$: $h_t = W h_{t-1} + v s_t + z_t$ (1). The noise is assumed to be i.i.d. with $z_t \sim \mathcal{N}(0, I)$. Ganguli et al. (2008) then analyzed the Fisher memory matrix (FMM) of this system, defined as: $J_{kl}(s_{\leq t}) = \big\langle -\partial^2 \log p(h_t \mid s_{\leq t}) / \partial s_{t-k}\, \partial s_{t-l} \big\rangle_{p(h_t \mid s_{\leq t})}$ (2). For linear networks with Gaussian noise, it is easy to show that $J_{kl}(s_{\leq t})$ is, in fact, independent of the past signal history $s_{\leq t}$. Ganguli et al. (2008) specifically analyzed the diagonal of the FMM, $J(k) \equiv J_{kk}$, which can be written explicitly as: $J(k) = v^\top (W^k)^\top C^{-1} W^k v$ (3), where $C = \sum_{k=0}^{\infty} W^k (W^k)^\top$ is the noise covariance matrix, and the norm of $W^k v$ can be roughly thought of as representing the signal strength. The total Fisher memory is the sum of $J(k)$ over all past time steps $k$: $J_{\mathrm{tot}} = \sum_{k=0}^{\infty} J(k)$ (4). Intuitively, $J(k)$ measures the information contained in the current state of the system, $h_t$, about a signal that entered the system $k$ time steps ago, $s_{t-k}$. $J_{\mathrm{tot}}$ is then a measure of the total information contained in the current state of the system about the entire past signal history, $s_{\leq t}$. The main result in Ganguli et al. (2008) shows that $J_{\mathrm{tot}} = 1$ for all normal matrices $W$ (including all orthogonal matrices), whereas in general $J_{\mathrm{tot}} \leq N$, where $N$ is the network size. Remarkably, the memory upper bound can be achieved by certain highly non-normal systems and several examples are explicitly given in Ganguli et al. (2008). Two of those examples are illustrated in Figure 1a (right): a uni-directional “chain” network and a chain network with feedback. In the chain network, the recurrent connectivity is given by $W_{ij} = \alpha\,\delta_{j,i-1}$ and in the chain with feedback network, it is given by $W_{ij} = \alpha\,\delta_{j,i-1} + \beta\,\delta_{j,i+1}$, where $\alpha$ and $\beta$ are the feedforward and feedback connection weights, respectively (here $\delta$ denotes the Kronecker delta function).
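As a quick numerical illustration of the transient amplification that gives these networks their extensive memory (cf. Figure 1c), the sketch below (our own code, with an arbitrary alpha slightly above 1) propagates a unit pulse fed at the source neuron and tracks |W^k v|: the chain amplifies the pulse for roughly N steps before it dies out, whereas a scaled orthogonal matrix can never amplify it.

import numpy as np

n, alpha = 100, 1.05
W_chain = alpha * np.diag(np.ones(n - 1), k=-1)          # nilpotent: W^n = 0
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((n, n)))
W_orth = 0.99 * Q

for name, W in [("chain", W_chain), ("orthogonal", W_orth)]:
    u = np.zeros(n)
    u[0] = 1.0                                           # unit pulse at the source neuron
    norms = []
    for _ in range(150):
        u = W @ u
        norms.append(np.linalg.norm(u))
    k_peak = int(np.argmax(norms)) + 1
    print(f"{name:10s} peak |W^k v| = {max(norms):7.2f} at k = {k_peak}")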
In addition, in order to achieve optimal memory, the signal must be fed at the source neuron in these networks, i.e. v = [1, 0, 0, . . . , 0]>. Figure 1b compares the Fisher memory curves, J(k), of these non-normal networks with the Fisher memory curves of two example normal networks, namely recurrent networks with identity or random orthogonal connectivity matrices. The two non-normal networks have extensive memory capacity, i.e. Jtot ∼ O(N), whereas for the normal examples, Jtot = 1. The crucial property that enables extensive memory in non-normal networks is transient amplification: after the signal enters the network, it is amplified supralinearly for a time of length O(N) before it eventually dies out (Figure 1c). This kind of transient amplification is not possible in normal networks. 1Code available at: https://github.com/eminorhan/nonnormal-init 2.2 A TOY NON-LINEAR EXAMPLE: NON-LINEARITY AND NOISE INDUCE SIMILAR EFFECTS The preceding analysis by Ganguli et al. (2008) is exact in linear networks. Analysis becomes more difficult in the presence of a non-linearity. However, we now demonstrate that the non-normal networks shown in Figure 1a have advantages that extend beyond the linear case. The advantages in the non-linear case are due to reduced interference in these non-normal networks between signals entering the network at different time points in the past. To demonstrate this with a simple example, we will ignore the effect of noise for now and consider the effect of non-linearity on the linear decodability of past signals from the current network activity. We thus consider deterministic non-linear networks of the form (see Appendix A for additional details): ht = f(Wht−1 + vst) (5) and ask how well we can linearly decode a signal that entered the network k time steps ago, st−k, from the current activity of the network, ht. Figure 2c compares the decoding performance in a non-linear orthogonal network with the decoding performance in the non-linear chain network. Just as in the linear case with noise (Figure 2b), the chain network outperforms the orthogonal network. To understand intuitively why this is the case, consider a chain network with Wij = δj,i−1 and v = [1, 0, 0, . . . , 0]>. In this model, the responses of the N neurons after N time steps (at t = N ) are given by f(sN ), f(f(sN−1)), ..., f(f(. . . f(s1) . . .)), respectively, starting from the source neuron. Although the non-linearity f(·) makes perfect linear decoding of the past signal st−k impossible, one may still imagine being able to decode the past signal with reasonable accuracy as long as f(·) is not “too non-linear”. A similar intuition holds for the chain network with feedback as well, as long as the feedforward connection weight, α, is sufficiently stronger than the feedback connection strength, β. A condition like this must already be satisfied if the network is to maintain its optimal memory properties and also be dynamically stable at the same time (Ganguli et al., 2008). In normal networks, however, linear decoding is further degraded by interference from signals entering the network at different time points, in addition to the degradation caused by the nonlinearity. This is easiest to see in the identity network (a similar argument holds for the random orthogonal example too), where the responses of the neurons after N time steps are identically given by f(f(. . . f(f(s1)+s2) . . .)+sN ), if one assumes v = [1, 1, 1, . . . , 1]>. 
Linear decoding is harder in this case, because a signal st−k is both distorted by multiple steps of non-linearity and also mixed with signals entering at other time points. 3 RESULTS 3.1 EXPERIMENTS Because assuming an a priori fixed non-normal structure for an RNN runs the risk of being too restrictive, in this paper, we instead explore the promise of non-normal networks as initializers for RNNs. Throughout the paper, we will be primarily comparing the four RNN architectures schematically depicted in Figure 1a as initializers: two of them normal networks (identity and random orthogonal) and the other two non-normal networks (chain and chain with feedback), the last two being motivated by their optimal memory properties in the linear case, as reviewed above. 3.1.1 COPY, ADDITION, PERMUTED SEQUENTIAL MNIST Copy, addition, and permuted sequential MNIST tasks were commonly used as benchmarks in previous RNN studies (Arjovsky et al., 2016; Bai et al., 2018; Chang et al., 2017; Hochreiter & Schmidhuber, 1997; Le et al., 2015; Wisdom et al., 2016). We now briefly describe each of these tasks. Copy task: The input is a sequence of integers of length T . The first 10 integers in the sequence define the target subsequence that is to be copied and consist of integers between 1 and 8 (inclusive). The next T − 21 integers are set to 0. The integer after that is set to 9, which acts as the cue indicating that the model should start copying the target subsequence. The final 10 integers are set to 0. The output sequence that the model is trained to reproduce consists of T − 10 0s followed by the target subsequence from the input that is to be copied. To make sure that the task requires a sufficiently long memory capacity, we used a large sequence length, T = 500, comparable to the largest sequence length considered in Arjovsky et al. (2016) for the same task. Addition task: The input consists of two sequences of length T . The first one is a sequence of random numbers drawn uniformly from the interval [0, 1]. The second sequence is an indicator sequence with 1s at exactly two positions and 0s everywhere else. The positions of the two 1s indicate the positions of the numbers to be added in the first sequence. The target output is the sum of the two corresponding numbers. The position of the first 1 is drawn uniformly from the first half of the sequence and the position of the second 1 is drawn uniformly from the second half of the sequence. Again, to ensure that the task requires a sufficiently long memory capacity, we chose T = 750, which is the same as the largest sequence length considered in Arjovsky et al. (2016) for the same task. Permuted sequential MNIST (psMNIST): This is a sequential version of the standard MNIST benchmark where the pixels are fed to the model one pixel at a time. To make the task hard enough, we used the permuted version of the sequential MNIST task where a fixed random permutation is applied to the pixels to eliminate any spatial structure before they are fed into the model. We used vanilla RNNs with N = 25 recurrent units in the psMNIST task and N = 100 recurrent units in the copy and addition tasks. We used the elu nonlinearity for the copy and the psMNIST tasks (Clevert et al., 2016), and the relu nonlinearity for the addition problem (because relu proved to be more natural for remembering positive numbers). Batch size was 16 in all tasks. As mentioned above, the scaled identity and the scaled random orthogonal networks constituted the normal initializers. 
In the scaled identity initializer, the recurrent connectivity matrix was initialized as $W = \lambda I$ and the input matrix $V$ was initialized as $V_{ij} \sim \mathcal{N}(0, 0.9/\sqrt{N})$. In the random orthogonal initializer, the recurrent connectivity matrix was initialized as $W = \lambda Q$, where $Q$ is a random dense orthogonal matrix, and the input matrix $V$ was initialized in the same way as in the identity initializer. The feedforward chain and the chain with feedback networks constituted our non-normal initializers. In the chain initializer, the recurrent connectivity matrix was initialized as $W_{ij} = \alpha\, \delta_{j,i-1}$ and the input matrix was initialized as $V = 0.9\, I_{N \times d}$, where $I_{N \times d}$ denotes the $N \times d$-dimensional identity matrix. Note that this choice of $V$ is a natural generalization of the source-injecting input vector that was found to be optimal in the linear case with scalar signals to multi-dimensional inputs (as long as $N \gg d$). In the chain with feedback initializer, the recurrent connectivity matrix was initialized as $W_{ij} = 0.99\, \delta_{j,i-1} + \beta\, \delta_{j,i+1}$ and the input matrix $V$ was initialized in the same way as in the chain initializer. We used the rmsprop optimizer for all models, which we found to be the best method for this set of tasks. The learning rate of the optimizer was a hyperparameter which we tuned separately for each model and each task. The following learning rates were considered in the hyper-parameter search: $8 \times 10^{-4}$, $5 \times 10^{-4}$, $3 \times 10^{-4}$, $10^{-4}$, $8 \times 10^{-5}$, $5 \times 10^{-5}$, $3 \times 10^{-5}$, $10^{-5}$, $8 \times 10^{-6}$, $5 \times 10^{-6}$, $3 \times 10^{-6}$. We ran each model on each task 6 times using the integers from 1 to 6 as random seeds. In addition, the following model-specific hyperparameters were searched over for each task: Chain: feedforward connection weight, $\alpha \in \{0.99, 1.00, 1.01, 1.02, 1.03, 1.04, 1.05\}$. Chain with feedback: feedback connection weight, $\beta \in \{0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07\}$. Scaled identity: scale, $\lambda \in \{0.01, 0.96, 0.99, 1.0, 1.01, 1.02, 1.03, 1.04, 1.05\}$. Random orthogonal: scale, $\lambda \in \{0.01, 0.96, 0.99, 1.0, 1.01, 1.02, 1.03, 1.04, 1.05\}$. This yields a total of $7 \times 11 \times 6 = 462$ different runs for each experiment for the non-normal models and a total of $9 \times 11 \times 6 = 594$ different runs for the normal models. Note that we ran more extensive hyper-parameter searches for the normal models than for the non-normal models in this set of tasks. Figure 3a-c shows the validation losses for each model with the best hyper-parameter settings. The non-normal initializers generally outperform the normal initializers. Figure 3d-f shows, for each model, the number of "successful" runs that converged to a validation loss below a criterion level (which we set to be 50% of the loss for a baseline random model). The chain model outperformed all other models by this measure (despite having a smaller total number of runs than the normal models). In the copy task, for example, none of the runs for the normal models was able to achieve the criterion level, whereas 46 out of 462 runs for the chain model and 11 out of 462 runs for the feedback chain model reached the criterion loss (see Appendices B & C for further results and discussion). 3.1.2 LANGUAGE MODELING EXPERIMENTS To investigate whether the benefits of non-normal initializers extend to more realistic problems, we conducted experiments with three standard language modeling tasks: word-level Penn Treebank (PTB), character-level PTB, and character-level enwik8 benchmarks.
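Before turning to the language modeling setup, the four initialization schemes compared in Section 3.1.1 can be written out as small functions that return the recurrent matrix W and input matrix V of a vanilla RNN. This is an illustrative sketch of the stated formulas, not released code; the default α, β, λ values shown are arbitrary (in the experiments they are tuned over the grids listed above), and the commented lines show one possible way of copying the matrices into a PyTorch nn.RNN.

```python
import numpy as np

def identity_init(N, d, lam=1.0, rng=None):
    """Scaled identity: W = lam * I, V_ij ~ N(0, 0.9 / sqrt(N))."""
    rng = np.random.default_rng(0) if rng is None else rng
    return lam * np.eye(N), rng.normal(0.0, 0.9 / np.sqrt(N), size=(N, d))

def orthogonal_init(N, d, lam=1.0, rng=None):
    """Scaled random orthogonal: W = lam * Q, V as in the identity initializer."""
    rng = np.random.default_rng(0) if rng is None else rng
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
    return lam * Q, rng.normal(0.0, 0.9 / np.sqrt(N), size=(N, d))

def chain_init(N, d, alpha=1.01):
    """Feedforward chain: W_ij = alpha * delta_{j,i-1}, V = 0.9 * I_{N x d}."""
    return alpha * np.diag(np.ones(N - 1), k=-1), 0.9 * np.eye(N, d)

def chain_feedback_init(N, d, beta=0.02):
    """Chain with feedback: W_ij = 0.99 * delta_{j,i-1} + beta * delta_{j,i+1}."""
    W = 0.99 * np.diag(np.ones(N - 1), k=-1) + beta * np.diag(np.ones(N - 1), k=1)
    return W, 0.9 * np.eye(N, d)

# Example: copy the matrices into a framework RNN cell, e.g. a PyTorch nn.RNN with
#   rnn = torch.nn.RNN(input_size=d, hidden_size=N, nonlinearity='relu')
#   rnn.weight_hh_l0.data.copy_(torch.as_tensor(W, dtype=torch.float32))
#   rnn.weight_ih_l0.data.copy_(torch.as_tensor(V, dtype=torch.float32))
W, V = chain_init(N=100, d=2, alpha=1.01)
print(W.shape, V.shape)   # (100, 100) (100, 2)
```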
For the language modeling experiments in this subsection, we used the code base provided by Salesforce Research (Merity et al., 2018a;b): https://github.com/ salesforce/awd-lstm-lm. We refer the reader to Merity et al. (2018a;b) for a more detailed description of the benchmarks. For the experiments in this subsection, we generally preserved the model setup used in Merity et al. (2018a;b), except for the following differences: 1) We replaced the gated RNN architectures (LSTMs and QRNNs) used in Merity et al. (2018a;b) with vanilla RNNs; 2) We observed that vanilla RNNs require weaker regularization than gated RNN architectures. Therefore, in the word-level PTB task, we set all dropout rates to 0.1. In the character-level PTB task, all dropout rates except dropoute were set to 0.1, which was set to 0. In the enwik8 benchmark, all dropout rates were set to 0; 3) We trained the word-level PTB models for 60 epochs, the character-level PTB models for 500 epochs and the enwik8 models for 35 epochs. We compared the same four models described in the previous subsection. As in Merity et al. (2018a), we used the Adam optimizer and thus only optimized the α, β, λ hyper-parameters for the experiments in this subsection. For the hyper-parameter α in the chain model and the hyper-parameter λ in the scaled identity and random orthogonal models, we searched over 21 values uniformly spaced between 0.05 and 1.05 (inclusive); whereas for the chain with feedback model, we set the feedforward connection weight, α, to the optimal value it had in the chain model and searched over 21 β values uniformly spaced between 0.01 and 0.21 (inclusive). In addition, we repeated each experiment 3 times using different random seeds, yielding a total of 63 runs for each model and each benchmark. The results are shown in Figure 4 and in Table 1. Figure 4 shows the validation loss over the course of training in units of bits per character (bpc). Table 1 reports the test losses at the end of training. The non-normal models outperform the normal models on the word-level and character-level PTB benchmarks. The differences between the models are less clear on the enwik8 benchmark. However, in terms of the test loss, the non-normal feedback chain model outperforms the other models on all three benchmarks (Table 1). We note that the vanilla RNN models perform significantly worse than the gated RNN architectures considered in Merity et al. (2018a;b). We conjecture that this is because gated architectures are generally better at modeling contextual dependencies, hence they have inductive biases better suited to language modeling tasks. The primary benefit of non-normal dynamics, on the other hand, is enabling a longer memory capacity. Below, we will discuss whether non-normal dynamics can be used in gated RNN architectures to improve performance as well. 3.2 HIDDEN FEEDFORWARD STRUCTURES IN TRAINED RNNS We observed that training made vanilla RNNs initialized with orthogonal recurrent connectivity matrices non-normal. We quantified the non-normality of the trained recurrent connectivity matrices using a measure introduced by Henrici (1962): d(W) ≡ √ ‖W‖2F − ∑ i |λi|2, where ‖ · ‖F denotes the Frobenius norm and λi is the i-th eigenvalue of W. This measure equals 0 for all normal matrices and is positive for non-normal matrices. We found that d(W) became positive for all successfully trained RNNs initialized with orthogonal recurrent connectivity matrices. 
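The Henrici departure from normality is straightforward to compute. The short sketch below is ours (not the authors' code); it also checks the two facts used in the text, namely that d(W) vanishes for orthogonal matrices and is positive for the chain connectivity.

```python
import numpy as np

def henrici_index(W):
    """Henrici (1962) departure from normality:
    d(W) = sqrt( ||W||_F^2 - sum_i |lambda_i|^2 ), which is 0 iff W is normal."""
    eigvals = np.linalg.eigvals(W)
    value = np.linalg.norm(W, 'fro') ** 2 - np.sum(np.abs(eigvals) ** 2)
    return np.sqrt(max(value, 0.0))  # clip tiny negative values from round-off

rng = np.random.default_rng(0)
N = 100
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))     # orthogonal (normal) matrix
chain = 1.01 * np.diag(np.ones(N - 1), k=-1)         # non-normal chain connectivity

print("d(orthogonal) = %.2e" % henrici_index(Q))      # ~0
print("d(chain)      = %.2f" % henrici_index(chain))  # > 0
```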
Table 2 reports the aggregate statistics of d(W) for orthogonally initialized RNNs trained on the toy benchmarks. Although increased non-normality in trained RNNs is an interesting observation, the Henrici index, by itself, does not tell us what structural features in trained RNNs contribute to this increased non-normality. Given the benefits of chain-like feedforward non-normal structures in RNNs for improved memory, we hypothesized that training might have installed hidden chain-like feedforward structures in trained RNNs and that these feedforward structures were responsible for their increased non-normality. To uncover these hidden feedforward structures, we performed an analysis suggested by Rajan et al. (2016). In this analysis, we first injected a unit pulse of input to the network at the beginning of the trial and let the network evolve for 100 time steps afterwards according to its recurrent dynamics with no direct input. We then ordered the recurrent units by the time of their peak activity (using a small amount of jitter to break potential ties between units) and plotted the mean recurrent connection weights, Wij , as a function of the order difference between two units, i− j. Positive i− j values correspond to connections from earlier peaking units to later peaking units, and vice versa for negative i− j values. In trained RNNs, the mean recurrent weight profile as a function of i− j had an asymmetric peak, with connections in the “forward” direction being, on average, stronger than those in the opposite direction. Figure 5 shows examples with orthogonally initialized RNNs trained on the addition and the permuted sequential MNIST tasks. Note that for a purely feedforward chain, the weight profile would have a single peak at i− j = 1 and would be zero elsewhere. Although the weight profiles for trained RNNs are not this extreme, the prominent asymmetric bump with a peak at a positive i− j value indicates a hidden chain-like feedforward structure in these networks. 3.3 DO BENEFITS OF NON-NORMAL DYNAMICS EXTEND TO GATED RNN ARCHITECTURES? So far, we have only considered vanilla RNNs. An important question is whether the benefits of non-normal dynamics demonstrated above for vanilla RNNs also extend to gated RNN architectures like LSTMs or GRUs (Hochreiter & Schmidhuber, 1997; Cho et al., 2014). Gated RNN architectures have better inductive biases than vanilla RNNs in many practical tasks of interest such as language modeling (e.g. see Table 1 for a comparison of vanilla RNN architectures with an LSTM architecture of similar size in the language modeling benchmarks), thus it would be practically very useful if their performance could be improved through an inductive bias for non-normal dynamics. To address this question, we treated the input, forget, output, and update gates of the LSTM architecture as analogous to vanilla RNNs and initialized the recurrent and input matrices inside these gates in the same way as in the chain or the orthogonal initialization of vanilla RNNs above. We also compared these with a more standard initialization scheme where all the weights were drawn from a uniform distribution U(− √ k, √ k) where k is the reciprocal of the hidden layer size (labeled plain in Table 3). This is the default initializer for the LSTM weight matrices in PyTorch: https://pytorch.org/docs/stable/nn.html#lstm. We compared these initializers in the language modeling benchmarks. 
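Concretely, the gate-wise initialization described above can be sketched for a PyTorch LSTM as follows. We rely on the documented PyTorch layout in which weight_hh_l0 stacks the recurrent matrices of the input, forget, cell (update), and output gates along the first dimension; the helper names, layer sizes, and the particular α and λ values are illustrative choices of ours rather than the exact experimental configuration.

```python
import numpy as np
import torch

def chain_matrix(N, alpha=1.0):
    return alpha * np.diag(np.ones(N - 1), k=-1)

def orthogonal_matrix(N, lam=1.0, seed=0):
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
    return lam * Q

def init_lstm_gates(lstm, scheme="chain", alpha=1.0, lam=1.0):
    """Re-initialize the recurrent matrix of each LSTM gate (i, f, g, o) with the
    same chain or orthogonal structure used for vanilla RNNs."""
    N = lstm.hidden_size
    with torch.no_grad():
        for gate in range(4):  # PyTorch gate order: input, forget, cell, output
            if scheme == "chain":
                W = chain_matrix(N, alpha)
            elif scheme == "orthogonal":
                W = orthogonal_matrix(N, lam, seed=gate)
            else:
                continue  # "plain": keep PyTorch's default U(-sqrt(k), sqrt(k))
            lstm.weight_hh_l0[gate * N:(gate + 1) * N].copy_(
                torch.as_tensor(W, dtype=lstm.weight_hh_l0.dtype))
    return lstm

lstm = torch.nn.LSTM(input_size=128, hidden_size=256, batch_first=True)
init_lstm_gates(lstm, scheme="chain", alpha=1.0)
```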
The chain initializer did not perform better than the orthogonal initializer (Table 3), suggesting that non-normal dynamics in gated RNN architectures may not be as helpful as it is in vanilla RNNs. In hindsight, this is not too surprising, because our initial motivation for introducing non-normal dynamics heavily relied on the vanilla RNN architecture and gated RNNs can be dynamically very different from vanilla RNNs. When we looked at the trained LSTM weight matrices more closely, we found that, although still non-normal, the recurrent weight matrices inside the input, forget, and output gates (i.e. the sigmoid gates) did not have the same signatures of hidden chain-like feedforward structures observed in vanilla RNNs. Specifically, the weight profiles in the LSTM recurrent weight matrices inside these three gates did not display the asymmetric bump characteristic of a prominent chain-like feedforward structure, but were instead approximately monotonic functions of i − j (Figure 6a-c), suggesting a qualitatively different kind of dynamics where the individual units are more persistent over time. The recurrent weight matrix inside the update gate (the tanh gate), on the other hand, did display the signature of a hidden chain-like feedforward structure (Figure 6d). When we incorporated these two structures in different gates of the LSTMs, by using a chain initializer for the update gate and a monotonically increasing recurrent weight profile for the other gates (labeled mixed in Table 3), the resulting initializer outperformed the other initializers on character-level PTB and enwik8 tasks. 4 DISCUSSION Motivated by their optimal memory properties in a simplified linear setting (Ganguli et al., 2008), in this paper, we investigated the potential benefits of certain highly non-normal chain-like RNN architectures in capturing long-term dependencies in sequential tasks. Our results demonstrate an advantage for such non-normal architectures as initializers for vanilla RNNs, compared to the commonly used orthogonal initializers. We further found evidence for the induction of such chainlike feedforward structures in trained vanilla RNNs even when these RNNs were initialized with orthogonal recurrent connectivity matrices. The benefits of these chain-like non-normal initializers do not directly carry over to more complex, gated RNN architectures such as LSTMs and GRUs. In some important practical problems such as language modeling, the gains from using these kinds of gated architectures seem to far outweigh the gains obtained from the non-normal initializers in vanilla RNNs (see Table 1). However, we also uncovered important regularities in trained LSTM weight matrices, namely that the recurrent weight profiles of the input, forget, and output gates (the sigmoid gates) in trained LSTMs display a monotonically increasing pattern, whereas the recurrent matrix inside the update gate (the tanh gate) displays a chain-like feedforward structure similar to that observed in vanilla RNNs (Figure 6). We showed that these regularities can be exploited to improve the training and/or generalization performance of gated RNN architectures by introducing them as useful inductive biases. A concurrent work to ours also emphasized the importance of non-normal dynamics in RNNs (Kerg et al., 2019). The main difference between Kerg et al. 
(2019) and our work is that we explicitly introduce sequential motifs in RNNs at initialization as a useful inductive bias for improved long-term memory (motivated by the optimal memory properties of these motifs in simpler cases), whereas their approach does not constrain the shape of the non-normal part of the recurrent connectivity matrix, hence does not utilize sequential non-normal dynamics as an inductive bias. In some of their tasks, Kerg et al. (2019) also uncovered a feedforward, chain-like motif in trained vanilla RNNs similar to the one reported in this paper (Figure 5). There is a close connection between the identity initialization of RNNs (Le et al., 2015) and the widely used identity skip connections (or residual connections) in deep feedforward networks (He et al., 2016). Given the superior performance of chain-like non-normal initializers over the identity initialization demonstrated in the context of vanilla RNNs in this paper, it could be interesting to look for similar chain-like non-normal architectural motifs that could be used in deep feedforward networks in place of the identity skip connections. A DETAILS AND EXTENSIONS OF THE LINEAR DECODING EXPERIMENTS This appendix contains the details of the linear decoding experiments in section 2.2 and reports the results of additional linear decoding experiments. The experiments in section 2.2 compare the signal propagation properties of vanilla RNNs with either random orthogonal or chain connectivity matrices. In both cases, the overall scale of the recurrent connectivity matrices is set to 1.01. The input weight vector is v = [1, 0, 0, . . . , 0]> for the chain model and vi ∼ N (0, 1/ √ n) for the random orthogonal model (thus the overall scales of both the feedforward and the recurrent inputs are identical in the two models). The RNNs themselves are not trained in these experiments. At each time point, an i.i.d. random scalar signal st ∼ N (0, 1) is fed into the network as input (Equation 5). We simulate 250 trials for each model and ask how well we can linearly decode the signal at the first time step, s1, from the recurrent activities at time step 100, h100. We do this by linearly regressing s1 on h100 (using the 250 simulated samples) and report the R2 value for the linear regression in Figure 2. In simulations with noise (Figure 2b), an additional i.i.d. random noise term, zit ∼ N (0, σ), is added to each recurrent neuron i at each time step t. The standard deviation of the noise, σ, is set to 0.1 in the experiments shown in Figure 2b. To show that the results are not sensitive to the noise scale, we ran additional experiments with lower (σ = 0.01) and higher (σ = 1) levels of noise (Figure 7). In both cases, the chain network still outperforms the orthogonal network. Note that these “linear + noise” experiments satisfy the conditions of the analytical theory in Ganguli et al. (2008), so these results are as expected from the theory. As mentioned in the main text, the “non-linear + no noise” experiments reported in Figure 2c used the elu non-linearity. To show that the results are not sensitive to the choice of the non-linearity, we also ran additional experiments with tanh and relu non-linearities (Figure 8). As with the elu non-linearity, the chain network outperforms the orthogonal network with the tanh and relu non-linearities as well, suggesting that the results are not sensitive to the choice of the non-linearity. 
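The decoding protocol described above is compact enough to reproduce directly. The sketch below follows the stated setup (scale 1.01 for both connectivities, source injection for the chain, 250 trials, decoding s_1 from h_100 with a linear readout) using an elu non-linearity, but it is our reconstruction: seeds and exact R² values will differ from the figures, and setting noise_std > 0 adds the per-neuron Gaussian noise used in the noisy variants.

```python
import numpy as np

def elu(x):
    return np.where(x > 0, x, np.exp(np.minimum(x, 0)) - 1.0)

def simulate(W, v, T=100, trials=250, noise_std=0.0, seed=0):
    """Run h_t = f(W h_{t-1} + v s_t + z_t); return s_1 and h_T for each trial."""
    rng = np.random.default_rng(seed)
    N = W.shape[0]
    s1, hT = [], []
    for _ in range(trials):
        h = np.zeros(N)
        s_first = None
        for t in range(T):
            s = rng.standard_normal()
            if t == 0:
                s_first = s
            z = noise_std * rng.standard_normal(N)
            h = elu(W @ h + v * s + z)
        s1.append(s_first)
        hT.append(h)
    return np.array(s1), np.array(hT)

def r_squared(s1, hT):
    X = np.column_stack([hT, np.ones(len(hT))])      # linear readout with bias
    coef, *_ = np.linalg.lstsq(X, s1, rcond=None)
    resid = s1 - X @ coef
    return 1.0 - resid.var() / s1.var()

N = 100
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
W_orth, v_orth = 1.01 * Q, rng.standard_normal(N) / np.sqrt(N)
W_chain = 1.01 * np.diag(np.ones(N - 1), k=-1)
v_chain = np.zeros(N); v_chain[0] = 1.0

for name, W, v in [("orthogonal", W_orth, v_orth), ("chain", W_chain, v_chain)]:
    print(name, "R^2 = %.3f" % r_squared(*simulate(W, v)))
```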
B THE EFFECT OF THE FEEDBACK STRENGTH PARAMETER (β) IN THE CHAIN WITH FEEDBACK MODEL In this appendix, we consider the effect of the feedback strength parameter, β, for the chain with feedback model in the context of the experiments reported in section 3.1.1. We focus on the psMNIST task specifically, because this is the only task where the feedback chain model converges to a low loss solution for a sufficiently large number of hyper-parameter configurations. For the addition and copy tasks, there are not enough successful hyper-parameter configurations to draw reliable inferences about the effect of β (see Figure 3d-f). Figure 9 shows the validation loss at the end of training as a function of β in the psMNIST task. In this figure, we considered all networks that achieved a validation loss lower than the random baseline model (i.e. < log(10) ≈ 2.3) at the end of training (an overwhelming majority of the networks satisfied this criterion). Figure 9 shows that the final validation loss is a monotonically increasing function of β in this task, suggesting that large feedback strengths are harmful for the model performance. C COMPARISON WITH PREVIOUS MODELS In this appendix, we compare our results with those obtained by previous models, focusing specifically on the experiments in section 3.1.1 (because the tasks in this section are commonly used as RNN benchmarks). uRNN: We first note that our copy and addition tasks use the largest sequence lengths considered in Arjovsky et al. (2016) for the same tasks (T = 500 for the copy task and T = 750 for the addition task). Hence our results are directly comparable to those reported in Arjovsky et al. (2016) (the random baselines shown by the dashed lines in Figure 3a-b are identical to those in Arjovsky et al. (2016) for the same conditions). The unitary evolution RNN (uRNN) model proposed in Arjovsky et al. (2016) comfortably learns the copy-500 task (with 128 recurrent units), quickly reaching a near-zero loss (see their Figure 1, bottom right); however, it struggles with the addition task, barely reaching the half-baseline criterion even with 512 recurrent units (see their Figure 2, bottom right). This difference in the behavior of the uRNN model in the copy and addition tasks is predicted by Henaff et al. (2016), where it is shown that random orthogonal and near-identity recurrent connectivity matrices have much better inductive biases in the copy and addition tasks, respectively. Because of its parametrization, uRNN behaves more similarly to a random orthogonal RNN than a near-identity RNN. In contrast, our non-normal RNNs, especially the chain model, comfortably clear the half-baseline criterion both in copy-500 and addition-750 tasks (with 100 recurrent units), quickly achieving very small loss values in both tasks with the optimal hyper-parameter configurations (Figure 3a-b). Note that this is despite the fact that our models use fewer recurrent units than the uRNN model in Arjovsky et al. (2016) (100 vs. 128 or 512 recurrent units). nnRNN: Kerg et al. (2019) report results for the copy (T = 200) and psMNIST tasks only. They have not reported training success for longer variants of the copy task (specifically for T = 500). Kerg et al. (2019) also have not reported successful training in the addition task, whereas our non-normal RNNs showed training success both in copy-500 and addition-750 tasks (Figure 3a-b). 
We conclude that our non-normal initializers for vanilla RNNs perform comparably to, or better than, the uRNN and nnRNN models in standard long-term memory benchmarks. One of the biggest strengths of our proposal compared to these previous models is its much greater simplicity. Both uRNN and nnRNN require a complete re-parametrization of the vanilla RNN model (nnRNN even requires a novel optimization method). Our method, on the other hand, proposes much simpler, easy-to-implement, plug-and-play type sequential initializers that keep the standard parametrization of RNNs intact. critical RNN: Chen et al. (2018) note that the conditions for dynamical isometry in vanilla RNNs are identical to those in fully-connected feed-forward networks studied in Pennington et al. (2017). Pennington et al. (2017), in turn, note that dynamical isometry is not achievable exactly in networks with relu activation, but it is achievable in networks with tanh activation, where it essentially boils down to initializing the weights to small values. Pennington et al. (2017) give a specific example of a dynamically isometric tanh network (with n = 400, σw = 1.05, and σb = 2.01× 10−5). We set up a similar tanh RNN model, but were not able to train it successfully in the copy or addition tasks. Again, as with the nnRNN results, this shows the challenging nature of these two tasks and suggests that dynamical isometry may not be enough for successful training in these tasks. A possible reason for this is that although critical initialization takes the non-linearity into account, it still does not take the noise into account (i.e. it is not guaranteed to maximize the SNR). LSTM, tanh RNN: Consistent with the results in Arjovsky et al. (2016), we were not able to successfully train LSTMs or vanilla RNNs with tanh non-linearity in the challenging copy-500 and addition-750 tasks. Therefore, these models were not included as baselines in section 3.1.1.
1. What is the main contribution of the paper regarding non-normal alternatives in RNNs? 2. What are the limitations of the paper's novelty compared to prior works? 3. How does the reviewer assess the clarity and ease of following the paper's content? 4. What are the suggestions for improving the experimental analysis and comparisons with other works? 5. Do you have any questions or concerns about the paper's results and findings?
Review
Review Motivated by the sub-optimality of using an orthogonal recurrent matrix in RNNs with non-linearity and noise, the authors look into non-normal alternatives, in particular matrices with a chain-like structure, for preserving memory in RNNs. The authors compare normal and non-normal RNNs on several sequential benchmark datasets, and show that non-normal RNNs perform better than their normal counterparts. The paper is easy to follow. The novelty of the work is limited, though. The chain structure was introduced in Ganguli et al. (2008). The work studies the benefit of initializing recurrent weights in non-linear RNNs with these chain-like structures. Chen et al. (2018) already pointed out the limitation of orthogonal initialization alone for non-linear RNNs, and proposed closed-form initializations for RNNs with different activation functions. It would be worthwhile to include a comparison to that method. In the experiments of Section 2.3.1, it would be helpful to include a comparison of the performance of the chain-with-feedback model for different beta values, to confirm the intuition that a stronger feedback strength would negatively impact memory. The results in Section 2.3.2, Table 1, do not exactly align with the story. Do the authors have any intuition on why the chain with feedback performs better than the chain variant?
ICLR
Title Transferring Optimality Across Data Distributions via Homotopy Methods Abstract Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis. In this work, we propose a novel homotopy-based numerical method that can be used to gradually transfer optimized parameters of a neural network across different data distributions. This method generalizes the widely-used heuristic of pre-training parameters on one dataset and then fine-tuning them on another dataset of interest. We conduct a theoretical analysis showing that, under some assumptions, the homotopy method combined with Stochastic Gradient Descent (SGD) is guaranteed to converge in expectation to an rθ-optimal solution for a target task when started from an expected rθ-optimal solution on a source task. Empirical evaluations on a toy regression dataset and for transferring optimized parameters from MNIST to Fashion-MNIST and CIFAR-10 show substantial improvement of the numerical performance over random initialization and pre-training. 1 INTRODUCTION Homotopy methods (Allgower & Georg, 1980), also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis (e.g., Tran-Dinh et al. (2012), Zanelli et al. (2019)). The core idea consists in sequentially solving a series of parametric problems, starting from an easy-to-solve problem and progressively deforming it, via a homotopy function, to the target one. Homotopy methods are suitable to solve complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In addition, in contrast to state-of-the-art algorithms in deep learning (e.g., Bottou (2010), Duchi et al. (2011), Kingma & Ba (2015)), these methods often achieve global convergence guarantees by only exploiting local structures of the problem. Concepts, such as curriculum-learning and warm-starting, that are related to different degrees to homotopy methods, have been explored both in the deep learning (e.g., Gulcehre et al. (2016), Mobahi (2016), Gulcehre et al. (2017)) and in the reinforcement learning (e.g., Narvekar (2017)) communities. In this work, we propose a novel homotopy-based numerical method to transfer knowledge regarding the localization of a minimizer across different task distributions in deep learning. This method gradually tracks a neural network’s (close-to-)optimal parameters from one data distribution to another one via the homotopy method (Allgower & Georg, 1980) and can be interpreted as a generalization of the very common heuristic of fine-tuning a pre-trained network. After discussing related work (Section 2) and background on homotopy methods (Section 3), our contributions are as follows: 1. We provide a general theoretical analysis of the homotopy method when using SGD as an iterative solver, proving that under some local assumptions it tracks in expectation an rθ-optimal solution from the source task to the target task (Section 4). 2. We introduce homotopy functions for transferring optimality across data distributions for supervised regression and classification tasks (Section 5). 3. 
For a toy regression dataset and for transferring optimized parameters from MNIST to Fashion-MNIST and from MNIST to CIFAR-10, we show that our method obtains up to two orders of magnitude better numerical performance than random initialization and substantial improvement of the numerical performance over pre-training (Section 6). 2 RELATED WORK Deep neural networks have led to establish a new state-of-the-art in many applications. Despite their great success and the many theoretical studies that have been published in the last years (e.g., Balduzzi et al. (2017), Li et al. (2018), Feizi et al. (2018), Kunin et al. (2019)), training these deep models remains a big challenge. Various stochastic optimization algorithms (e.g., Duchi et al. (2011), Kingma & Ba (2015), Reddi et al. (2018)) and initialization heuristics (e.g., Daniely et al. (2016), Klambauer et al. (2017), Hanin & Rolnick (2018)) have been recently suggested in order to improve and speed up the training procedure. We now briefly discuss the state-of-the-art deep learning optimization techniques and initialization strategies that are most related with the proposed homotopy-based method, drawing connections with existing and ongoing research works in the field. Curriculum Learning. First introduced by Bengio et al. (2009) and then extended in different works (e.g., Graves et al. (2017), Weinshall et al. (2018), Hacohen & Weinshall (2019)), curriculum learning can also be listed among the optimization heuristics proposed to alleviate the complexity of solving high dimensional and non-convex problems. In particular, taking inspiration from the fact that humans and animals learn “better” when exposed to progressively more complex situations in an organized manner, curriculum learning techniques guide the training by starting with “easy-tolearn” samples and progressively introducing more “complex-to-learn” ones. This guided learning process can also be rephrased in a homotopy-like fashion (see Algorithm 1) as solving a sequence of optimization problems where the target training distribution gradually changes from considering only the “easy” examples to the full original training distribution. Meta-Learning and Transfer-Learning. Due to the massive amount of computational resources required by the development of modern deep learning applications, the community has started to explore the possibility of re-using learned parameters across different tasks, leading to the development of many new transfer-learning (e.g., Rohrbach et al. (2013), Wang & Schneider (2014), Cui et al. (2019)) and meta-learning (e.g., Schmidhuber (1987), Hochreiter et al. (2001), Finn et al. (2017), Zintgraf et al. (2019)) algorithms. The simplest way to transfer knowledge across different tasks consists in using warm-start initialization. This heuristic is amply used in computer vision applications, where it is also known as the fine-tuning technique (e.g., Krizhevsky et al. (2012), Yosinski et al. (2014), Reyes et al. (2015), Käding et al. (2016)). So far, there is no rigorous explanation of why and when fine-tuning works. However, numerous empirical evaluations on different benchmarks show that warm-starting the parameters of deep models often leads to faster convergence and better generalization than using random initialization. 3 BACKGROUND In this work, we will focus on solving problems of the form θ∗ ∈ arg min θ∈Rd 1 N N∑ j=1 `j(θ)︸ ︷︷ ︸ :=J(θ) , (1) where J : Rd → R is our target objective function and θ∗ is a minimizer. 
Problems as described in (1) arise, for instance, in classification and regression scenarios. In the following section we briefly review the main concepts of homotopy and continuation methods, on which the proposed technique for solving problem (1) is based. 3.1 HOMOTOPIC FUNCTIONS AND CONTINUATION METHODS FOR OPTIMIZATION Given two topological spaces Z and Y, a homotopy is a continuous deformation between two continuous functions g, f : Z → Y that fulfills certain properties. We can formalize this concept with the following definition. Definition 3.1. Let g, f : Z → Y be continuous maps on the topological spaces Z, Y. A homotopy from g to f is a continuous function H : Z × [0, 1] → Y such that $H(z, 0) = g(z)$ and $H(z, 1) = f(z)$ for all $z \in Z$ (2). If such a function H exists, g is said to be homotopic to f, and this relation is denoted by g ≃ f. It is straightforward to show that, for A ⊆ R^n a convex set, any two continuous maps g, f : Z → A are homotopic (see (Suciu, 2016) for a derivation). From this fact it follows that any two continuous and real functions are homotopic. See Figures 4a–4b in the appendix for a graphical representation of two different homotopy maps between the probability density functions of two Gaussian distributions, where λ ∈ [0, 1] denotes the homotopy parameter. See also Section A in the appendix for details on some of the main properties of homotopic functions. Continuation methods (also known as homotopy methods) are a widely used mathematical tool to solve complex non-convex optimization problems where no or only very limited prior knowledge regarding the localization of optimal solutions is available (see (Allgower & Georg, 1980) for a full characterization of continuation methods). The core idea of a homotopy approach consists in defining a homotopy function H(θ, λ) with λ ∈ [0, 1] such that H(θ, 0) = J_0(θ) is a trivial-to-optimize smooth map (or a smooth map for which a surrogate θ_0 of an optimal solution is available) and H(θ, 1) = J(θ) is our target objective function. Instead of directly addressing problem (1), we approximately and sequentially solve γ > 0 parametric optimization problems of the form $\theta_i^* \in \arg\min_{\theta \in \mathbb{R}^d} H(\theta, \lambda_i)$, where $H(\theta, \lambda_i) := \frac{1}{N} \sum_{j=1}^{N} \ell_j(\theta, \lambda_i)$ (3), for increasing values of the parameter λ_i for i = 1, . . . , γ, warm-starting each problem with the previously derived approximate solution. Conceptually, Algorithm 1 describes the basic steps of a general homotopy algorithm:

Algorithm 1 A Conceptual Homotopy Algorithm
1: θ_0 ≈ θ*_0 ∈ arg min_θ H(θ, 0)
2: γ > 0, γ ∈ Z
3: λ_0 = 0, Δλ = 1/γ
4: k > 0, k ∈ Z
5: for i = 1, . . . , γ do
6:     λ_i ← λ_{i−1} + Δλ
7:     θ_i ← ITERATIVESOLVER(θ_{i−1}, k, H(θ, λ_i))
8: return θ_γ

Under appropriate assumptions, if the increment Δλ is sufficiently small, then the iterative procedure in Algorithm 1 will converge to a neighborhood of an optimal solution of the target objective J that depends in some sense on the number of iterations k > 0 performed (Allgower & Georg, 1980). Many different variations of Algorithm 1 exist. In particular, different update schemes for the homotopy parameter can be adopted (e.g., geometric or sublinear rate of increase), various iterative solvers can be used under distinct and specific assumptions, and, finally, diverse levels of approximation for the solutions θ*_i can be considered, i.e. different values of k.
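For concreteness, Algorithm 1 with (stochastic) gradient descent as the inner ITERATIVESOLVER can be sketched as follows. The function names, the uniform Δλ = 1/γ schedule, and the toy quadratic objective are illustrative choices of ours (PyTorch is used only for automatic differentiation), not an implementation from the paper.

```python
import torch

def sgd_solver(theta, k, objective, lr=1e-3):
    """Run k SGD steps on objective(theta), starting from theta (the ITERATIVESOLVER)."""
    theta = theta.clone().requires_grad_(True)
    opt = torch.optim.SGD([theta], lr=lr)
    for _ in range(k):
        opt.zero_grad()
        objective(theta).backward()
        opt.step()
    return theta.detach()

def homotopy_solve(H, theta0, gamma=10, k=200, lr=1e-3):
    """Algorithm 1: warm-start each problem H(., lambda_i) at the previous solution."""
    theta = theta0
    for i in range(1, gamma + 1):
        lam = i / gamma                              # lambda_i = lambda_{i-1} + 1/gamma
        theta = sgd_solver(theta, k, lambda th: H(th, lam), lr=lr)
    return theta

# Toy usage: track the minimizer of a parametric quadratic from lambda = 0 to lambda = 1.
target = torch.tensor([3.0, -2.0])
H = lambda th, lam: ((th - lam * target) ** 2).sum()
theta_star = homotopy_solve(H, theta0=torch.zeros(2), k=50, lr=0.1)
print(theta_star)   # ~ tensor([ 3., -2.])
```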
Before going into the details of two concrete formulations of the conceptual homotopy method outlined in Algorithm 1 (see Section 5) when applied to transfer optimality knowledge in regression and classification scenarios, we provide a general theoretical analysis in a simplified setting. 4 THEORETICAL ANALYSIS In this section, we provide a local theoretical analysis of homotopy methods when Stochastic Gradient Descent (SGD) (Bottou, 2010) is used as iterative solver in Algorithm 1. The locality of the analysis consists in the definition of hyperspheres of radius B ≥ 0 around the optimal solutions of each homotopy problem H(θ, λi) where it is possible to exploit certain structures of the problem. In particular, we approximately and sequentially solve γ > 0 unconstrained optimization problems of the form θ∗i ∈ arg min θ∈Rd H(θ, λi) , ∀i = 1, . . . , γ , (4) where H(θ, λi) fulfills the assumptions described in Section 4.1 and λi ∈ [0, 1]. Let θi be an approximate solution of the problem associated with parameter λi derived by applying k > 0 iterations of SGD (in the limit, k = 1) and also the starting point for the problem associated with parameter λi+1, ∀i = 1, . . . , γ − 1. In addition, let θ0 denote an approximate solution for the source task, i.e. λ0 = 0, that is used as initial point for the problem associated with λ1. In this section we characterize the maximum allowed variation of the homotopy parameter in order for the method to able to track in expectation an rθ-optimal solution from source to target task. 4.1 ASSUMPTIONS We now expose the fundamental assumptions for our general local theoretical analysis on which all the derivations in Sections 4.2 and 4.3 rely. In addition, throughout the analysis the `-functions in (3) are implicitly assumed to be differentiable in θ. We start by giving the definition of the regions around the optimal solutions of the homotopy problems where the analysis is conducted. Definition 4.1. Given θ∗i and B ≥ 0, let BB,θ∗i be the following set of vectors BB,θ∗i := {θ s.t. ‖θ − θ ∗ i ‖ ≤ B} , ∀i = 0, . . . , γ . Assumption 4.2 (local L-smoothness). Assume that there exists a constant L > 0 such that ‖∇θH(θ̃, λi)−∇θH(θ̂, λi)‖ ≤ L‖θ̃ − θ̂‖ , ∀θ̃, θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ . (5) Corollary 4.2.1. If H is locally L-smooth in θ, then the following inequality holds H(θ∗i , λi)−H(θ̂, λi) ≤ − 1 2L ‖∇θH(θ̂, λi)‖2 , ∀θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ. (6) Proof. See Lemma 1.1 in (Gower, 2018) for a proof. Assumption 4.3 (local µ-strong convexity). Assume that there exists µ > 0 such that H(θ̃, λi) ≥ H(θ̂, λi) +∇θH(θ̂, λi)T (θ̃− θ̂) + µ 2 ‖θ̃− θ̂‖2 , ∀θ̃, θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ . (7) Assumption 4.4 (bounded `-derivative). Assume that there exists ν > 0 such that ‖∇θ`j(θ̂, λi)‖ ≤ ν , ∀θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ, ∀j = 1, . . . , N. (8) Assumption 4.5 (local bounded “variance”). Let g(θ̂, λi) denote an unbiased estimate of the gradient ∇θH(θ̂, λi). Assume that there exists a constant C ≥ 0 such that the following bound on the expected squared norm of the estimate of the gradient holds E [ ‖g(θ̂, λi)‖2 ] ≤ C2 , ∀θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ. (9) Remark 4.6. Assumption 4.5 is standard for proving error bounds on SGD iterates (see (Schmidt, 2014)). In addition, notice that, since E [ ‖g(θ̂, λi)‖2 ] = Var ( ‖g(θ̂, λi)‖ ) + E [ ‖g(θ̂, λi)‖ ]2 , the C constant is proportional to the variance and the squared expected value of the norm of the gradient estimate. 
Therefore, it decreases when the iterates approach a minimizer and by reducing the noise in the estimate of the gradient. In the limit (i.e. exact gradient and convergence to a minimizer), C = 0. Recall that θ∗(λi) ≡ θ∗i . Assumption 4.7 (strong regularity). Assume that there exists δ > 0 such that the following inequality holds ‖θ∗(λi+1)− θ∗(λi)‖ ≤ δ|λi+1 − λi| , ∀i = 0, . . . , γ − 1. Remark 4.8. Assumption 4.7 follows directly from the application of the Implicit Function Theorem by introducing some milder assumptions on the problem structure (see Lemma 2.1.8 in (Allgower & Georg, 1980)). 4.2 FUNDAMENTAL THEORETICAL PRELIMINARIES Before proceeding with the main theoretical contributions, we extend the existing results in the literature on global error bounds for the iterates of Stochastic Gradient Descent such that they can be applied when the underlying assumptions are only required to hold locally. The derived local error bounds for SGD iterates are used in Proposition 4.11 and Theorem 4.12. Proposition 4.9. Let θi ∈ BB,θ∗i be the starting point for the problem described in (3), and let θi := θi,0 and θi+1 := θi,k denote the iterate after k > 0 SGD steps, where an SGD step is defined as θi,k = θi,k−1 − αg(θi,k−1, λi) . Under Assumptions 4.2– 4.5 and by setting the batch size 0 < M ≤ N to a value such that (N−M) N ≤ (1−κd) 2αν B with κd = √ (1− αµ) and the learning rate α to a constant value such that 0 < α ≤ min ( 1 2µ , 1 L ) , the following error bound on the iterates holds E [ ‖θi+1 − θ∗i+1‖2 ] ≤ (1− 2αµ)k · E [ ‖θi − θ∗i+1‖2 ] + αC2 2µ . (10) Proof. See Section D in the appendix. Remark 4.10. The expectation in (10) is taken w.r.t. all the random variables, i.e. estimates of the gradients and initial point θ0, involved in the optimization procedure up to the current i+1 iteration of the algorithm. 4.3 MAIN THEORETICAL CONTRIBUTIONS Under the considered assumptions and by exploiting the previously derived results on local error bounds for SGD iterates, we show that, if the approximate solution θi for the problem with parameter λi is “sufficiently close” to a minimizer θ∗i in expectation, i.e. E [ ‖θi − θ∗i ‖2 ] ≤ r2θ , then, for a “sufficiently small” change in the homotopy parameter, the same vicinity to a minimizer θ∗i+1 is preserved in expectation for the approximate solution θi+1 of the problem with parameter λi+1, i.e. E [ ‖θi+1 − θ∗i+1‖2 ] ≤ r2θ . In particular, with Theorem 4.12 we characterize the maximum allowed variation of the homotopy parameter based on the properties of the parametric problems and the convergence characteristics of the adopted iterative solver, i.e. rate of convergence and number of iterations. First, in order to apply the results derived in Theorem 4.12, given a realization of θi ∈ BB,θ∗i , we have to derive the conditions on ‖θi − θ∗i ‖ such that ‖θi − θ∗i+1‖ ≤ B. In addition, we derive the necessary conditions in order to apply these results recursively across the iterations of Algorithm 1. Proposition 4.11. Let θi ∈ BB,θ∗i and |λi − λi+1| ≤ , with 0 ≤ ≤ B δ . If ‖θi − θ ∗ i ‖ ≤ B − δ , then ‖θi − θ∗i+1‖ ≤ B. Moreover, let κd = √ (1− αµ) and assume that (N −M) N ≤ (1− κ k d)(1− κd)B 2αν , and ≤ 1 δ ( (1− κkd)B − (N −M) N 2αν (1− κd) ) . Then, after applying k iterations of SGD, we obtain that ‖θi+1 − θ∗i+1‖ ≤ B − δ . Proof. See Section E.1 in the appendix. 
See Figure 9 in the appendix for a graphical representation of the results derived in Proposition 4.11, where the continuous and dashed lines are used to represent the circles of radius B and B − δ , respectively. Theorem 4.12. Consider Algorithm 1 with Stochastic Gradient Descent as solver and let k > 0 be the number of iterations, 0 < α ≤ min ( 1 2µ , 1 L ) be the step size and 0 < M ≤ N be the batch size such that (N −M) N ≤ (1− κ k d)(1− κd)B 2αν , where κd = √ (1− αµ). For θ0 ∈ BB−δ ,θ∗0 and rθ ∈ R such that r2θ ≥ αC2 2µ , (11) then, if E [ ‖θi − θ∗i ‖2 ] ≤ r2θ and |λi − λi+1| ≤ ̃, where ̃ := min {̄, } with ̄ = −rθ δ + 1 δ √ r2θ − αC2/2µ (1− 2αµ)k , (12) the following inequality holds E [ ‖θi+1 − θ∗i+1‖2 ] ≤ r2θ . (13) Proof. See Section E.2 in the appendix. The results derived in Theorem 4.12 show that the homotopy method used in combination with SGD allows to track in expectation an rθ-optimal solution across the parametric problems for “small enough” variations of the homotopy parameter, i.e. ∆λ ≤ ̃. Notice that rθ can potentially be smaller than B − δ and has to be bigger than the radius of the “noise-dominant” hypersphere centered at the minimizers, i.e. r2θ ≥ αC 2 2µ . In particular, by exploiting the local structure of the parametric problems we derive the maximum allowed variation of the homotopy parameter across the iterations of Algorithm 1. The derived upper bound is inversely proportional to the strong regularity constant δ and depends on the number of iterations k performed with SGD, such that the more iterations we perform on each parametric problem the more we are allowed to change the homotopy parameter. Finally, notice that these results can be applied recursively across the parametric problems. 5 TRANSFERRING OPTIMALITY VIA HOMOTOPY METHODS In this section we describe a possible application of homotopy methods to solve supervised regression and classification tasks. We address the case where deep neural networks are used as models. We start by introducing the problem framework of supervised learning and then we propose two different homotopy functions for the regression and classification scenarios, respectively. 5.1 PROBLEM FORMULATION Despite the generality of the proposed methodology, in this work we specifically address the supervised learning framework, and, in particular, when the predictive model is constituted by a deep neural network f(x; θ) parameterized by θ ∈ Rd. In the supervised learning scenario, independently from the type of task t, we typically dispose of a training set Dt consisting of N pairs of examples (xj , yj). The goal of the learning process is to find a value of θ that minimizes an objective function which measures the discrepancy between the outputs produced by the network ŷ = f(x; θ) and the target outputs y. In particular, the learning process consists in minimizing the following empirical objective function J(θ) := 1 N ∑ (xj ,yj)∈Dt `(yj , f(xj ; θ)) , (14) whose non-convexity originates from the high non-convexity of our model f . In the classical setting, J is chosen based on the KL divergence between the target data distribution Qx,y , with density qx,y = q(y|x)q(x), and the learned data distribution Px,y(θ), with density px,y = p(y|x; θ)q(x), where p(y|x; θ) is modeled via a neural network, (Goodfellow et al., 2016). With the appropriate approximations, this leads to the following form for the objective function J(θ) = 1 N ∑ (xj ,yj)∈Dt q(y|x) log q(y|x) p(y|x; θ) . 
(15) 5.2 HOMOTOPY FUNCTIONS ACROSS DATA DISTRIBUTIONS Finding a value of θ that attains a local minimum of the objective function in (14) is often a hard optimization task, given the high dimensionality and non-convexity of the problem. In addition, prior knowledge regarding the localization of the solutions is rarely available. The complexity of minimizing such functions also depends in some non-trivial way on the task distribution Qx,y that is addressed (e.g., Ionescu et al. (2016), Zendel et al. (2017)). For some tasks, convergence to a good approximate solution is achieved after a few epochs, while for other tasks, orders of magnitude more iterations are required to reach the neighborhood of a solution. In this perspective, different heuristics have been recently proposed in the attempt of re-using across different data distributions the prior knowledge gained from approximately solving the learning problem associated with a certain task. The question whether we could exploit easy-to-solve or already-solved tasks to speed up and improve the learning of unsolved hard tasks arises. The method we propose in this paper addresses this question and attempts to do so by using a rigorous and well-established mathematical framework, with the goal of speeding up the learning process in presence of hard-to-solve tasks. In the perspective of homotopy methods, this goal can be achieved under some assumptions by defining a homotopy transformation between starting and target tasks and by following the procedure described in Algorithm 1. Despite the flexibility and generality of the method, with this work we only focus on homotopy deformations across different task distributions, but similar transformations can be applied in numerous different manners that are also worth exploring, e.g., progressively modifying the architecture of the network or the weights of the objective function terms. Let s be the source task with training data Ds of pairs (xs, ys) ∼ Qxs,ys whose good approximate solution θ∗s for the minimization of the objective in (14) is available (or cheaply computable), and let t denote the target task with training data Dt of pairs (xt, yt) ∼ Qxt,yt whose conditional distribution we aim to learn. We propose two different homotopy deformations from task s to task t for regression and classification, respectively. 5.2.1 SUPERVISED REGRESSION In the supervised regression scenario, by modeling the density of the conditional learned distribution as p(y|x; θ) = N ( y; f(x, θ), σ2 I ) and using the approximate KL divergence objective function described in (15), we recover the mean squared error as minimization criterion. The proposed homotopy deformation is based on the following equations yλ|x = (1− λ) ys|x+ λ yt|x , (16) p(yλ|x) = N (yλ ; f(x; θ), σ2 I) . (17) Notice that the transformation described in (16) preserves the unimodality of the conditional distribution (see caption of Figures 4a and 4b in the appendix), and, when used in combination with the objective function defined in Equation (15), leads to the minimization w.r.t. θ of H(θ, λ) := E(x,yλ) ‖(1− λ) (ys − f(x; θ)) + λ (yt − f(x; θ)) ‖ 2 . (18) See Figure 6a in the appendix for a graphical representation of this homotopy deformation when applied to gradually transform a one-dimensional sine wave function with a frequency of 1 radian into a one-dimensional sine wave function with a frequency of 137 radians. 
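For concreteness, the regression deformation of Equations (16)–(18) can be sketched as a training loop over a schedule of λ values. The toy data follow the sine-wave tasks used in the experiments of Section 6, while the (smaller) network, the optimizer settings, and the schedule are illustrative choices of ours; note that the source and target outputs are generated from the same inputs x, as Equation (16) requires.

```python
import torch
import torch.nn as nn

# Shared inputs x for the source and target tasks (Eq. 16 requires the same x support).
g = torch.Generator().manual_seed(0)
x = torch.rand(2000, 1, generator=g)

def sine_targets(omega):
    return torch.sin(omega * x) + 0.1 * torch.randn(x.shape, generator=g)

y_source = sine_targets(1.0)      # easy source task, omega = 1 rad
y_target = sine_targets(137.0)    # hard target task, omega = 137 rad

net = nn.Sequential(nn.Linear(1, 100), nn.ReLU(),
                    nn.Linear(100, 100), nn.ReLU(),
                    nn.Linear(100, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

gamma, k = 10, 200                 # number of homotopy stages and inner epochs
for i in range(gamma + 1):
    lam = i / gamma
    y_lam = (1 - lam) * y_source + lam * y_target     # Eq. (16)
    for _ in range(k):
        opt.zero_grad()
        loss = ((y_lam - net(x)) ** 2).mean()         # H(theta, lambda), cf. Eq. (18)
        loss.backward()
        opt.step()
    print(f"lambda={lam:.1f}  mse={loss.item():.4f}")
```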
A downside of this homotopy deformation is that the same support for x is required (the absence of the subscripts s and t on x stands to indicate that the same realization for xs and xt has to be considered). Alternatively, it is possible to approximate (16) by using a Gaussian filter (see Figure 6b and Section B in the appendix). 5.2.2 SUPERVISED CLASSIFICATION In the case of supervised classification, by modeling the density of the conditional learned distribution as p(y|x; θ) = Multinoulli(y; f(x; θ)), and using the approximate KL divergence objective function described in (15), we recover the cross-entropy loss function, (Goodfellow et al., 2016). A possible homotopy deformation for the classification case consists in applying the following transformations xλ = (1− λ)xs + λxt , (19) yλ|xλ = (1− λ) ys|xs + λ yt|xt , (20) which corresponds to the use of probabilistic labels. See Figure 8 in the appendix for a graphical representation of the proposed homotopy deformation. The corresponding label vector for the deformed image represented in Figure 8b is y0.5 = [0, 0, 0.5, 0, 0, 0.5, 0, 0, 0, 0], given that λ = 0.5 and that the sampled realizations of xs and xt, represented in Figures 8a and 8c, belong to class 2 and 5, respectively. 6 EXPERIMENTAL EVALUATION In this section, we present some experimental evaluations of homotopy methods when applied to solve supervised regression and classification tasks. As homotopy functions we adopt the ones discussed in Section 5.2. We empirically show that homotopy methods outperform random and warm-start initialization schemes in terms of numerical performance. In particular, when the target task is complex and/or, in the transfer-learning scenario, when the data distributions are significantly different, continuation methods can achieve significant speed-up compared to random and warm-start initializations. We believe that their superior numerical performance relies on the use of homotopy functions that progressively deform the data distribution from an easy-to-solve or alreadysolved task to the target data distribution. In addition, consistently across all the benchmarks, our homotopy-based method shows faster convergence than random-initialization and faster or comparable convergence than warm-start initialization. When the source task is “similar” to the target one, there is indeed no need to gradually vary the λ parameter in Algorithm 1, but it suffices to directly set it to 1. In this extreme case, our homotopy method boils down to warm-start initialization. 6.1 REGRESSION For the supervised regression scenario, the problem we address is how to transfer “optimality knowledge” across two tasks that involve regressing from the input to the output of two sine wave functions with different values of phase ω. Each considered dataset has 10000 samples split across training and testing, where x and y are defined as follows x ∼ U(0, 1) , y = sin(ωx) + ε , ε ∼ N (0, 0.01) . (21) The goal is to start with an “easy-to-learn” task, i.e. ω ≈ 1 rad, whose optimum is available by performing only few epochs with a first-order optimizer, e.g. SGD, Adam, and progressively transfer the “optimality knowledge” to a more complex task, i.e. ω >> 1 rad, by approximately solving the homotopy problems for increasing values of λ as described in Algorithm 1. We set ω = 1 rad for our source task distribution, and study the performance of the proposed approach with homotopy function as described in Equation (16) for different target distributions with ω >> 1 rad. 
See Figures 5a and 5b in the appendix for a visualization of the source data distribution with ω = 1 rad and of the target data distribution with ω = 137 rad, respectively. The regressor is a feedforward neural network with 6 hidden layers of 100 units each and ReLU activations. In order to make the experiments more robust with respect to the choice of the step size α, we use Adam as optimizer. For the experiments in Figures 1a–1b, Figures 7a–7b in the appendix, and Figure 2a, we set α = 0.001, γ = 10, k = 200 and then performed an additional 500 epochs on the final target problem; for the experiments in Figure 2b, we set γ = 10, k = 300 and performed an additional 600 epochs on the final target problem. In this last scenario we set α = 0.001 and then decrease it with a cosine-annealing schedule to observe convergence to an optimum. As shown in Figures 1a–1b, Figures 7a–7b in the appendix, and Figures 2a and 2b, the homotopy method leads to faster convergence than the considered baselines by preserving the vicinity to an optimal solution of the problems H(θ, λ) across the different λ values. In particular, we achieve a training loss up to two orders of magnitude better than the considered baselines.

6.2 CLASSIFICATION

For the supervised classification scenario, we first apply the continuation method with the homotopy deformation described in Equations (19) and (20) in order to transfer optimality from the MNIST task, a notoriously "easy-to-learn" task for neural networks, to the FashionMNIST task. Since the two datasets have the same input dimensionality and the same number of classes, no additional preprocessing of the data is required. As network architecture, we use a VGG-type network (Simonyan & Zisserman, 2015) and Adam as optimizer with a step size of α = 0.001. Secondly, we consider CIFAR-10 as target data distribution. Differently from the previous scenario, padding of the MNIST samples is required in order to apply Equation (19); the MNIST samples are also replicated across three channels. Also in this case we adopt a VGG-type network (Simonyan & Zisserman, 2015) and Adam as optimizer with a step size of α = 0.0001. As shown in Figures 3a and 3b, in both benchmarks the homotopy method leads to faster convergence than random initialization. While in the second benchmark our method reaches a lower value of training loss in fewer epochs than warm-start, in the MNIST-to-FashionMNIST case the performance is comparable to using warm-start initialization. A possible interpretation is that, when the source and target task distributions are "too similar", as we hypothesize in the MNIST-to-FashionMNIST scenario, there is no need to apply intermediate homotopy deformations, i.e. 0 < λ < 1; we can directly set λ = 1 in our scheme, which corresponds to simply using warm-start initialization.
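The following sketch illustrates the input and label blending of Equations (19)–(20) for the MNIST-to-CIFAR-10 setting described above, including the padding to 32×32 and the replication across three channels. The tensor shapes and the soft-label cross-entropy are illustrative assumptions rather than the authors' exact preprocessing code.

```python
import torch
import torch.nn.functional as F

def blend_classification_batch(x_mnist, y_mnist, x_cifar, y_cifar, lam, num_classes=10):
    """Homotopy deformation of Equations (19)-(20) for image classification.

    x_mnist : (B, 1, 28, 28) source images;  x_cifar : (B, 3, 32, 32) target images
    y_mnist, y_cifar : (B,) integer class labels;  lam : homotopy parameter in [0, 1]
    """
    # Match the target input space: pad 28x28 -> 32x32 and replicate the channel.
    x_s = F.pad(x_mnist, (2, 2, 2, 2)).repeat(1, 3, 1, 1)
    x_lam = (1.0 - lam) * x_s + lam * x_cifar                       # Equation (19)
    y_lam = (1.0 - lam) * F.one_hot(y_mnist, num_classes).float() \
            + lam * F.one_hot(y_cifar, num_classes).float()         # Equation (20)
    return x_lam, y_lam

def soft_cross_entropy(logits, y_lam):
    # Cross-entropy against the probabilistic labels y_lam produced by Equation (20).
    return -(y_lam * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
```

In the MNIST-to-FashionMNIST case no padding or channel replication is needed, so the blend reduces to a direct pixel- and label-wise interpolation.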
7 CONCLUSIONS

In this paper we propose a new methodology based on homotopy methods for transferring knowledge across different task distributions. In particular, our homotopy-based method allows one to exploit easy-to-solve or already-solved learning problems to solve new and complex tasks, by approximately and sequentially solving a sequence of optimization problems in which the task distribution is gradually deformed from the source to the target one. We conduct a theoretical analysis of a general homotopy method in a simplified setting, and then test our method on some popular deep learning benchmarks, where it shows superior numerical performance compared to random and warm-start initialization schemes. The proposed framework, in its limiting case, corresponds to the widely used fine-tuning heuristic, allowing for a new and more rigorous interpretation of the latter. Finally, the generality of homotopy methods also opens many novel and promising research directions in fields that are fundamental for deep learning, such as stochastic non-convex optimization and transfer learning.

ACKNOWLEDGMENTS

This work has partly been supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant no. 716721, as well as by the German Federal Ministry for Economic Affairs and Energy (BMWi) via DyConPV (0324166B), and by DFG via Research Unit FOR 2401. In addition, Q. Tran-Dinh has partly been supported by the National Science Foundation (NSF), grant no. 1619884. The authors thank Stefan Falkner for his helpful suggestions and comments.

A PROPERTIES OF HOMOTOPIC FUNCTIONS

Among the numerous properties of homotopic functions, we recall the following ones.

Proposition A.1. Suppose that there exists a homotopy H : Z × [0, 1] → Y from g to f, i.e. g ≃ f. Then
• g ≃ g (reflexive property)
• g ≃ f ⟹ f ≃ g (symmetric property)
• g ≃ f and f ≃ h ⟹ g ≃ h (transitive property)

Proof. See the proof of Theorem 1.5 in (Suciu, 2016).

Proposition A.2. Let g, g′ : Z → Y and f, f′ : Y → W be continuous maps, and let f ∘ g, f′ ∘ g′ : Z → W be the respective composite maps. If g ≃ g′ and f ≃ f′, then f ∘ g ≃ f′ ∘ g′.

Proof. See the proof of Proposition 1.7 in (Suciu, 2016).

B APPROXIMATION VIA GAUSSIAN FILTER

For the supervised regression scenario, we propose the following homotopy deformation:

$$y_\lambda \mid x = (1 - \lambda)\, y_s \mid x + \lambda\, y_t \mid x \,. \tag{22}$$

A downside of this homotopy function is that the same support for x is required (the absence of the subscripts s and t on x indicates that the same realization must be used for both x_s and x_t). Alternatively, it is possible to approximate Equation (22) using a Gaussian filter, as depicted in Figure 6b. In particular, having sampled one realization z of the pair (x_s, y_s) from the training set D_s, 0 < M_GF ≤ N realizations of the pair (x_t, y_t) are sampled from D_t. Each realization y_{t,j} is then weighted based on the vicinity of x_{t,j} to the sampled realization x_{s,z}. This leads to the following approximation of the realization z of y_λ:

$$y_{\lambda, z} = (1 - \lambda)\, y_{s, z} + \frac{\lambda}{M_{GF}} \sum_{j=1}^{M_{GF}} w_j\, y_{t, j} \,, \tag{23}$$

$$w_j = \frac{1}{\sqrt{2 \pi \xi^2}} \exp\!\left( - \frac{\| x_{s, z} - x_{t, j} \|^2}{2 \xi^2} \right) , \tag{24}$$

where ξ > 0 is the standard deviation of the Gaussian filter.
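As a concrete reading of Equations (23)–(24), here is a small NumPy sketch of the Gaussian-filter approximation. The unnormalized kernel weights follow (24) as written; the sample sizes, array shapes, and bandwidth are illustrative assumptions.

```python
import numpy as np

def gaussian_filter_blend(x_s_z, y_s_z, x_t, y_t, lam, xi, m_gf):
    """Approximate y_lambda for one source realization (x_s_z, y_s_z), Eqs. (23)-(24).

    x_s_z : (d_x,) one source input;          y_s_z : (d_y,) its label
    x_t   : (N, d_x) target-task inputs;      y_t   : (N, d_y) target-task labels
    lam   : homotopy parameter in [0, 1];     xi    : Gaussian-filter std deviation
    m_gf  : number of target samples drawn for the approximation (0 < m_gf <= N)
    """
    idx = np.random.choice(len(x_t), size=m_gf, replace=False)
    x_j, y_j = x_t[idx], y_t[idx]
    # Gaussian weights of Equation (24), centered at the source input x_s_z.
    sq_dist = np.sum((x_j - x_s_z) ** 2, axis=-1)
    w = np.exp(-sq_dist / (2.0 * xi ** 2)) / np.sqrt(2.0 * np.pi * xi ** 2)
    # Blended label of Equation (23).
    return (1.0 - lam) * y_s_z + (lam / m_gf) * np.sum(w[:, None] * y_j, axis=0)
```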
C ADDITIONAL FIGURES

D LOCAL ERROR BOUNDS FOR SGD ITERATES

Before proving local error bounds for SGD iterates in the considered framework, given the local nature of our assumptions, we need to demonstrate two important facts on which the proof relies. In particular, we need to show:
• local linear contraction of Gradient Descent (GD) iterates, and
• that, starting in a hypersphere of radius B around a minimizer and given a "big enough" batch size, the next SGD iterate is also contained in this region for all possible realizations of the gradient estimate.

Considering problem (4) with fixed parameter λ_i, in the following subsections we will write θ* = θ*_i, θ_k = θ_{i,k} and g_k = g(θ_k, λ_i), dropping the subscript i and the explicit dependence on λ_i in order to simplify the notation. The analysis holds for every fixed parameter λ_i.

D.1 LOCAL LINEAR CONTRACTION OF GD ITERATES

Let us use GD to solve the optimization problem
$$\theta^* \in \arg\min_{\theta} H(\theta, \lambda_i)\,,$$
where the objective function H fulfills Assumptions 4.2 and 4.3. We now derive error bounds on the iterates of GD,
$$\theta_{k+1} = \theta_k - \alpha \nabla_\theta H(\theta_k, \lambda_i)\,,$$
where θ_k ∈ B_{B,θ*} and 0 < α ≤ 1/L is the step size. We start by applying the definition of the GD iterates and then exploit the introduced assumptions:
$$\|\theta_{k+1} - \theta^*\|^2 = \|\theta_k - \alpha \nabla_\theta H(\theta_k, \lambda_i) - \theta^*\|^2 = \|\theta_k - \theta^*\|^2 - 2\alpha\, \nabla_\theta H(\theta_k, \lambda_i)^\top (\theta_k - \theta^*) + \alpha^2 \|\nabla_\theta H(\theta_k, \lambda_i)\|^2 .$$
By strong convexity (Assumption 4.3),
$$\|\theta_{k+1} - \theta^*\|^2 \le (1 - \alpha\mu)\|\theta_k - \theta^*\|^2 - 2\alpha\big(H(\theta_k, \lambda_i) - H(\theta^*, \lambda_i)\big) + \alpha^2 \|\nabla_\theta H(\theta_k, \lambda_i)\|^2 ,$$
and, by Corollary 4.2.1,
$$\|\theta_{k+1} - \theta^*\|^2 \le (1 - \alpha\mu)\|\theta_k - \theta^*\|^2 - 2\alpha(1 - \alpha L)\big(H(\theta_k, \lambda_i) - H(\theta^*, \lambda_i)\big) .$$
Since H(θ_k, λ_i) − H(θ*, λ_i) ≥ 0 and −2α(1 − αL) ≤ 0 when 0 < α ≤ 1/L, we can safely drop the second term and obtain the final result
$$\|\theta_{k+1} - \theta^*\|^2 \le (1 - \alpha\mu)\,\|\theta_k - \theta^*\|^2 .$$
See also Theorem 2.3 in (Gower, 2018) for a derivation in which Assumptions 4.2 and 4.3 are required to hold globally.

D.2 REALIZATION OF THE SGD ITERATES IN THE STRONG CONVEXITY AND L-SMOOTHNESS REGION AROUND A MINIMIZER

We address the optimization problem
$$\theta^* \in \arg\min_{\theta} \frac{1}{N} \sum_{j=1}^{N} \ell_j(\theta, \lambda_i) =: H(\theta, \lambda_i)\,,$$
where H fulfills Assumptions 4.2–4.4. As proved in Section D.1, under Assumptions 4.2 and 4.3, whenever θ_0 ∈ B_{B,θ*} and 0 < α ≤ 1/L, deterministic gradient descent iterates converge linearly with contraction rate κ_d := √(1 − αµ). In particular, the following inequality holds:
$$\|\theta^D_{k+1} - \theta^*\| \le \kappa_d \,\|\theta_k - \theta^*\|$$
for any θ_k such that ‖θ_k − θ*‖ ≤ B, where the superscript D denotes iterates obtained by applying the full gradient ∇H_k := ∇H(θ_k, λ_i),
$$\theta^D_{k+1} = \theta_k - \alpha \nabla H_k \,.$$
Let θ_{k+1} denote the iterate obtained by applying one iteration of stochastic gradient descent,
$$\theta_{k+1} = \theta_k - \alpha g_k \,,$$
where g_k := (1/M) Σ_{j∈M} ∇ℓ_j(θ_k, λ_i) and M is a set of 0 < M ≤ N indexes randomly sampled from N = {1, . . . , N}. Given any realization of θ_k such that ‖θ_k − θ*‖ ≤ B and any realization of g_k, by exploiting Assumption 4.4 and the results derived in Section D.1, we have
$$\begin{aligned}
\|\theta_{k+1} - \theta^*\| &= \|\theta_k - \alpha g_k - \theta^*\| = \|\theta_k - \alpha \nabla H_k + \alpha \nabla H_k - \alpha g_k - \theta^*\| \\
&\le \|\theta_k - \alpha \nabla H_k - \theta^*\| + \alpha\,\|\nabla H_k - g_k\| \\
&= \|\theta_k - \alpha \nabla H_k - \theta^*\| + \alpha\,\Big\| \frac{1}{N} \sum_{j \in \mathcal{N}\setminus\mathcal{M}} \nabla \ell_j + \frac{1}{N} \sum_{j \in \mathcal{M}} \nabla \ell_j - \frac{1}{M} \sum_{j \in \mathcal{M}} \nabla \ell_j \Big\| \\
&= \|\theta_k - \alpha \nabla H_k - \theta^*\| + \alpha\,\Big\| \frac{1}{N} \sum_{j \in \mathcal{N}\setminus\mathcal{M}} \nabla \ell_j + \frac{M - N}{N M} \sum_{j \in \mathcal{M}} \nabla \ell_j \Big\| \\
&\le \|\theta_k - \alpha \nabla H_k - \theta^*\| + \alpha\,\Big( \frac{1}{N} \sum_{j \in \mathcal{N}\setminus\mathcal{M}} \|\nabla \ell_j\| + \frac{N - M}{N M} \sum_{j \in \mathcal{M}} \|\nabla \ell_j\| \Big) \\
&\le \|\theta^D_{k+1} - \theta^*\| + 2\alpha\,\frac{N - M}{N}\,\nu \;\le\; \kappa_d\,\|\theta_k - \theta^*\| + 2\alpha\,\frac{N - M}{N}\,\nu \,.
\end{aligned}\tag{25}$$
Since we have assumed that the current realization of θ_k lies in the hypersphere of radius B around the optimal solution θ*, by solving for (N − M)/N the inequality
$$\kappa_d B + 2\alpha\,\frac{N - M}{N}\,\nu \le B \,,$$
we obtain that, whenever (N − M)/N ≤ (1 − κ_d) B / (2αν), the realization of θ_{k+1} will also lie in this region. These derivations show that, when the realization of the current iterate θ_k lies in the hypersphere of radius B around the minimizer θ* and (N − M)/N ≤ (1 − κ_d) B / (2αν), the next iterate θ_{k+1} will also lie in this region. Consequently, in our scenario, if we assume that the initial point θ_0 lies in the hypersphere of radius B around the minimizer θ*, then, by applying these derivations recursively, the iterates remain in this local region around the minimizer where strong convexity and smoothness hold.
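As a quick numerical sanity check of the contraction rate derived in Section D.1, the sketch below runs gradient descent on a strongly convex quadratic (where µ and L are the extreme eigenvalues of the Hessian) and verifies that ‖θ_{k+1} − θ*‖² ≤ (1 − αµ)‖θ_k − θ*‖² holds along the iterates. The specific quadratic and constants are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

# Quadratic H(theta) = 0.5 * theta^T A theta with mu = min eig(A) and L = max eig(A).
A = np.diag([0.5, 2.0, 10.0])          # mu = 0.5, L = 10.0
mu, L = 0.5, 10.0
alpha = 1.0 / L                        # step size satisfying 0 < alpha <= 1/L
theta_star = np.zeros(3)               # minimizer of the quadratic

theta = np.array([1.0, -2.0, 0.5])     # start inside a ball around theta_star
rate = 1.0 - alpha * mu                # squared-distance contraction factor (1 - alpha*mu)

for k in range(20):
    theta_next = theta - alpha * (A @ theta)             # GD step on grad H = A theta
    lhs = np.linalg.norm(theta_next - theta_star) ** 2
    rhs = rate * np.linalg.norm(theta - theta_star) ** 2
    assert lhs <= rhs + 1e-12, "contraction bound violated"
    theta = theta_next
print("local linear contraction verified for 20 GD steps")
```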
D.3 PROOF OF PROPOSITION 4.9

Let us use SGD to solve the optimization problem
$$\theta^* \in \arg\min_{\theta} H(\theta, \lambda_i)\,,$$
where the objective function H fulfills Assumptions 4.2–4.4. We now derive error bounds for the iterates of SGD,
$$\theta_{k+1} = \theta_k - \alpha g_k \,,$$
where g_k is the unbiased estimate of ∇H_k defined in the previous section and fulfills Assumption 4.5, θ_k ∈ B_{B,θ*}, 0 < α ≤ min(1/(2µ), 1/L) is the step size, and the batch size is set to a value M such that (N − M)/N ≤ (1 − κ_d) B / (2αν). We start by applying the definition of the SGD iterates:
$$\|\theta_{k+1} - \theta^*\|^2 = \|\theta_k - \alpha g_k - \theta^*\|^2 = \|\theta_k - \theta^*\|^2 - 2\alpha\, g_k^\top (\theta_k - \theta^*) + \alpha^2 \|g_k\|^2 .$$
We now take the expectation w.r.t. θ_0, g_0, . . . , g_{k−1}, g_k and, considering Assumptions 4.2–4.5, obtain the following series of inequalities:
$$\begin{aligned}
\mathbb{E}_{\theta_0, g_0, \ldots, g_k}\big[\|\theta_{k+1} - \theta^*\|^2\big] &= \mathbb{E}_{\theta_0, g_0, \ldots, g_k}\big[\|\theta_k - \theta^*\|^2 - 2\alpha\, g_k^\top (\theta_k - \theta^*) + \alpha^2 \|g_k\|^2\big] \\
&= \mathbb{E}_{\theta_0, g_0, \ldots, g_{k-1}}\Big[ \mathbb{E}_{g_k}\big[ \|\theta_k - \theta^*\|^2 - 2\alpha\, g_k^\top (\theta_k - \theta^*) + \alpha^2 \|g_k\|^2 \,\big|\, \theta_0, g_0, \ldots, g_{k-1} \big] \Big] && \text{(law of iterated expectations)} \\
&\le \mathbb{E}_{\theta_0, g_0, \ldots, g_{k-1}}\big[ \|\theta_k - \theta^*\|^2 - 2\alpha\, \nabla H_k^\top (\theta_k - \theta^*) \big] + \alpha^2 C^2 && \text{(unbiased } g_k \text{ and bounded "variance")} \\
&\le (1 - 2\alpha\mu)\, \mathbb{E}_{\theta_0, g_0, \ldots, g_{k-1}}\big[ \|\theta_k - \theta^*\|^2 \big] + \alpha^2 C^2 && \text{(strong convexity)} .
\end{aligned}$$
By applying this result recursively, we obtain the following bound on the error of the SGD iterates:
$$\mathbb{E}_{\theta_0, g_0, \ldots, g_k}\big[\|\theta_{k+1} - \theta^*\|^2\big] \le (1 - 2\alpha\mu)^{k+1}\, \mathbb{E}_{\theta_0}\big[\|\theta_0 - \theta^*\|^2\big] + \frac{\alpha C^2}{2\mu} \,.$$
See also Section 3 in (Schmidt, 2014) for a derivation in which Assumptions 4.2 and 4.3 are required to hold globally.

E MAIN THEORETICAL CONTRIBUTIONS

E.1 PROOF OF PROPOSITION 4.11

Proposition E.1. Let θ_i ∈ B_{B,θ*_i} and |λ_i − λ_{i+1}| ≤ ε, with 0 ≤ ε ≤ B/δ. If ‖θ_i − θ*_i‖ ≤ B − δε, then ‖θ_i − θ*_{i+1}‖ ≤ B. Moreover, let κ_d = √(1 − αµ) and assume that
$$\frac{N - M}{N} \le \frac{(1 - \kappa_d^k)(1 - \kappa_d)\, B}{2 \alpha \nu} \qquad \text{and} \qquad \varepsilon \le \frac{1}{\delta}\left( (1 - \kappa_d^k)\, B - \frac{N - M}{N}\,\frac{2 \alpha \nu}{1 - \kappa_d} \right) .$$
Then, after applying k iterations of SGD, we obtain that ‖θ_{i+1} − θ*_{i+1}‖ ≤ B − δε.

Proof.
$$\|\theta_i - \theta^*_{i+1}\| = \|\theta_i - \theta^*_i + \theta^*_i - \theta^*_{i+1}\| \le \|\theta_i - \theta^*_i\| + \|\theta^*_i - \theta^*_{i+1}\| \le \|\theta_i - \theta^*_i\| + \delta\,|\lambda_i - \lambda_{i+1}| \,,$$
where the first inequality is the triangle inequality and the second follows from Assumption 4.7. Using the fact that |λ_i − λ_{i+1}| ≤ ε, it follows that, if ‖θ_i − θ*_i‖ ≤ B − δε with 0 ≤ ε ≤ B/δ, then ‖θ_i − θ*_{i+1}‖ ≤ B. We now derive the conditions on ε such that ‖θ_{i+1} − θ*_{i+1}‖ ≤ B − δε. By applying recursively the result (25) derived in Section D.2, we obtain
$$\|\theta_{i+1} - \theta^*_{i+1}\| \le \kappa_d^k\, \|\theta_i - \theta^*_{i+1}\| + 2\alpha\,\frac{N - M}{N}\,\nu \sum_{j=0}^{k-1} \kappa_d^{\,j} \,,$$
and, by using the limit of the geometric series,
$$\|\theta_{i+1} - \theta^*_{i+1}\| \le \kappa_d^k\, \|\theta_i - \theta^*_{i+1}\| + \frac{N - M}{N}\,\frac{2 \alpha \nu}{1 - \kappa_d} \,.$$
Finally, by considering that ‖θ_i − θ*_{i+1}‖ ≤ B and by solving in ε the inequality
$$\kappa_d^k\, B + \frac{N - M}{N}\,\frac{2 \alpha \nu}{1 - \kappa_d} \le B - \delta \varepsilon \,,$$
we obtain the upper bound
$$\varepsilon \le \frac{1}{\delta}\left( (1 - \kappa_d^k)\, B - \frac{N - M}{N}\,\frac{2 \alpha \nu}{1 - \kappa_d} \right) ,$$
from which the extra condition on the batch size also follows:
$$\frac{N - M}{N} \le \frac{(1 - \kappa_d^k)(1 - \kappa_d)\, B}{2 \alpha \nu} \,.$$

Figure 9: Graphical representation of the results derived in Proposition 4.11. The continuous and dashed lines represent the circles of radius B and B − δε around the optimal solutions, respectively.

E.2 PROOF OF THEOREM 4.12

Theorem E.2. Consider Algorithm 1 with Stochastic Gradient Descent as solver, and let k > 0 be the number of iterations, 0 < α ≤ min(1/(2µ), 1/L) be the step size, and 0 < M ≤ N be the batch size, such that
$$\frac{N - M}{N} \le \frac{(1 - \kappa_d^k)(1 - \kappa_d)\, B}{2 \alpha \nu} \,,$$
where κ_d = √(1 − αµ). Let θ_0 ∈ B_{B−δε,θ*_0} and r_θ ∈ ℝ be such that
$$r_\theta^2 \ge \frac{\alpha C^2}{2\mu} \,. \tag{26}$$
Then, if E[‖θ_i − θ*_i‖²] ≤ r_θ² and |λ_i − λ_{i+1}| ≤ ε̃, where ε̃ := min{ε̄, ε} with
$$\bar{\varepsilon} = -\frac{r_\theta}{\delta} + \frac{1}{\delta}\sqrt{\frac{r_\theta^2 - \alpha C^2 / (2\mu)}{(1 - 2\alpha\mu)^k}} \,, \tag{27}$$
the following inequality holds:
$$\mathbb{E}\big[\|\theta_{i+1} - \theta^*_{i+1}\|^2\big] \le r_\theta^2 \,. \tag{28}$$

Proof.
$$\begin{aligned}
\mathbb{E}\big[\|\theta_{i+1} - \theta^*_{i+1}\|^2\big] &\le (1 - 2\alpha\mu)^k\, \mathbb{E}\big[\|\theta_i - \theta^*_{i+1}\|^2\big] + \frac{\alpha C^2}{2\mu} && \text{(Inequality (10))} \\
&= (1 - 2\alpha\mu)^k\, \mathbb{E}\big[\|\theta_i - \theta^*_i + \theta^*_i - \theta^*_{i+1}\|^2\big] + \frac{\alpha C^2}{2\mu} \\
&\le (1 - 2\alpha\mu)^k\, \mathbb{E}\Big[\big(\|\theta_i - \theta^*_i\| + \|\theta^*_i - \theta^*_{i+1}\|\big)^2\Big] + \frac{\alpha C^2}{2\mu} && \text{(triangle inequality)} \\
&= (1 - 2\alpha\mu)^k\, \mathbb{E}\Big[\|\theta_i - \theta^*_i\|^2 + \|\theta^*_i - \theta^*_{i+1}\|^2 + 2\,\|\theta_i - \theta^*_i\|\,\|\theta^*_i - \theta^*_{i+1}\|\Big] + \frac{\alpha C^2}{2\mu} \\
&\le (1 - 2\alpha\mu)^k\, \mathbb{E}\Big[\|\theta_i - \theta^*_i\|^2 + \delta^2 |\lambda_i - \lambda_{i+1}|^2 + 2\delta\,\|\theta_i - \theta^*_i\|\,|\lambda_i - \lambda_{i+1}|\Big] + \frac{\alpha C^2}{2\mu} && \text{(Assumption 4.7)} \\
&\le (1 - 2\alpha\mu)^k \left( \delta^2 \tilde{\varepsilon}^2 + 2\delta\, r_\theta\, \tilde{\varepsilon} + r_\theta^2 \right) + \frac{\alpha C^2}{2\mu} \,.
\end{aligned}$$
We now solve in ε̃ the second-degree inequality
$$(1 - 2\alpha\mu)^k \left( \delta^2 \tilde{\varepsilon}^2 + 2\delta\, r_\theta\, \tilde{\varepsilon} + r_\theta^2 \right) + \frac{\alpha C^2}{2\mu} \le r_\theta^2 \,. \tag{29}$$
Inequality (29) admits solutions if and only if r_θ² ≥ αC²/(2µ). In particular, inequality (29) holds for all ε̃ ∈ [0, ε̄], where
$$\bar{\varepsilon} = -\frac{r_\theta}{\delta} + \frac{1}{\delta}\sqrt{\frac{r_\theta^2 - \alpha C^2 / (2\mu)}{(1 - 2\alpha\mu)^k}} \,.$$

F EXPERIMENTAL EVALUATION: TEST PERFORMANCES

F.1 REGRESSION

F.2 CLASSIFICATION
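To make the admissible homotopy step of Theorem 4.12 (proved in Appendix E.2 above) tangible, the following short computation evaluates ε̄ from Equation (27) and the batch-size condition for a set of purely illustrative constants; µ, L, B, δ, ν, C, r_θ and k here are assumptions for this example, not quantities measured in the paper's experiments.

```python
import math

# Illustrative constants (assumptions for this example only).
mu, L = 0.5, 10.0          # local strong-convexity / smoothness constants
B, delta = 1.0, 5.0        # radius of the local region and strong-regularity constant
nu, C = 2.0, 1.0           # gradient bound (Assumption 4.4) and variance bound (4.5)
r_theta = 0.5              # tracking radius; must satisfy r_theta^2 >= alpha*C^2/(2*mu)
alpha = min(1.0 / (2.0 * mu), 1.0 / L)   # step size of Theorem 4.12
k = 20                                   # SGD iterations per homotopy problem

kappa_d = math.sqrt(1.0 - alpha * mu)
# Batch-size condition: (N - M)/N must not exceed this fraction.
max_minibatch_gap = (1.0 - kappa_d ** k) * (1.0 - kappa_d) * B / (2.0 * alpha * nu)
# Maximum homotopy increment eps_bar from Equation (27).
eps_bar = (-r_theta + math.sqrt((r_theta ** 2 - alpha * C ** 2 / (2.0 * mu))
                                / (1.0 - 2.0 * alpha * mu) ** k)) / delta

print(f"alpha = {alpha:.3f}, kappa_d = {kappa_d:.4f}")
print(f"(N - M)/N must be <= {max_minibatch_gap:.4f}")
print(f"maximum homotopy step eps_bar = {eps_bar:.4f}")
```

Increasing k (more SGD iterations per homotopy problem) makes both bounds less restrictive, which matches the remark after Theorem 4.12 that more work per subproblem allows larger changes of the homotopy parameter.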
1. What is the main contribution of the paper in the field of deep learning? 2. What are the weaknesses of the paper regarding its presentation and focus? 3. How can the authors improve the accessibility of their work? 4. Are there any specific areas where the authors could expand or provide more detail? 5. Does the reviewer believe the paper meets the standards for publication in a top-tier machine learning conference like ICLR?
Review
Review The authors propose a very general framework of homotopy for the deep learning setup and explore a few relevant theoretical issues. Though the proposed idea is interesting, the depth and breadth of the authors' presentation are simply lacking. The entire paper lacks focus, and I suggest the authors consider focusing on 1-2 well-thought-out ideas. There are many sentences 3-4 lines long that are hard to decipher. Please also consider making the presentation more accessible. Overall, this paper does not meet the bar for ICLR.
ICLR
1. What is the main contribution of the paper regarding transfer learning? 2. What is the proposed approach for transfer learning, and how does it differ from traditional fine-tuning methods? 3. How does the paper motivate and introduce the new method, and what is the reviewer's opinion of this aspect? 4. What kind of tasks does the paper use to test its hypothesis, and what are the results? 5. Does the reviewer think the paper provides enough evidence to support its claims, or should more thorough evaluations be conducted? 6. Are there any suggestions for additional citations that could enhance the paper's context and relevance?
Review
Review
Based on homotopy methods, the paper describes a more rigorous approach to transfer learning than the so-called "fine-tuning" heuristic. Progress in the direction of more principled approaches for transfer learning would be tremendously impactful, since one of the core promises of deep learning is the learning of features which can be reused in different downstream tasks. Essentially (if this reviewer understood correctly), the idea behind this paper is to interpolate between the original task of interest and a potentially easier-to-optimize surrogate task. Overall, this reviewer found the concept simple, elegant, well motivated, and well introduced. However, since this reviewer does not have a formal background in mathematics, they cannot assess the soundness of the proofs. The paper tests the hypothesis on a simple function-approximation regression task and on classification tasks that transfer from MNIST to Fashion-MNIST and from MNIST to CIFAR-10, with promising results. One might argue that a more thorough evaluation would have been desirable: the claims made by the paper are quite general, and it would have been in the authors' best interest to present more thorough evidence that their concept works on a wider range of problems, ideally including an NLP task, given the current hype around pre-training with Transformer-based models. Previous work & citations: I would recommend citing Schmidhuber 1987 (Evolutionary principles in self-referential learning) and Hochreiter et al. 2001 (Learning to Learn with gradient descent) in the context of meta-learning. It would be nice to cite Klambauer et al. (Self-Normalizing Networks) in the context of speeding up deep neural network training. The VGG paper is currently cited by the authors' first names rather than their last names; I am not sure whether this was intended.
ICLR
Title Transferring Optimality Across Data Distributions via Homotopy Methods Abstract Homotopy methods, also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis. In this work, we propose a novel homotopy-based numerical method that can be used to gradually transfer optimized parameters of a neural network across different data distributions. This method generalizes the widely-used heuristic of pre-training parameters on one dataset and then fine-tuning them on another dataset of interest. We conduct a theoretical analysis showing that, under some assumptions, the homotopy method combined with Stochastic Gradient Descent (SGD) is guaranteed to converge in expectation to an rθ-optimal solution for a target task when started from an expected rθ-optimal solution on a source task. Empirical evaluations on a toy regression dataset and for transferring optimized parameters from MNIST to Fashion-MNIST and CIFAR-10 show substantial improvement of the numerical performance over random initialization and pre-training. 1 INTRODUCTION Homotopy methods (Allgower & Georg, 1980), also known as continuation methods, are a powerful mathematical tool to efficiently solve various problems in numerical analysis (e.g., Tran-Dinh et al. (2012), Zanelli et al. (2019)). The core idea consists in sequentially solving a series of parametric problems, starting from an easy-to-solve problem and progressively deforming it, via a homotopy function, to the target one. Homotopy methods are suitable to solve complex non-convex optimization problems where no or only little prior knowledge regarding the localization of the solutions is available. In addition, in contrast to state-of-the-art algorithms in deep learning (e.g., Bottou (2010), Duchi et al. (2011), Kingma & Ba (2015)), these methods often achieve global convergence guarantees by only exploiting local structures of the problem. Concepts, such as curriculum-learning and warm-starting, that are related to different degrees to homotopy methods, have been explored both in the deep learning (e.g., Gulcehre et al. (2016), Mobahi (2016), Gulcehre et al. (2017)) and in the reinforcement learning (e.g., Narvekar (2017)) communities. In this work, we propose a novel homotopy-based numerical method to transfer knowledge regarding the localization of a minimizer across different task distributions in deep learning. This method gradually tracks a neural network’s (close-to-)optimal parameters from one data distribution to another one via the homotopy method (Allgower & Georg, 1980) and can be interpreted as a generalization of the very common heuristic of fine-tuning a pre-trained network. After discussing related work (Section 2) and background on homotopy methods (Section 3), our contributions are as follows: 1. We provide a general theoretical analysis of the homotopy method when using SGD as an iterative solver, proving that under some local assumptions it tracks in expectation an rθ-optimal solution from the source task to the target task (Section 4). 2. We introduce homotopy functions for transferring optimality across data distributions for supervised regression and classification tasks (Section 5). 3. 
For a toy regression dataset and for transferring optimized parameters from MNIST to Fashion-MNIST and from MNIST to CIFAR-10, we show that our method obtains up to two orders of magnitude better numerical performance than random initialization and substantial improvement of the numerical performance over pre-training (Section 6). 2 RELATED WORK Deep neural networks have led to establish a new state-of-the-art in many applications. Despite their great success and the many theoretical studies that have been published in the last years (e.g., Balduzzi et al. (2017), Li et al. (2018), Feizi et al. (2018), Kunin et al. (2019)), training these deep models remains a big challenge. Various stochastic optimization algorithms (e.g., Duchi et al. (2011), Kingma & Ba (2015), Reddi et al. (2018)) and initialization heuristics (e.g., Daniely et al. (2016), Klambauer et al. (2017), Hanin & Rolnick (2018)) have been recently suggested in order to improve and speed up the training procedure. We now briefly discuss the state-of-the-art deep learning optimization techniques and initialization strategies that are most related with the proposed homotopy-based method, drawing connections with existing and ongoing research works in the field. Curriculum Learning. First introduced by Bengio et al. (2009) and then extended in different works (e.g., Graves et al. (2017), Weinshall et al. (2018), Hacohen & Weinshall (2019)), curriculum learning can also be listed among the optimization heuristics proposed to alleviate the complexity of solving high dimensional and non-convex problems. In particular, taking inspiration from the fact that humans and animals learn “better” when exposed to progressively more complex situations in an organized manner, curriculum learning techniques guide the training by starting with “easy-tolearn” samples and progressively introducing more “complex-to-learn” ones. This guided learning process can also be rephrased in a homotopy-like fashion (see Algorithm 1) as solving a sequence of optimization problems where the target training distribution gradually changes from considering only the “easy” examples to the full original training distribution. Meta-Learning and Transfer-Learning. Due to the massive amount of computational resources required by the development of modern deep learning applications, the community has started to explore the possibility of re-using learned parameters across different tasks, leading to the development of many new transfer-learning (e.g., Rohrbach et al. (2013), Wang & Schneider (2014), Cui et al. (2019)) and meta-learning (e.g., Schmidhuber (1987), Hochreiter et al. (2001), Finn et al. (2017), Zintgraf et al. (2019)) algorithms. The simplest way to transfer knowledge across different tasks consists in using warm-start initialization. This heuristic is amply used in computer vision applications, where it is also known as the fine-tuning technique (e.g., Krizhevsky et al. (2012), Yosinski et al. (2014), Reyes et al. (2015), Käding et al. (2016)). So far, there is no rigorous explanation of why and when fine-tuning works. However, numerous empirical evaluations on different benchmarks show that warm-starting the parameters of deep models often leads to faster convergence and better generalization than using random initialization. 3 BACKGROUND In this work, we will focus on solving problems of the form θ∗ ∈ arg min θ∈Rd 1 N N∑ j=1 `j(θ)︸ ︷︷ ︸ :=J(θ) , (1) where J : Rd → R is our target objective function and θ∗ is a minimizer. 
Problems as described in (1) arise, for instance, in classification and regression scenarios. In the following section we briefly review the main concepts of homotopy and continuation methods, which the proposed technique to solve problem (1) is based on. 3.1 HOMOTOPIC FUNCTIONS AND CONTINUATION METHODS FOR OPTIMIZATION Given two topological spaces Z and Y , a homotopy is a continuous deformation between two continuous functions g, f : Z → Y that fulfills certain properties. We can formalize this concept with the following definition Definition 3.1. Let g, f : Z → Y be continuous maps on the topological spaces Z, Y . A homotopy from g to f is a continuous function H : Z × [0, 1]→ Y such that H(z, 0) = g(z) , H(z, 1) = f(z) , ∀z ∈ Z . (2) If such function H exists, g is said to be homotopic of f , and this relation is denoted by g ' f . It is straightforward to show that, A ⊆ Rn being a convex set, any two continuous maps g, f : Z → A are homotopic (see (Suciu, 2016) for a derivation). From this fact it follows that any two continuous and real functions are homotopic. See Figures 4a– 4b in the appendix for a graphical representation of two different homotopy maps between the probability density functions of two Gaussian distributions, where λ ∈ [0, 1] denotes the homotopy parameter. See also Section A in the appendix for details on some of the main properties of homotopic functions. Continuation methods (also known as homotopy methods) are a widely used mathematical tool to solve complex non-convex optimization problems where no or only very limited prior knowledge regarding the localization of optimal solutions is available (see (Allgower & Georg, 1980) for a full characterization of continuation methods). The core idea of a homotopy approach consists in defining a homotopy function H(θ, λ) with λ ∈ [0, 1] such that H(θ, 0) = J0(θ) is a trivial to optimize smooth map (or a smooth map of which a surrogate θ0 of an optimal solution is available) and H(θ, 1) = J(θ) is our target objective function. Instead of directly addressing problem (1), we approximately and sequentially solve γ > 0 parametric optimization problems of the form θ∗i ∈ arg min θ∈Rd 1 N N∑ j=1 `j(θ, λi)︸ ︷︷ ︸ :=H(θ,λi) , (3) for increasing values of the parameter λi for i = 1, . . . , γ and warm-starting each problem with the previously derived approximate solution. Conceptually, Algorithm 1 describes the basic steps of a general homotopy algorithm. Under appropriate assumptions, if the increment ∆λ is sufficiently small, then the iterative procedure in Algorithm 1 will converge to a neighborhood of an optimal solution of the target objective J that depends in some sense on the number of iterations k > 0 performed (Allgower & Georg, 1980). Many different variations of Algorithm 1 exist. In particular, Algorithm 1 A Conceptual Homotopy Algorithm 1: θ0 ≈ θ∗0 ∈ arg minθH(θ, 0) 2: γ > 0 , γ ∈ Z 3: λ0 = 0, ∆λ = 1/γ 4: k > 0 , k ∈ Z 5: for i = 1, . . . , γ do 6: λi ← λi−1 + ∆λ 7: procedure θi ←ITERATIVESOLVER(θi−1, k,H(θ, λi)) 8: return θγ different update schemes for the homotopy parameter can be adopted (e.g., geometric or sublinear rate of increase), various iterative solvers can be used under distinct and specific assumptions, and, finally, also diverse levels of approximation for the solutions θ∗i can be considered, i.e. different k values. 
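To make Algorithm 1 concrete, here is a minimal Python sketch of the conceptual homotopy loop under the uniform update scheme ∆λ = 1/γ. The callables `homotopy_objective` (the map H(θ, λ)) and `iterative_solver` are placeholders for whatever homotopy function and solver (e.g., SGD or Adam) one chooses; their names and signatures are assumptions of this sketch, not part of the paper's implementation.

```python
def homotopy_solve(theta0, homotopy_objective, iterative_solver, gamma=10, k=200):
    """Sketch of the conceptual homotopy method (Algorithm 1).

    theta0             -- approximate minimizer of H(theta, 0), i.e. the source problem
    homotopy_objective -- callable (theta, lam) giving access to H(theta, lam)
    iterative_solver   -- callable (theta_init, k, objective) returning theta after k steps
    gamma              -- number of intermediate homotopy problems
    k                  -- solver iterations spent on each homotopy problem
    """
    theta = theta0
    lam = 0.0
    delta_lam = 1.0 / gamma                        # uniform update scheme for lambda
    for _ in range(gamma):
        lam += delta_lam                           # deform the problem towards the target task
        objective_i = (lambda th, lam=lam: homotopy_objective(th, lam))
        theta = iterative_solver(theta, k, objective_i)   # warm-started at the previous solution
    return theta                                   # approximate minimizer of the target H(., 1)
```

Note that setting γ = 1, i.e. jumping directly to λ = 1, reduces this loop to plain warm-start fine-tuning, the limiting case discussed later in the paper.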
Before going into the details of two concrete formulations of the conceptual homotopy method outlined in Algorithm 1 (see Section 5) when applied to transfer optimality knowledge in regression and classification scenarios, we provide a general theoretical analysis in a simplified setting. 4 THEORETICAL ANALYSIS In this section, we provide a local theoretical analysis of homotopy methods when Stochastic Gradient Descent (SGD) (Bottou, 2010) is used as iterative solver in Algorithm 1. The locality of the analysis consists in the definition of hyperspheres of radius B ≥ 0 around the optimal solutions of each homotopy problem H(θ, λi) where it is possible to exploit certain structures of the problem. In particular, we approximately and sequentially solve γ > 0 unconstrained optimization problems of the form θ∗i ∈ arg min θ∈Rd H(θ, λi) , ∀i = 1, . . . , γ , (4) where H(θ, λi) fulfills the assumptions described in Section 4.1 and λi ∈ [0, 1]. Let θi be an approximate solution of the problem associated with parameter λi derived by applying k > 0 iterations of SGD (in the limit, k = 1) and also the starting point for the problem associated with parameter λi+1, ∀i = 1, . . . , γ − 1. In addition, let θ0 denote an approximate solution for the source task, i.e. λ0 = 0, that is used as initial point for the problem associated with λ1. In this section we characterize the maximum allowed variation of the homotopy parameter in order for the method to able to track in expectation an rθ-optimal solution from source to target task. 4.1 ASSUMPTIONS We now expose the fundamental assumptions for our general local theoretical analysis on which all the derivations in Sections 4.2 and 4.3 rely. In addition, throughout the analysis the `-functions in (3) are implicitly assumed to be differentiable in θ. We start by giving the definition of the regions around the optimal solutions of the homotopy problems where the analysis is conducted. Definition 4.1. Given θ∗i and B ≥ 0, let BB,θ∗i be the following set of vectors BB,θ∗i := {θ s.t. ‖θ − θ ∗ i ‖ ≤ B} , ∀i = 0, . . . , γ . Assumption 4.2 (local L-smoothness). Assume that there exists a constant L > 0 such that ‖∇θH(θ̃, λi)−∇θH(θ̂, λi)‖ ≤ L‖θ̃ − θ̂‖ , ∀θ̃, θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ . (5) Corollary 4.2.1. If H is locally L-smooth in θ, then the following inequality holds H(θ∗i , λi)−H(θ̂, λi) ≤ − 1 2L ‖∇θH(θ̂, λi)‖2 , ∀θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ. (6) Proof. See Lemma 1.1 in (Gower, 2018) for a proof. Assumption 4.3 (local µ-strong convexity). Assume that there exists µ > 0 such that H(θ̃, λi) ≥ H(θ̂, λi) +∇θH(θ̂, λi)T (θ̃− θ̂) + µ 2 ‖θ̃− θ̂‖2 , ∀θ̃, θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ . (7) Assumption 4.4 (bounded `-derivative). Assume that there exists ν > 0 such that ‖∇θ`j(θ̂, λi)‖ ≤ ν , ∀θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ, ∀j = 1, . . . , N. (8) Assumption 4.5 (local bounded “variance”). Let g(θ̂, λi) denote an unbiased estimate of the gradient ∇θH(θ̂, λi). Assume that there exists a constant C ≥ 0 such that the following bound on the expected squared norm of the estimate of the gradient holds E [ ‖g(θ̂, λi)‖2 ] ≤ C2 , ∀θ̂ ∈ BB,θ∗i , ∀i = 0, . . . , γ. (9) Remark 4.6. Assumption 4.5 is standard for proving error bounds on SGD iterates (see (Schmidt, 2014)). In addition, notice that, since E [ ‖g(θ̂, λi)‖2 ] = Var ( ‖g(θ̂, λi)‖ ) + E [ ‖g(θ̂, λi)‖ ]2 , the C constant is proportional to the variance and the squared expected value of the norm of the gradient estimate. 
Therefore, it decreases when the iterates approach a minimizer and by reducing the noise in the estimate of the gradient. In the limit (i.e. exact gradient and convergence to a minimizer), C = 0. Recall that θ∗(λi) ≡ θ∗i . Assumption 4.7 (strong regularity). Assume that there exists δ > 0 such that the following inequality holds ‖θ∗(λi+1)− θ∗(λi)‖ ≤ δ|λi+1 − λi| , ∀i = 0, . . . , γ − 1. Remark 4.8. Assumption 4.7 follows directly from the application of the Implicit Function Theorem by introducing some milder assumptions on the problem structure (see Lemma 2.1.8 in (Allgower & Georg, 1980)). 4.2 FUNDAMENTAL THEORETICAL PRELIMINARIES Before proceeding with the main theoretical contributions, we extend the existing results in the literature on global error bounds for the iterates of Stochastic Gradient Descent such that they can be applied when the underlying assumptions are only required to hold locally. The derived local error bounds for SGD iterates are used in Proposition 4.11 and Theorem 4.12. Proposition 4.9. Let θi ∈ BB,θ∗i be the starting point for the problem described in (3), and let θi := θi,0 and θi+1 := θi,k denote the iterate after k > 0 SGD steps, where an SGD step is defined as θi,k = θi,k−1 − αg(θi,k−1, λi) . Under Assumptions 4.2– 4.5 and by setting the batch size 0 < M ≤ N to a value such that (N−M) N ≤ (1−κd) 2αν B with κd = √ (1− αµ) and the learning rate α to a constant value such that 0 < α ≤ min ( 1 2µ , 1 L ) , the following error bound on the iterates holds E [ ‖θi+1 − θ∗i+1‖2 ] ≤ (1− 2αµ)k · E [ ‖θi − θ∗i+1‖2 ] + αC2 2µ . (10) Proof. See Section D in the appendix. Remark 4.10. The expectation in (10) is taken w.r.t. all the random variables, i.e. estimates of the gradients and initial point θ0, involved in the optimization procedure up to the current i+1 iteration of the algorithm. 4.3 MAIN THEORETICAL CONTRIBUTIONS Under the considered assumptions and by exploiting the previously derived results on local error bounds for SGD iterates, we show that, if the approximate solution θi for the problem with parameter λi is “sufficiently close” to a minimizer θ∗i in expectation, i.e. E [ ‖θi − θ∗i ‖2 ] ≤ r2θ , then, for a “sufficiently small” change in the homotopy parameter, the same vicinity to a minimizer θ∗i+1 is preserved in expectation for the approximate solution θi+1 of the problem with parameter λi+1, i.e. E [ ‖θi+1 − θ∗i+1‖2 ] ≤ r2θ . In particular, with Theorem 4.12 we characterize the maximum allowed variation of the homotopy parameter based on the properties of the parametric problems and the convergence characteristics of the adopted iterative solver, i.e. rate of convergence and number of iterations. First, in order to apply the results derived in Theorem 4.12, given a realization of θi ∈ BB,θ∗i , we have to derive the conditions on ‖θi − θ∗i ‖ such that ‖θi − θ∗i+1‖ ≤ B. In addition, we derive the necessary conditions in order to apply these results recursively across the iterations of Algorithm 1. Proposition 4.11. Let θi ∈ BB,θ∗i and |λi − λi+1| ≤ , with 0 ≤ ≤ B δ . If ‖θi − θ ∗ i ‖ ≤ B − δ , then ‖θi − θ∗i+1‖ ≤ B. Moreover, let κd = √ (1− αµ) and assume that (N −M) N ≤ (1− κ k d)(1− κd)B 2αν , and ≤ 1 δ ( (1− κkd)B − (N −M) N 2αν (1− κd) ) . Then, after applying k iterations of SGD, we obtain that ‖θi+1 − θ∗i+1‖ ≤ B − δ . Proof. See Section E.1 in the appendix. 
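As a rough illustration of how the quantities in Proposition 4.11 interact, the sketch below evaluates the contraction factor κ_d, a batch size satisfying the stated condition, and the resulting upper bound on the admissible homotopy step ε. All constants (α, µ, ν, B, δ, N, k) are invented for illustration and are not taken from the paper.

```python
import numpy as np

def proposition_4_11_bounds(alpha, mu, nu, B, delta, N, k):
    """Evaluate the conditions of Proposition 4.11 for assumed problem constants."""
    kappa_d = np.sqrt(1.0 - alpha * mu)                      # local contraction rate
    # batch-size condition: (N - M)/N <= (1 - kappa_d**k) * (1 - kappa_d) * B / (2 * alpha * nu)
    max_gap = (1.0 - kappa_d**k) * (1.0 - kappa_d) * B / (2.0 * alpha * nu)
    M = int(np.ceil(N * (1.0 - max_gap)))                    # smallest admissible batch size
    # maximum allowed variation of the homotopy parameter
    eps_max = ((1.0 - kappa_d**k) * B
               - (N - M) / N * 2.0 * alpha * nu / (1.0 - kappa_d)) / delta
    return kappa_d, M, eps_max

kappa_d, M, eps_max = proposition_4_11_bounds(
    alpha=1e-3, mu=0.5, nu=10.0, B=1.0, delta=5.0, N=10_000, k=200)
print(f"kappa_d = {kappa_d:.5f}, batch size M = {M}, max homotopy step = {eps_max:.2e}")
```

With these made-up constants the bound is quite conservative, requiring nearly full-batch gradients and small homotopy steps, which reflects the worst-case nature of the local analysis.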
See Figure 9 in the appendix for a graphical representation of the results derived in Proposition 4.11, where the continuous and dashed lines are used to represent the circles of radius B and B − δ , respectively. Theorem 4.12. Consider Algorithm 1 with Stochastic Gradient Descent as solver and let k > 0 be the number of iterations, 0 < α ≤ min ( 1 2µ , 1 L ) be the step size and 0 < M ≤ N be the batch size such that (N −M) N ≤ (1− κ k d)(1− κd)B 2αν , where κd = √ (1− αµ). For θ0 ∈ BB−δ ,θ∗0 and rθ ∈ R such that r2θ ≥ αC2 2µ , (11) then, if E [ ‖θi − θ∗i ‖2 ] ≤ r2θ and |λi − λi+1| ≤ ̃, where ̃ := min {̄, } with ̄ = −rθ δ + 1 δ √ r2θ − αC2/2µ (1− 2αµ)k , (12) the following inequality holds E [ ‖θi+1 − θ∗i+1‖2 ] ≤ r2θ . (13) Proof. See Section E.2 in the appendix. The results derived in Theorem 4.12 show that the homotopy method used in combination with SGD allows to track in expectation an rθ-optimal solution across the parametric problems for “small enough” variations of the homotopy parameter, i.e. ∆λ ≤ ̃. Notice that rθ can potentially be smaller than B − δ and has to be bigger than the radius of the “noise-dominant” hypersphere centered at the minimizers, i.e. r2θ ≥ αC 2 2µ . In particular, by exploiting the local structure of the parametric problems we derive the maximum allowed variation of the homotopy parameter across the iterations of Algorithm 1. The derived upper bound is inversely proportional to the strong regularity constant δ and depends on the number of iterations k performed with SGD, such that the more iterations we perform on each parametric problem the more we are allowed to change the homotopy parameter. Finally, notice that these results can be applied recursively across the parametric problems. 5 TRANSFERRING OPTIMALITY VIA HOMOTOPY METHODS In this section we describe a possible application of homotopy methods to solve supervised regression and classification tasks. We address the case where deep neural networks are used as models. We start by introducing the problem framework of supervised learning and then we propose two different homotopy functions for the regression and classification scenarios, respectively. 5.1 PROBLEM FORMULATION Despite the generality of the proposed methodology, in this work we specifically address the supervised learning framework, and, in particular, when the predictive model is constituted by a deep neural network f(x; θ) parameterized by θ ∈ Rd. In the supervised learning scenario, independently from the type of task t, we typically dispose of a training set Dt consisting of N pairs of examples (xj , yj). The goal of the learning process is to find a value of θ that minimizes an objective function which measures the discrepancy between the outputs produced by the network ŷ = f(x; θ) and the target outputs y. In particular, the learning process consists in minimizing the following empirical objective function J(θ) := 1 N ∑ (xj ,yj)∈Dt `(yj , f(xj ; θ)) , (14) whose non-convexity originates from the high non-convexity of our model f . In the classical setting, J is chosen based on the KL divergence between the target data distribution Qx,y , with density qx,y = q(y|x)q(x), and the learned data distribution Px,y(θ), with density px,y = p(y|x; θ)q(x), where p(y|x; θ) is modeled via a neural network, (Goodfellow et al., 2016). With the appropriate approximations, this leads to the following form for the objective function J(θ) = 1 N ∑ (xj ,yj)∈Dt q(y|x) log q(y|x) p(y|x; θ) . 
(15) 5.2 HOMOTOPY FUNCTIONS ACROSS DATA DISTRIBUTIONS Finding a value of θ that attains a local minimum of the objective function in (14) is often a hard optimization task, given the high dimensionality and non-convexity of the problem. In addition, prior knowledge regarding the localization of the solutions is rarely available. The complexity of minimizing such functions also depends in some non-trivial way on the task distribution Qx,y that is addressed (e.g., Ionescu et al. (2016), Zendel et al. (2017)). For some tasks, convergence to a good approximate solution is achieved after a few epochs, while for other tasks, orders of magnitude more iterations are required to reach the neighborhood of a solution. In this perspective, different heuristics have been recently proposed in the attempt of re-using across different data distributions the prior knowledge gained from approximately solving the learning problem associated with a certain task. The question whether we could exploit easy-to-solve or already-solved tasks to speed up and improve the learning of unsolved hard tasks arises. The method we propose in this paper addresses this question and attempts to do so by using a rigorous and well-established mathematical framework, with the goal of speeding up the learning process in presence of hard-to-solve tasks. In the perspective of homotopy methods, this goal can be achieved under some assumptions by defining a homotopy transformation between starting and target tasks and by following the procedure described in Algorithm 1. Despite the flexibility and generality of the method, with this work we only focus on homotopy deformations across different task distributions, but similar transformations can be applied in numerous different manners that are also worth exploring, e.g., progressively modifying the architecture of the network or the weights of the objective function terms. Let s be the source task with training data Ds of pairs (xs, ys) ∼ Qxs,ys whose good approximate solution θ∗s for the minimization of the objective in (14) is available (or cheaply computable), and let t denote the target task with training data Dt of pairs (xt, yt) ∼ Qxt,yt whose conditional distribution we aim to learn. We propose two different homotopy deformations from task s to task t for regression and classification, respectively. 5.2.1 SUPERVISED REGRESSION In the supervised regression scenario, by modeling the density of the conditional learned distribution as p(y|x; θ) = N ( y; f(x, θ), σ2 I ) and using the approximate KL divergence objective function described in (15), we recover the mean squared error as minimization criterion. The proposed homotopy deformation is based on the following equations yλ|x = (1− λ) ys|x+ λ yt|x , (16) p(yλ|x) = N (yλ ; f(x; θ), σ2 I) . (17) Notice that the transformation described in (16) preserves the unimodality of the conditional distribution (see caption of Figures 4a and 4b in the appendix), and, when used in combination with the objective function defined in Equation (15), leads to the minimization w.r.t. θ of H(θ, λ) := E(x,yλ) ‖(1− λ) (ys − f(x; θ)) + λ (yt − f(x; θ)) ‖ 2 . (18) See Figure 6a in the appendix for a graphical representation of this homotopy deformation when applied to gradually transform a one-dimensional sine wave function with a frequency of 1 radian into a one-dimensional sine wave function with a frequency of 137 radians. 
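A minimal sketch of the empirical objective implied by Equations (16)–(18), assuming paired source and target outputs observed at the same inputs (the same-support requirement discussed next). The function name and the NumPy dependency are choices of this sketch, not the paper's code.

```python
import numpy as np

def regression_homotopy_loss(pred, y_source, y_target, lam):
    """Empirical version of H(theta, lam) in Eq. (18).

    Because (1 - lam) * (ys - f) + lam * (yt - f) = ((1 - lam) * ys + lam * yt) - f,
    this is simply the mean squared error against the interpolated label of Eq. (16):
    at lam = 0 it is the source-task MSE, at lam = 1 the target-task MSE.
    """
    y_lam = (1.0 - lam) * np.asarray(y_source) + lam * np.asarray(y_target)
    return float(np.mean((y_lam - np.asarray(pred)) ** 2))
```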
A downside of this homotopy deformation is that the same support for x is required (the absence of the subscripts s and t on x stands to indicate that the same realization for xs and xt has to be considered). Alternatively, it is possible to approximate (16) by using a Gaussian filter (see Figure 6b and Section B in the appendix). 5.2.2 SUPERVISED CLASSIFICATION In the case of supervised classification, by modeling the density of the conditional learned distribution as p(y|x; θ) = Multinoulli(y; f(x; θ)), and using the approximate KL divergence objective function described in (15), we recover the cross-entropy loss function, (Goodfellow et al., 2016). A possible homotopy deformation for the classification case consists in applying the following transformations xλ = (1− λ)xs + λxt , (19) yλ|xλ = (1− λ) ys|xs + λ yt|xt , (20) which corresponds to the use of probabilistic labels. See Figure 8 in the appendix for a graphical representation of the proposed homotopy deformation. The corresponding label vector for the deformed image represented in Figure 8b is y0.5 = [0, 0, 0.5, 0, 0, 0.5, 0, 0, 0, 0], given that λ = 0.5 and that the sampled realizations of xs and xt, represented in Figures 8a and 8c, belong to class 2 and 5, respectively. 6 EXPERIMENTAL EVALUATION In this section, we present some experimental evaluations of homotopy methods when applied to solve supervised regression and classification tasks. As homotopy functions we adopt the ones discussed in Section 5.2. We empirically show that homotopy methods outperform random and warm-start initialization schemes in terms of numerical performance. In particular, when the target task is complex and/or, in the transfer-learning scenario, when the data distributions are significantly different, continuation methods can achieve significant speed-up compared to random and warm-start initializations. We believe that their superior numerical performance relies on the use of homotopy functions that progressively deform the data distribution from an easy-to-solve or alreadysolved task to the target data distribution. In addition, consistently across all the benchmarks, our homotopy-based method shows faster convergence than random-initialization and faster or comparable convergence than warm-start initialization. When the source task is “similar” to the target one, there is indeed no need to gradually vary the λ parameter in Algorithm 1, but it suffices to directly set it to 1. In this extreme case, our homotopy method boils down to warm-start initialization. 6.1 REGRESSION For the supervised regression scenario, the problem we address is how to transfer “optimality knowledge” across two tasks that involve regressing from the input to the output of two sine wave functions with different values of phase ω. Each considered dataset has 10000 samples split across training and testing, where x and y are defined as follows x ∼ U(0, 1) , y = sin(ωx) + ε , ε ∼ N (0, 0.01) . (21) The goal is to start with an “easy-to-learn” task, i.e. ω ≈ 1 rad, whose optimum is available by performing only few epochs with a first-order optimizer, e.g. SGD, Adam, and progressively transfer the “optimality knowledge” to a more complex task, i.e. ω >> 1 rad, by approximately solving the homotopy problems for increasing values of λ as described in Algorithm 1. We set ω = 1 rad for our source task distribution, and study the performance of the proposed approach with homotopy function as described in Equation (16) for different target distributions with ω >> 1 rad. 
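The following sketch illustrates this setup: generating the source (ω = 1 rad) and target (ω ≫ 1 rad) sine-wave datasets of Equation (21) on a shared input support, and forming the interpolated labels of Equation (16) that Algorithm 1 fits for increasing λ. Reading N(0, 0.01) as a variance of 0.01 and setting γ = 10 are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n, gamma = 10_000, 10
x = rng.uniform(0.0, 1.0, size=n)                        # shared support for x (Eq. 21)
y_source = np.sin(1.0 * x) + rng.normal(0.0, 0.1, n)     # omega = 1 rad (easy source task)
y_target = np.sin(137.0 * x) + rng.normal(0.0, 0.1, n)   # omega = 137 rad (hard target task)

# Interpolated regression targets fitted at each homotopy step (Eq. 16).
for i in range(1, gamma + 1):
    lam = i / gamma
    y_lam = (1.0 - lam) * y_source + lam * y_target
    # ...run k solver iterations on the pairs (x, y_lam), warm-started from the
    # parameters obtained at the previous value of lam (Algorithm 1)...
```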
See Figures 5a and 5b in the appendix for a visualization of the source data distribution with ω = 1 rad and the target data distribution when ω = 137 rad, respectively. The regressor is a feedforward neural network with 6 hidden layers of 100 units each and relu as activation function. In order to make the experiments more robust with respect to the choice of the step size α, we use Adam as optimizer. For the experiments in Figures 1a–1b, Figures 7a–7b in the appendix, and Figure 2a, we set α = 0.001, γ = 10, k = 200 and then performed an additional 500 epochs on the final target problem, while for the experiments in Figure 2b, we set γ = 10, k = 300 and performed an additional 600 epochs on the final target problem. In this last scenario we set α = 0.001 and then decrease it with a cosine annealing schedule to observe convergence to an optimum. As shown in Figures 1a–1b, Figures 7a–7b in the appendix, and Figures 2a and 2b, the homotopy method leads to faster convergence than the considered baselines by preserving the vicinity to an optimal solution for problems H(θ, λ) across the different λ values. In particular, we achieve a training loss up to two orders of magnitude better than the considered baselines. 6.2 CLASSIFICATION For the supervised classification scenario, we first apply the continuation method with the homotopy deformation described in Equations (19) and (20) in order to transfer optimality from the MNIST task, a notoriously “easy-to-learn” task for neural networks, to the FashionMNIST task. Since the two datasets have the same input dimensionality and the same number of classes, no additional preprocessing of the data is required. As network architecture, we use a VGG-type network, (Simonyan & Zisserman, 2015), and Adam as optimizer with a step size of α = 0.001. Secondly, we consider CIFAR-10 as target data distribution. Differently from the previous scenario, padding of the MNIST samples is required in order to apply Equation (19). The MNIST samples are also replicated across three channels. Also in this case we adopt a VGG-type network, (Simonyan & Zisserman, 2015), and Adam as optimizer with a step size of α = 0.0001. As shown in Figures 3a and 3b, in both benchmarks the homotopy method leads to faster convergence than random initialization. While in the second benchmark our method reaches a lower value of training loss in fewer epochs than warm-start, in the MNIST-to-FashionMNIST case the performance is comparable to using warm-start initialization. A possible interpretation is that, when the source and target task distributions are “too similar”, as we hypothesize in the MNIST- to-FashionMNIST scenario, then there is no need for homotopy deformations to be applied, i.e. 0 < λ < 1, but we can directly apply λ = 1 in our scheme, which corresponds to simply using warm-start initialization. 7 CONCLUSIONS In this paper we propose a new methodology based on homotopy methods in order to transfer knowledge across different task distributions. In particular, our homotopy-based method allows one to exploit easy-to-solve or already-solved learning problems to solve new and complex tasks, by approximately and sequentially solving a sequence of optimization problems where the task distribution is gradually deformed from the source to the target one. 
We conduct a theoretical analysis of a general homotopy method in a simplified setting, and then we test our method on some popular deep learning benchmarks, where it shows superior numerical performance compared to random and warm-start initialization schemes. The proposed framework, in its limiting case, corresponds to the widely used fine-tuning heuristic, allowing for a new and more rigorous interpretation of the latter. Finally, the generality of homotopy methods also opens many novel and promising research directions in fundamental fields for deep learning, such as stochastic non-convex optimization and transfer-learning. ACKNOWLEDGMENTS This work has partly been supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme under grant no. 716721 as well as by the German Federal Ministry for Economic Affairs and Energy (BMWi) via DyConPV (0324166B), and by DFG via Research Unit FOR 2401. In addition, Q. Tran-Dinh has partly been supported by the National Science Foundation (NSF), grant. no. 1619884. The authors thank Stefan Falkner for his helpful suggestions and comments. A PROPERTIES OF HOMOTOPIC FUNCTIONS Among the numerous properties of homotopic functions, we recall the following ones Proposition A.1. Suppose that there exists a homotopy H : Z × [0, 1]→ Y from g to f , i.e. g ' f . Then • g ' g (reflexive property) • g ' f =⇒ f ' g (symmetric property) • g ' f and f ' h =⇒ g ' h (transitive property) Proof. See proof of Theorem 1.5 in (Suciu, 2016). Proposition A.2. Let g, g′ : Z → Y and f, f ′ : Y →W be continuous maps, and let f ◦ g, f ′ ◦ g′ : Z →W be the respective composite maps. If g ' g′ and f ' f ′, then f ◦ g ' f ′ ◦ g′. Proof. See proof of Proposition 1.7 in (Suciu, 2016). B APPROXIMATION VIA GAUSSIAN FILTER For the supervised regression scenario, we propose the following homotopy deformation yλ|x = λ ys|x+ (1− λ) yt|x . (22) A downside of this homotopy function is that the same support for x is required (the absence of the subscripts s and t on x stands to indicate that the same realization for xs and xt has to be considered). Alternatively, it is possible to approximate Equation (22) by using a Gaussian filter, as depicted in Figure 6b. In particular, having sampled one realization z of the pair (xs, ys) from the training set Ds, 0 < MGF ≤ N realizations of the pair (xt, yt) are sampled from Dt. Each yt,j realization is then weighted based on the vicinity of xt,j to the sampled xs,z realization. This leads to the following approximation of the z realization of yλ yλ,z = (1− λ) ys,z + λ MGF MGF∑ j=1 wj yt,j , (23) wj = 1√ 2πξ2 exp ( −||xs,z − xt,j || 2 2ξ2 ) , (24) where ξ > 0 is the standard deviation of the Gaussian filter. C ADDITIONAL FIGURES D LOCAL ERROR BOUNDS FOR SGD ITERATES Before proving local error bounds for SGD iterates in the considered framework, given the local nature of our assumptions, we need to demonstrate two important facts, on which the proof relies. In particular, we need to show: • local linear contraction of Gradient Descent (GD) iterates, and that • starting in a hypersphere of radius B around a minimizer and given a “big enough” batch size, the next SGD iterate is also contained in this region for all possible realizations of the gradient estimate. Considering problem (4) with fixed parameter λi, in the following subsections we will refer to θ∗ = θ∗i , θk = θi,k and gk = g(θk, λi), where we drop the subscript i and the explicit dependence on λi in order to simplify the notation. 
The analysis holds for all fixed parameters λi. D.1 LOCAL LINEAR CONTRACTION OF GD ITERATES Let us use GD to solve the following optimization problem θ∗ ∈ arg min θ H(θ, λi) , where the objective function H fulfills Assumptions 4.2 and 4.3. We now derive error bounds on the iterates of GD θk+1 = θk − α∇θH(θk, λi) , where θk ∈ BB,θ∗ and 0 < α ≤ 1L is the step size. We start by applying the definition of GD iterates and then we exploit the introduced assumptions ‖θk+1 − θ∗‖2 = ‖θk − α∇θH(θk, λi)− θ∗‖2 = ‖θk − θ∗‖2 − 2α∇θH(θk, λi)T (θk − θ∗) + α2‖∇θH(θk, λi)‖2 strong convexity ≤ (1− αµ)‖θk − θ∗‖2 − 2α(H(θk, λi)−H(θ∗, λi)) + α2‖∇θH(θk, λi)‖2 corollary 4.2.1 ≤ (1− αµ)‖θk − θ∗‖2 − 2α(1− αL)(H(θk, λi)−H(θ∗, λi)) . Since H(θk, λi)−H(θ∗, λi) ≥ 0 and −2α(1− αL) ≤ 0 when 0 < α ≤ 1L , we can safely drop the second term and obtain the final result ‖θk+1 − θ∗‖2 ≤ (1− αµ)‖θk − θ∗‖2. See also Theorem 2.3 in (Gower, 2018) for a derivation where Assumptions 4.2 and 4.3 are required to hold globally. D.2 REALIZATION OF THE SGD ITERATES IN THE STRONG CONVEXITY AND L-SMOOTHNESS REGION AROUND A MINIMIZER We address the following optimization problem θ∗ ∈ arg min θ 1 N N∑ j=1 `j(θ, λi)︸ ︷︷ ︸ :=H(θ,λi) , where H fulfills Assumptions 4.2– 4.4. As proved in Section D.1, under Assumptions 4.2 and 4.3, whenever θ0 ∈ BB,θ∗ and 0 < α ≤ 1L , deterministic gradient descent iterates converge linearly with contraction rate κd := √ (1− αµ). In particular, the following inequality holds ‖θDk+1 − θ∗‖ ≤ κd · ‖θk − θ∗‖ , for any θk such that ‖θk− θ∗‖ ≤ B, and superscript D denotes iterates obtained by applying the full gradient∇Hk := ∇H(θk, λi) θDk+1 = θk − α∇Hk . Let θk+1 denote the iterate obtained by applying one iteration of stochastic gradient descent θk+1 = θk − αgk , where gk := 1M ∑ j∈M∇`j(θk, λi) and M is a set of 0 < M ≤ N indexes randomly sampled from N = {1, . . . , N}. Given any realization of θk s.t. ‖θk − θ∗‖ ≤ B and any realization of gk, by exploiting Assumption 4.4 and the results derived in Section D.1, we have that ‖θk+1 − θ∗‖ = ‖θk − αgk − θ∗‖ = ‖θk − α∇Hk + α∇Hk − αgk − θ∗‖ ≤ ‖θk − α∇Hk − θ∗‖+ α‖∇Hk − gk‖ = ‖θk − α∇Hk − θ∗‖+ α ∥∥∥ 1 N ∑ j∈N\M ∇`j + 1 N ∑ j∈M ∇`j − 1 M ∑ j∈M ∇`j ∥∥∥ = ‖θk − α∇Hk − θ∗‖+ α ∥∥∥ 1 N ∑ j∈N\M ∇`j + M −N NM ∑ j∈M ∇`j ∥∥∥ ≤ ‖θk − α∇Hk − θ∗‖+ α 1 N ∑ j∈N\M ‖∇`j‖+ N −M NM ∑ j∈M ‖∇`j‖ ≤ ‖θDk+1 − θ∗‖+ 2α (N −M) N ν ≤ κd‖θk − θ∗‖+ 2α (N −M) N ν . (25) Since we have assumed that the current realization of θk lies in the hypersphere of radius B around the optimal solution θ∗, by solving for N−MN the following inequality κdB + 2α (N −M) N ν ≤ B , we obtain that, whenever (N−M)N ≤ (1−κd) 2αν B, the realization of θk+1 will also lie in this region. These derivations show that when the realization of the current iterate θk lies in the hypersphere of radius B around the minimizer θ∗, and (N−M)N ≤ (1−κd) 2αν B, then the next iterate θk+1 will also lie in this region. Consequently, in our scenario, if we assume that the initial point θ0 lies in the hypersphere of radius B around the minimizer θ∗, then, by applying the derivations recursively, we can show that the iterates will remain in this local region around the minimizer where strong convexity and smoothness hold. D.3 PROOF OF PROPOSITION 4.9 Let us use SGD to solve the following optimization problem θ∗ ∈ arg min θ H(θ, λi) , where the objective function H fulfills Assumptions 4.2– 4.4. 
We now derive error bounds for the iterates of SGD θk+1 = θk − αgk , where gk is the unbiased estimate of ∇Hk defined in the previous section and fulfills Assumption 4.5, θk ∈ BB,θ∗ , 0 < α ≤ min ( 1 2µ , 1 L ) is the step size and the batch size is set to a value M such that (N−M)N ≤ (1−κd) 2αν B. We start by applying the definition of SGD iterates ‖θk+1 − θ∗‖2 SGD iterate = ‖θk − αgk − θ∗‖2 = ‖θk − θ∗‖2 − 2αgTk (θk − θ∗) + α2‖gk‖2 . We now take the expectation w.r.t. θ0, g0, . . . , gk−1, gk and, considering Assumptions 4.2- 4.5, we obtain the following series of inequalities Eθ0,g0,...,gk−1,gk [ ‖θk+1 − θ∗‖2 ] = Eθ0,g0,...,gk−1,gk [ ‖θk − θ∗‖2 − 2αgTk (θk − θ∗) +α2‖gk‖2 ] law of iterated expectations = Eθ0,g0,...,gk−1 [ Egk [ ‖θk − θ∗‖2 −2αgTk (θk − θ∗) + α2‖gk‖2 | θ0, g0, . . . , gk−1 ]] unbiased gk+bounded “variance” ≤ Eθ0,g0,...,gk−1 [ ‖θk − θ∗‖2 −2α∇HTk (θk − θ∗) ] + α2C2 strong convexity ≤ (1− 2αµ) · Eθ0,g0,...,gk−1 [ ‖θk − θ∗‖2 ] + α2C2 . By applying this result recursively, we derive the following bound on the error for the SGD iterates Eθ0,g0,...,gk−1,gk [ ‖θk+1 − θ∗‖2 ] ≤ (1− 2αµ)k+1 · Eθ0 [ ‖θ0 − θ∗‖2 ] + αC2 2µ . See also Section 3 in (Schmidt, 2014) for a derivation where Assumptions 4.2 and 4.3 are required to hold globally. E MAIN THEORETICAL CONTRIBUTIONS E.1 PROOF OF PROPOSITION 4.11 Proposition E.1. Let θi ∈ BB,θ∗i and |λi − λi+1| ≤ , with 0 ≤ ≤ B δ . If ‖θi − θ ∗ i ‖ ≤ B − δ , then ‖θi − θ∗i+1‖ ≤ B. Moreover, let κd = √ (1− αµ) and assume that (N −M) N ≤ (1− κ k d)(1− κd)B 2αν , and ≤ 1 δ ( (1− κkd)B − (N −M) N 2αν (1− κd) ) . Then, after applying k iterations of SGD, we obtain that ‖θi+1 − θ∗i+1‖ ≤ B − δ . Proof. ‖θi − θ∗i+1‖ = ‖θi − θ∗i + θ∗i − θ∗i+1‖ Triangle Ineq. ≤ ‖θi − θ∗i ‖+ ‖θ∗i − θ∗i+1‖ Assumption 4.7 ≤ ‖θi − θ∗i ‖+ δ|λi − λi+1| . Finally, using the fact that |λi − λi+1| ≤ , it follows that, if ‖θi − θ∗i ‖ ≤ B − δ with 0 ≤ ≤ Bδ , then ‖θi − θ∗i+1‖ ≤ B. We now derive the conditions on such that ‖θi+1 − θ∗i+1‖ ≤ B − δ . By applying recursively the results derived in Section D.2 (25), we obtain that ‖θi+1 − θ∗i+1‖ ≤ κkd‖θi − θ∗i+1‖+ 2α (N −M) N ν k−1∑ i=0 κid . By using the limit of the geometric series, we have that ‖θi+1 − θ∗i+1‖ ≤ κkd‖θi − θ∗i+1‖+ (N −M) N 2αν (1− κd) . Finally, by considering that ‖θi − θ∗i+1‖ ≤ B and by solving in the following inequality κkdB + (N −M) N 2αν (1− κd) ≤ B − δ , we obtain the following upper bound on ≤ 1 δ ( (1− κkd)B − (N −M) N 2αν (1− κd) ) , from which also the extra condition on the batch size (N −M) N ≤ (1− κ k d)(1− κd)B 2αν . Figure 9: Graphical representation of the results derived in Proposition 4.11. The continuous and dashed lines are used to represent the circles of radius B and B − δ around the optimal solutions, respectively. E.2 PROOF OF THEOREM 4.12 Theorem E.2. Consider Algorithm 1 with Stochastic Gradient Descent as solver and let k > 0 be the number of iterations, 0 < α ≤ min ( 1 2µ , 1 L ) be the step size and 0 < M ≤ N be the batch size such that (N −M) N ≤ (1− κ k d)(1− κd)B 2αν , where κd = √ (1− αµ). For θ0 ∈ BB−δ ,θ∗0 and rθ ∈ R such that r2θ ≥ αC2 2µ , (26) then, if E [ ‖θi − θ∗i ‖2 ] ≤ r2θ and |λi − λi+1| ≤ ̃, where ̃ := min {̄, } with ̄ = −rθ δ + 1 δ √ r2θ − αC2/2µ (1− 2αµ)k , (27) the following inequality holds E [ ‖θi+1 − θ∗i+1‖2 ] ≤ r2θ . (28) Proof. E [ ‖θi+1 − θ∗i+1‖2 ] Ineq. 10 ≤ (1− 2αµ)kE [ ‖θi − θ∗i+1‖2 ] + αC2 2µ = (1− 2αµ)kE [ ‖θi − θ∗i + θ∗i − θ∗i+1‖2 ] + αC2 2µ Triangle Ineq. 
$$
\begin{aligned}
&\le (1-2\alpha\mu)^k\,\mathbb{E}\!\left[\big(\|\theta_i-\theta_i^*\|+\|\theta_i^*-\theta_{i+1}^*\|\big)^2\right]+\frac{\alpha C^2}{2\mu}\\
&= (1-2\alpha\mu)^k\,\mathbb{E}\!\left[\|\theta_i-\theta_i^*\|^2+\|\theta_i^*-\theta_{i+1}^*\|^2+2\,\|\theta_i-\theta_i^*\|\,\|\theta_i^*-\theta_{i+1}^*\|\right]+\frac{\alpha C^2}{2\mu}\\
&\overset{\text{Assumption 4.7}}{\le} (1-2\alpha\mu)^k\,\mathbb{E}\!\left[\|\theta_i-\theta_i^*\|^2+\delta^2|\lambda_i-\lambda_{i+1}|^2+2\,\delta\,\|\theta_i-\theta_i^*\|\,|\lambda_i-\lambda_{i+1}|\right]+\frac{\alpha C^2}{2\mu}\\
&\le (1-2\alpha\mu)^k\big(\delta^2\tilde{\epsilon}^{\,2}+2\delta r_\theta\tilde{\epsilon}+r_\theta^2\big)+\frac{\alpha C^2}{2\mu}\,.
\end{aligned}
$$
We now solve in $\tilde{\epsilon}$ the following second-degree inequality
$$
(1-2\alpha\mu)^k\big(\delta^2\tilde{\epsilon}^{\,2}+2\delta r_\theta\tilde{\epsilon}+r_\theta^2\big)+\frac{\alpha C^2}{2\mu}\le r_\theta^2\,. \tag{29}
$$
Since the left-hand side equals $(1-2\alpha\mu)^k(\delta\tilde{\epsilon}+r_\theta)^2+\frac{\alpha C^2}{2\mu}$, inequality (29) admits real solutions if and only if $r_\theta^2\ge \frac{\alpha C^2}{2\mu}$. In particular, inequality (29) holds for all $\tilde{\epsilon}\in[0,\bar{\epsilon}]$, where
$$
\bar{\epsilon}=-\frac{r_\theta}{\delta}+\frac{1}{\delta}\sqrt{\frac{r_\theta^2-\alpha C^2/2\mu}{(1-2\alpha\mu)^k}}\,.
$$
F EXPERIMENTAL EVALUATION: TEST PERFORMANCES
F.1 REGRESSION
F.2 CLASSIFICATION
1. What is the focus and contribution of the paper on transfer learning? 2. What are the strengths of the proposed approach, particularly in terms of theoretical guarantees? 3. Do you have any concerns or suggestions regarding the experimental evaluation? 4. How does the reviewer assess the clarity, novelty, and overall quality of the paper's content? 5. Are there any specific points or areas where the reviewer has questions or suggests improvements?
Review
Review
Contribution: This paper proposes an algorithm for transferring knowledge from easy-to-solve to complex tasks, or from already-solved to new tasks. It relies on homotopy functions and sequentially solves a sequence of optimization problems in which the task distribution is gradually deformed from a source task to the target task. Theoretical guarantees are provided and proven in a strongly convex setting. The main theoretical result shows that the expected distance of the final solution to its optimum remains within the same radius that bounds the distance of the initial source solution to its optimum, so a near-optimal solution for the source task leads to a near-optimal solution for the target task. Regression and classification experiments show competitive results compared to random and warm-start initialization schemes.
Clarity: Overall, the paper is well written, well motivated and well structured. The technical content is also very clear and excellent. Minor point: there seems to be a notation error in Proposition G.1 and its proof (i instead of i+1).
Novelty: The novelty in this work lies in the application of homotopy methods to the transfer-learning setting. The mathematical guarantees are also new and may even offer a new way to interpret the fine-tuning methods that have been so successful in the recent literature. However, given the non-convexity of DNNs, it seems that the analysis in the non-convex setting and its implications should be part of the main text.
Experiments: Overall, the experiments are very insightful but limited, since only the training loss is shown and the validation performance is not evaluated at all. Other things that could help in assessing the quality of the method are: a comparison to curriculum-learning methods, a more in-depth analysis of the impact of k and γ in both the regression and classification settings, and solving toy convex optimization problems to bridge the gap between theory and application.
Preliminary rating: * Accept *
ICLR
Title Untangling Effect and Side Effect: Consistent Causal Inference in Non-Targeted Trials Abstract A treatment is usually appropriate for some group (the “sick” group) on whom it has an effect, but it can also have a side-effect when given to subjects from another group (the “healthy” group). In a non-targeted trial both sick and healthy subjects may be treated, producing heterogeneous effects within the treated group. Inferring the correct treatment effect on the sick population is then difficult, because the effect and side-effect are tangled. We propose an efficient nonparametric approach to untangling the effect and side-effect, called PCM (pre-cluster and merge). We prove its asymptotic consistency in a general setting and show, on synthetic data, more than a 10x improvement in accuracy over existing state-of-the-art. N/A A treatment is usually appropriate for some group (the “sick” group) on whom it has an effect, but it can also have a side-effect when given to subjects from another group (the “healthy” group). In a non-targeted trial both sick and healthy subjects may be treated, producing heterogeneous effects within the treated group. Inferring the correct treatment effect on the sick population is then difficult, because the effect and side-effect are tangled. We propose an efficient nonparametric approach to untangling the effect and side-effect, called PCM (pre-cluster and merge). We prove its asymptotic consistency in a general setting and show, on synthetic data, more than a 10x improvement in accuracy over existing state-of-the-art. 1 INTRODUCTION A standard approach to causal effect estimation is the targeted randomized controlled trial (RCT), see (8; 13; 15; 17; 23). To test a treatment’s effect on a sick population, subjects are recruited and admitted into the trial based on eligibility criteria designed to identify sick subjects. The trial subjects are then randomly split into a treated group that receives the treatment and a control group that receives the best alternative treatment (or a placebo). “Targeted” means only sick individuals are admitted into the trial via the eligibility criteria, with the implicit assumption that only a single treatment-effect is to be estimated. This ignores the possibility of treated subgroups among the sick population with heterogeneous effects. Further, one often does not have the luxury of a targeted RCT. For example, eligibility criteria for admittance to the trial may not unambiguously identify sick subjects, or one may not be able to control who gets into the trial. When the treatment is not exclusively applied on sick subjects, we say the trial is non-targeted and new methods are needed to extract the treatment effect on the sick, (25). Non-targeted trials are the norm whenever subjects self-select into an intervention, which is often the case across domains stretching from healthcare to advertising. We propose a nonparametric approach to causal inference in non-targeted trials, based on a pre-cluster and merge strategy. Assume a population is broken into ℓ groups with different expected treatment effects in each group. Identify each group with the level of its treatment effect, so there are effect levels c = 0, 1, . . . , ℓ−1. For example, a population’s subjects can be healthy, c = 0, or sick, c = 1. We use the RubinNeyman potential outcome framework, (19). 
A subject is a tuple s = (x, c, t, y) sampled from a distribution D, where x ∈ [0, 1]d is a feature-vector such as [age, weight], c indicates the subject’s level, t indicates the subjects treatment cohort, and y is the observed outcome. The observed outcome is one of two potential outcomes, v if treated or v̄ if not treated. We consider strongly ignorable trials: given x, the propensity to treat is strictly between 0 and 1 and the potential outcomes {v, v̄} depend only on x, independent of t. In a strongly ignorable trial, one can use the features to identify counterfactual controls for estimating effect. The level c is central to the scope of our work. Mathematically, c is a hidden effect modifier which determines the distribution of the potential outcomes (c is an unknown and possibly complex function of x). The level c dichotomizes the feature space into subpopulations with different effects. One tries to design the eligibility criteria for the trial to ensure that the propensity to treat is non-zero only for subjects in one level. What to do when the eligibility criteria allow more than one level into the trial is exactly the problem we address. Though our work applies to a general number of levels, all the main ideas can be illustrated with just two levels, c ∈ {0, 1}. For the sake of concreteness, we denote these two levels healthy and sick. A trial samples n subjects, s1, . . . , sn. If subject i is treated, ti = 1 and the observed outcome yi = vi, otherwise ti = 0, and the observed outcome is v̄i (consistency). The treated group is T = {i | ti = 1}, the control group is C = {i | ti = 0}, and the sick group is S = {i | ci = 1}. Our task is to determine if the treatment works on the sick, and if there is any side-effect on the healthy. We wish to estimate the effect and side-effect, defined as EFF = ED[v − v̄ | c = 1] (1) SIDE-EFF = ED[v − v̄ | c = 0]. Most prior work estimates EFF using the average treatment effect for the treated, the ATT (1), ATT = averagei∈T (vi)− averagei∈T (v̄i), (2) which assumes all treated subjects are sick. There are several complications with this approach. (i) Suppose a subject is treated with probability p(x, c), the propensity to treat. For a non-uniform propensity to treat, the treated group has a selection bias, and ATT is a biased estimate of EFF. Ways to address this bias include inverse propensity weighting, (18), matched controls, (1), and learning the outcome function y(x, t), see for example (2; 3; 10; 12; 22; 23). Alternatively, one can simply ignore this bias and accept that ATT is estimating E[v − v̄ | t = 1]. (ii) The second term on the RHS in (2) can’t be computed because we don’t know the counterfactual v̄ for treated subjects. Much of causal inference deals with accurate unbiased estimation of averagei∈T (v̄i), (4; 9). Our goal is not to improve counterfactual estimation. Hence, in our experiments, we use off-the-shelf counterfactual estimators. (iii) (Focus of our work) The trial is non-targeted and some (often most) treated subjects are healthy. To highlight the challenge in (iii) above, consider a simple case with uniform propensity to treat, p(x, c) = p. Conditioning on at least one treated subject, E[ATT] = P[sick]× EFF + P[healthy]× SIDE-EFF. The ATT is a mix of effect and side effect and is therefore biased when the treatment effect is heterogeneous across levels. In many settings, for example healthcare, P[sick] ≪ P[healthy] and the bias is extreme, rendering ATT useless. Increasing the number of subjects won’t resolve this bias. 
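To see the mixing bias concretely, the following sketch simulates a non-targeted trial with a uniform propensity to treat, with both potential outcomes observed so that counterfactual estimation plays no role. The pooled ATT lands between the true effect and side-effect, weighted by the sick/healthy proportions, regardless of sample size. All numbers are synthetic and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_sick, p_treat = 200_000, 0.05, 0.3
true_eff, true_side_eff = 2.0, 0.1                 # effect on the sick, side-effect on the healthy

c = rng.binomial(1, p_sick, n)                     # level: 1 = sick, 0 = healthy
t = rng.binomial(1, p_treat, n)                    # uniform propensity to treat
ite = np.where(c == 1, true_eff, true_side_eff) + rng.normal(0.0, 1.0, n)   # v - v_bar per subject

att = ite[t == 1].mean()                           # ATT pooled over the whole treated group
print(f"ATT = {att:.3f}  vs  EFF = {true_eff}, SIDE-EFF = {true_side_eff}")
# ATT ~= p_sick * EFF + (1 - p_sick) * SIDE-EFF ~= 0.195, far from the effect on the sick.
```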
State-of-the-art causal inference packages provide methods to compute ATT, specifically aimed at accurate estimates of the counterfactual averagei∈T (v̄i), (5; 21). These packages suffer from the mixing bias above. We propose a fix which can be used as an add-on to these packages. Our Contribution. Our main result is an asymptotically consistent distribution independent algorithm to extract the correct effect levels and associated subpopulations in non-targeted trials, when the number of effect-levels is unknown. Our main result is Theorem 1. Assume a non-targeted trial has a treated group with n subjects sampled from an unknown distribution D. There is an algorithm which identifies ℓ̂ effect-levels with estimated expected effect µ̂c in level c, and assigns each subject si to a level ĉi which, under mild technical conditions, satisfies: Theorem 1. All of the following hold with probability 1− o(1): (1) ℓ̂ = ℓ, i.e., the correct number of effect levels ℓ is identified. (2) µ̂c = E[v − v̄ | c] + o(1), i.e., the effect at each level is estimated accurately. (3) The fraction of subjects assigned the correct effect level is 1 − o(1). The effect level ĉi is correct if µĉi matches, to within o(1), the expected treatment effect for the subject. For the formal assumptions, see Section 3. Parts (1) and (2) say the algorithm extracts the correct number of levels and their expected effects. Part (3) says the correct subpopulations for each level are extracted. Knowing the correct subpopulations is useful for post processing, for example to understand the effects in terms of the features. Our algorithm satisfying Theorem 1 is given in Section 2. The algorithm uses an unsupervised pre-cluster and merge strategy which reduces the task of estimating the effect-levels to a 1-dimensional optimal clustering problem that provably extracts the correct levels asymptotically as n → ∞. Our algorithm assumes an unbiased estimator of counterfactuals, for example some established method (5; 21). In practice, this means one can control for confounders. If unbiased counterfactual estimation is not possible, then any form of causal effect analysis is doomed. Our primary goal is untangling the heterogeneous effect levels, hence we use an off-the-shelf gradient boosting algorithm to get counterfactuals in our experiments (5). We demonstrate that our algorithm’s performance on synthetic data matches the theory. Subpopulation effect-analysis is a special case of heterogeneous treatment effects (HTE), (12; 20; 23). Hence, we also compare with X-Learner, a state-of-the art algorithm for HTE (12) and Bayes optimal prediction of effect-level. In comparison to X-Learner, our algorithm extracts visually better subpopulations, and has an accuracy that is more than 10× better for estimating per-subject expected effects. Note, HTE algorithms do not extract subpopulations with effect-levels. They predict effect given the features x. One can, however, try to infer subpopulations from predicted effects. Our algorithm also significantly outperforms Bayes optimal based on individual effects, which suggests that some form of pre-cluster and merge strategy is necessary. This need for some form of clustering has been independently observed in (11, chapter 4) who studies a variety of clustering approaches in a non-distribution independent setting with a known number of levels. 
2 ALGORITHM: PRE-CLUSTER AND MERGE FOR SUBPOPULATION EFFECTS (PCM)

Our algorithm uses a nonparametric pre-cluster and merge strategy that achieves asymptotic consistency without any user-specified hyperparameters. The inputs are the n subjects s_1, . . . , s_n, where {s_i}_{i=1}^n = {(x_i, t_i, y_i, ȳ_i)}_{i=1}^n. Note, both the factual y_i and the counterfactual ȳ_i are inputs to the algorithm. To use the algorithm in practice, of course, the counterfactual must be estimated, and for our demonstrations we use an out-of-the-box gradient boosting regression algorithm from (7; 16) to estimate counterfactuals. Inaccuracy in counterfactual estimation will be accommodated in our analysis. The need to estimate counterfactuals does impact the algorithm in practice, due to an asymmetry in most trials: the treated population is much smaller than the controls. Hence, one might be able to estimate counterfactuals for the treated population but not for the controls, due to lack of coverage by the (small) treated population. In this case, our algorithm is only run on the treated population. It is convenient to define individual treatment effects ITE_i = (y_i − ȳ_i)(2t_i − 1), where y_i is the observed factual and ȳ_i the counterfactual (2t_i − 1 = ±1 ensures that the effect computed is for treatment versus no treatment). There are five main steps.

1: [PRE-CLUSTER] Cluster the x_i into K ∈ O(√n) clusters Z_1, . . . , Z_K.
2: Compute the ATT for each cluster Z_j, ATT_j = average_{x_i ∈ Z_j} ITE_i.
3: [MERGE] Group the {ATT_j}_{j=1}^K into ℓ̂ effect-levels, merging the clusters at each level to get subpopulations X_0, X_1, . . . , X_{ℓ̂−1}. (X_c is the union of all clusters at level c.)
4: Compute subpopulation effects µ̂_c = average_{x_i ∈ X_c} ITE_i, for c = 0, . . . , ℓ̂ − 1.
5: Assign subjects to effect levels, then update the populations X_c and expected effects µ̂_c.

We now elaborate on the intuition and details for each step in the algorithm.

Step 1. The clusters in the pre-clustering step play two roles. The first is to denoise individual effects using in-cluster averaging. The second is to group like with like, that is, clusters should be homogeneous, containing only subjects from one effect-level. This means each cluster-ATT will accurately estimate a single level's effect (we do not know which). We allow for any clustering algorithm. However, our theoretical analysis (for simplicity) uses a specific algorithm, box-clustering, based on an ε-net of the feature space. One could also use a standard clustering algorithm such as K-means. We compare box-clustering with K-means in the appendix.

Step 2. Denoising of the individual effects using in-cluster averaging. Assuming clusters are homogeneous, each cluster ATT will approximate some level's effect.

Step 3. Assuming the effects in different levels are well separated, this separation gets emphasized in the cluster-ATTs, provided clusters are homogeneous. Hence, we can identify effect-levels from the clusters with similar effects, and merge those clusters into subpopulations. Two tasks must be solved: finding the number of subpopulations ℓ̂, and then optimally grouping the clusters into ℓ̂ subpopulations. To find the subpopulations, we use ℓ̂-means with squared 1-dim clustering error. Our algorithm sets ℓ̂ to achieve an ℓ̂-means error at most log n/n^{1/2d}.
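To fix ideas before elaborating on the remaining steps, here is a minimal sketch of the five-step pipeline. This is not the authors' implementation: it assumes the estimated ITEs are already available, uses scikit-learn's KMeans both for the pre-clustering step (the analysis in Section 3 uses box-clustering instead) and for the 1-dim merge step (a stand-in for the exact dynamic-programming clustering cited in the next paragraph), and selects ℓ̂ with the log n/n^{1/2d} threshold. Function and variable names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def pcm(X, ite, max_levels=10, random_state=0):
    """Pre-cluster and merge (sketch). X: (n, d) features in [0, 1]^d, ite: (n,) estimated ITEs."""
    n, d = X.shape
    K = int(np.sqrt(n))                                   # Step 1: ~sqrt(n) pre-clusters
    labels = KMeans(n_clusters=K, n_init=3, random_state=random_state).fit_predict(X)
    cluster_att = np.array([ite[labels == j].mean() for j in range(K)])    # Step 2

    tau = np.log(n) / n ** (1.0 / (2 * d))                # error threshold log n / n^{1/2d}
    for l_hat in range(1, max_levels + 1):                # Step 3: smallest l_hat under tau
        km1 = KMeans(n_clusters=l_hat, n_init=10, random_state=random_state)
        cluster_level = km1.fit_predict(cluster_att.reshape(-1, 1))
        if km1.inertia_ / K <= tau:                       # mean squared 1-dim clustering error
            break

    level = cluster_level[labels]                         # subjects inherit their cluster's level
    mu = np.array([ite[level == c].mean() for c in range(l_hat)])          # Step 4

    # Step 5 (one EM-style pass): re-assign each subject to the level whose effect best
    # matches a locally smoothed ITE (quadratic scan for clarity; the paper bins points
    # into a (1/n^{1/2d})-grid to reach O(n*sqrt(n)) time), then re-estimate the effects.
    side = 1.0 / n ** (1.0 / (2 * d))
    for i in range(n):
        in_box = np.all(np.abs(X - X[i]) <= side / 2, axis=1)
        level[i] = int(np.argmin(np.abs(mu - ite[in_box].mean())))
    mu = np.array([ite[level == c].mean() if np.any(level == c) else np.nan
                   for c in range(l_hat)])
    return level, mu

# A call such as `level, mu = pcm(X, ite_hat)` returns a level per subject and an effect
# estimate per level; levels are indexed arbitrarily here rather than sorted by effect.
```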
Concretely, ℓ̂ is chosen as the smallest number of groups whose optimal 1-dim clustering error falls below the threshold:

    optimal 1-dim clustering error(ℓ̂ − 1) > log n/n^{1/2d},
    optimal 1-dim clustering error(ℓ̂) ≤ log n/n^{1/2d}.

Simultaneously finding ℓ̂ and optimally partitioning the clusters into ℓ̂ groups can be solved using a standard dynamic programming algorithm in O(K²ℓ̂) time using O(K) space (24). Note, our algorithm will identify the number of effect levels provided such distinct subpopulations exist in the data. If it is known that only two subpopulations exist, sick and healthy, then ℓ̂ can be hard-coded to 2.

Step 4. Assuming each cluster is homogeneous and clusters with similar effects found in step 3 are from the same effect-level, the subpopulations formed by merging the clusters with similar effects will be nearly homogeneous. Hence, the subpopulation-ATTs will be accurate estimates of the effects at each level.

Step 5. Each subject x_i is implicitly assigned a level ĉ_i based on the subpopulation X_c to which it belongs. However, we can do better. By considering the √n nearest neighbors to x_i, we can obtain a smoothed effect for x_i. We use this smoothed effect to place x_i into the subpopulation whose effect matches best, hence placing x_i into a level. Unfortunately, running this algorithm for all n subjects is costly, needing sophisticated data structures to reduce the expected run time below O(n²). As an alternative, we center a (1/n^{1/2d})-hypercube on x_i and smooth x_i's effect using the average effect over points in this hypercube. This approach requires O(n√n) run time to obtain the effect-levels for all subjects, significantly better than O(n²) when n is large. Once the effect-levels for all subjects are obtained, one can update the subpopulations X_c and the corresponding effect-estimates µ̂_c.

The run time of the algorithm is O(nℓ + n√n) (expected and with high probability) and the output is nearly homogeneous subpopulations which can now be post-processed. An example of useful post-processing is a feature-based explanation of the subpopulation memberships. Note that we still do not know which subpopulation(s) are the sick ones, hence we cannot say which is the effect and which is the side-effect. A post-processing oracle would make this determination. For example, a doctor in a medical trial would identify the sick groups from subpopulation demographics.

Note. The optimal 1-dim clustering can be done directly on the smoothed ITEs from the (1/n^{1/2d})-hypercubes centered on each x_i, using the same thresholds as in step 3. One still gets asymptotic consistency; however, the price is an increased run time of O(n²ℓ). This is prohibitive for large n.

3 ASYMPTOTIC CONSISTENCY: PROOF OF THEOREM 1

To prove consistency, we must make our assumptions precise. In some cases the assumptions are stronger than needed, for simplicity of exposition.

A1. The feature space X is [0, 1]^d and the marginal feature-distribution is uniform, D(x) = 1. More generally, X is compact and D(x) is bounded, 0 < δ ≤ D(x) ≤ ∆ (can be relaxed).

A2. The level c is an unknown function of the feature x, c = h(x). Potential effects depend only on c. Conditioning on c, effects are well separated. Let µ_c = E_D[v − v̄ | c]. Then |µ_c − µ_{c′}| ≥ κ for c ≠ c′.

A3. Define the subpopulation for level c as X_c = h^{−1}(c). Each subpopulation has positive measure, P[x ∈ X_c] = β_c ≥ β > 0.

A4. For a treated subject x_i with outcome y_i, it is possible to produce an unbiased estimate of the counterfactual outcome ȳ_i. Effectively, we are assuming an unbiased estimate of the individual treatment effect ITE_i = y_i − ȳ_i is available.
Any causality analysis requires some estimate of counterfactuals and, in practice, one typically gets counterfactuals from the untreated subjects after controlling for confounders (5; 21).

A5. Sample averages concentrate. Essentially, the estimated ITEs are independent. This is true in practice because the subjects are independent and the counterfactual estimates use a predictor learned from the independent control population. For m i.i.d. subjects, let the average of the estimated ITEs be ν̂ and the expectation of this average be ν. Then,

    P[|ν̂ − ν| > ϵ] ≤ e^{−γmϵ²}.

The parameter γ > 0 is related to distributional properties of the estimated ITEs. Higher-variance ITE estimates result in γ being smaller. Concentration is a mild technical assumption requiring the estimated effects to be unbiased, well-behaved random variables to which a central limit theorem applies. Bounded effects or normally distributed effects suffice for concentration.

A6. The boundary between the subpopulations has small measure. Essentially we require that two subjects that have very similar features will belong to the same level with high probability (the function c = h(x) is not a "random" function). Again, this is a mild technical assumption which is taken for granted in practice. Let us make the assumption more precise. Define an ε-net to be a subdivision of X into (1/ε)^d disjoint hypercubes of side ε. A hypercube of an ε-net is impure if it contains points from multiple subpopulations. Let N_impure be the number of impure hypercubes in an ε-net. Then ε^d N_impure ≤ αε^ρ, where ρ > 0 and α is a constant. Note, d − ρ is the boxing-dimension of the boundary. In most problems, ρ = 1.

A7. We use box-clustering for the first step in the algorithm. Given n, define ε(n) = 1/⌊n^{1/2d}⌋. All points in a hypercube of an ε(n)-net form a cluster. Note that the number of clusters is approximately √n. The expected number of points in a cluster is nε(n)^d ≈ √n.

We prove Theorem 1 via a sequence of lemmas. The feature space X = [0, 1]^d is partitioned into levels X_0, . . . , X_{ℓ−1}, where X_c = h^{−1}(c) is the set of points whose level is c. Define an ε-net that partitions X into N_ε = ε^{−d} hypercubes of equal volume ε^d, where ε is the side-length of the hypercube. Set ε = 1/⌊n^{1/2d}⌋. Then N_ε = √n(1 − O(d/n^{1/2d})) ∼ √n. Each hypercube in the ε-net defines a cluster for the pre-clustering stage. There are about √n clusters and, since D(x) is uniform, there are about √n points in each cluster. Index the clusters in the ε-net by j ∈ {1, . . . , N_ε} and define n_j as the number of points in cluster j. Formally, we have,

Lemma 1. Suppose D(x) ≥ δ > 0. Then P[min_j n_j ≥ (1/2)δ√n] > 1 − √n·exp(−δ√n/8).

Proof. Fix a hypercube in the ε-net. Its volume is ε^d ≥ (1/n^{1/2d})^d = 1/√n. A point lands in this hypercube with probability at least δ/√n. Let Y be the number of points in the hypercube. Then Y is a sum of n independent Bernoullis and E[Y] ≥ δ√n. By a Chernoff bound (14, page 70),

    P[Y < δ√n/2] ≤ P[Y < E[Y]/2] < exp(−E[Y]/8) ≤ exp(−δ√n/8).

By a union bound over the N_ε clusters,

    P[some cluster has fewer than δ√n/2 points] < N_ε exp(−δ√n/8) ≤ √n·exp(−δ√n/8).

The lemma follows by taking the complement event.

For uniform D(x), δ = 1 and every cluster has at least (1/2)√n points with high probability. We can now condition on this high-probability event that every cluster is large. This means that a cluster's ATT is an average of many ITEs, which by A5 concentrates at the expected effect for the hypercube.
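As an aside, here is a minimal sketch of the box-clustering in A7 (illustrative code, not the authors'): it bins points of [0, 1]^d into the hypercubes of the ε(n)-net and returns one cluster label per point.

```python
import numpy as np

def box_cluster(X):
    """Assign each row of X (points in [0, 1]^d) to a hypercube of the eps(n)-net."""
    n, d = X.shape
    m = int(np.floor(n ** (1.0 / (2 * d))))        # cubes per axis: 1/eps(n) = floor(n^{1/2d})
    idx = np.minimum((X * m).astype(int), m - 1)   # cube index along each axis (clip x = 1.0)
    # flatten the d-dimensional cube index into a single cluster label
    return np.ravel_multi_index(tuple(idx.T), dims=(m,) * d)

# e.g., n = 200_000 and d = 2 give m = 21 cubes per axis, so ~441 ≈ sqrt(n) clusters
```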
Recall that the expected effect in level c is defined as µ_c = E_D[v − v̄ | c]. We can assume, w.l.o.g., that µ_0 < µ_1 < · · · < µ_{ℓ−1}. Define ν_j as the expected average effect for points in hypercube j and ATT_j as the average ITE for points in cluster j. Since every cluster is large, every cluster's ATT_j will be close to its expected average effect ν_j. More formally,

Lemma 2. P[max_j |ATT_j − ν_j| ≤ 2√(log n/(γδ√n))] ≥ 1 − n^{−3/2} − √n·exp(−δ√n/8).

Proof. Conditioning on min_j n_j ≥ (1/2)δ√n and using A5, we have

    P[|ATT_j − ν_j| > 2√(log n/(γδ√n)) | min_j n_j ≥ (1/2)δ√n] ≤ exp(−2 log n) = 1/n².

By a union bound, P[max_j |ATT_j − ν_j| > 2√(log n/(γδ√n)) | min_j n_j ≥ (1/2)δ√n] ≤ N_ε/n². For any events A, B, by total probability, P[A] ≤ P[A | B] + P[B^c]. Therefore,

    P[max_j |ATT_j − ν_j| > 2√(log n/(γδ√n))] ≤ N_ε/n² + P[min_j n_j < (1/2)δ√n].

To conclude the proof, use N_ε ≤ √n and Lemma 1.

A hypercube in the ε-net is homogeneous if it only contains points of one level (the hypercube does not intersect the boundary between levels). Let N_c be the number of homogeneous hypercubes for level c and N_impure be the number of hypercubes that are not homogeneous, i.e., impure.

Lemma 3. N_impure ≤ αε^ρ N_ε and N_c ≥ N_ε(β/∆ − αε^ρ).

Proof. A6 directly implies N_impure ≤ αε^ρ N_ε. Only the pure level-c or impure hypercubes can contain points in level c. Using A3 and ε^d = 1/N_ε, we have

    β ≤ P[x ∈ X_c] ≤ (N_c + N_impure)∆ε^d ≤ (N_c + αε^ρ N_ε)∆/N_ε.

The result follows after rearranging the above inequality.

The main tools we need are Lemmas 2 and 3. Let us recap what we have. The cluster ATTs are close to the expected average effect in every hypercube. The number of impure hypercubes is an asymptotically negligible fraction of the hypercubes since ε ∈ O(1/n^{1/2d}). Each level has an asymptotically constant fraction of homogeneous hypercubes. This means that almost all cluster ATTs will be close to a level's expected effect, and every level will be well represented. Hence, if we optimally cluster the ATTs with fewer than ℓ clusters, we won't be able to get a clustering error close to zero. With at least ℓ clusters, we will be able to get a clustering error approaching zero. This is the content of the next lemma, which justifies step 3 in the algorithm.

An optimal k-clustering of the cluster ATTs produces k centers θ_1, . . . , θ_k and assigns each cluster ATT_j to a center θ(ATT_j) so that the average clustering error err(k) = Σ_j (ATT_j − θ(ATT_j))²/N_ε is minimized. Given k, one can find an optimal k-clustering in O(N_ε²k) time using O(N_ε) space.

Lemma 4. With probability at least 1 − n^{−3/2} − √n·exp(−δ√n/8), optimal clustering of the ATTs with ℓ − 1 and ℓ clusters produces clustering errors which satisfy

    err(ℓ − 1) ≥ (β/∆ − αε^ρ)(κ/2 − 2√(log n/(γδ√n)))²   for (log n)/√n < κ²γδ/16,
    err(ℓ) ≤ (1/4)αε^ρ(µ_{ℓ−1} − µ_0)² + 4 log n(1 + αε^ρ)/(γδ√n).

Proof. With the stated probability, by Lemma 2, all ATTs are within 2√(log n/(γδ√n)) of the expected effect for their respective hypercube. This, together with Lemma 3, is enough to prove the bounds. First, the upper bound on err(ℓ). Choose cluster centers µ_0, . . . , µ_{ℓ−1}, the expected effect for each level. This may not be optimal, so it gives an upper bound on the clustering error. Each homogeneous hypercube has an expected effect which is one of these levels, and its ATT is within 2√(log n/(γδ√n)) of the corresponding µ. Assign each ATT for a homogeneous hypercube to its corresponding µ. The homogeneous hypercubes have total clustering error at most 4 log n(N_ε − N_impure)/(γδ√n).
For an impure hypercube, the expected average effect is a convex combination of µ_0, . . . , µ_{ℓ−1}. Assign these ATTs to either µ_0 or µ_{ℓ−1}, with an error at most (2√(log n/(γδ√n)) + (1/2)(µ_{ℓ−1} − µ_0))². Thus,

    N_ε·err(ℓ) ≤ 4 log n(N_ε − N_impure)/(γδ√n) + N_impure(2√(log n/(γδ√n)) + (1/2)(µ_{ℓ−1} − µ_0))²
               ≤ 4 log n(N_ε + N_impure)/(γδ√n) + N_impure(µ_{ℓ−1} − µ_0)²/2.

The upper bound follows after dividing by N_ε and using N_impure ≤ αε^ρ N_ε.

Now, the lower bound on err(ℓ − 1). Consider any (ℓ − 1)-clustering of the ATTs with centers θ_0, . . . , θ_{ℓ−2}. At least N_c ≥ N_ε(β/∆ − αε^ρ) of the ATTs are within 2√(log n/(γδ√n)) of µ_c. We also know that µ_{c+1} − µ_c ≥ κ. Consider the ℓ disjoint intervals [µ_c − κ/2, µ_c + κ/2]. By the pigeonhole principle, at least one of these intervals [µ_{c*} − κ/2, µ_{c*} + κ/2] does not contain a center. Therefore all the ATTs associated with µ_{c*} will incur an error at least κ/2 − 2√(log n/(γδ√n)) when κ/2 > 2√(log n/(γδ√n)). The total error is

    N_ε·err(ℓ − 1) ≥ N_{c*}(κ/2 − 2√(log n/(γδ√n)))².

Using N_{c*} ≥ N_ε(β/∆ − αε^ρ) and dividing by N_ε concludes the proof.

Lemma 4 is crucial to estimating the number of levels. The error is βκ²/(4∆)·(1 + o(1)) for fewer than ℓ clusters and (1/4)αε^ρ(µ_{ℓ−1} − µ_0)²(1 + o(1)) for ℓ or more clusters. Any function τ(n) that asymptotically separates these two errors can serve as an error threshold. The function should be agnostic to the parameters α, β, κ, ∆, ρ, . . .. In practice, ρ = 1 and since ε ∼ 1/n^{1/2d}, we have chosen τ(n) = log n/n^{ρ/2d}. Since err(ℓ − 1) is asymptotically constant, ℓ − 1 clusters can't achieve error τ(n) (asymptotically). Since err(ℓ) ∈ O(ε^ρ), ℓ clusters can achieve error τ(n) (asymptotically). Hence, choosing ℓ̂ as the minimum number of clusters that achieves error τ(n) will asymptotically output the correct number of clusters ℓ, with high probability, proving part (1) of Theorem 1.

We now prove parts (2) and (3) of Theorem 1, which follow from the accuracy of steps 4 and 5 in the algorithm. We know the algorithm asymptotically selects the correct number of levels with high probability. We show that each level is populated by mostly the homogeneous clusters of that level.

Lemma 5. With probability at least 1 − n^{−3/2} − √n·exp(−δ√n/8), asymptotically in n, all the N_c ATTs from the homogeneous hypercubes of level c are assigned to the same cluster in the optimal clustering, and no ATT from a different level's homogeneous hypercubes is assigned to this cluster.

Proof. Similar to the proof of Lemma 4, consider the ℓ disjoint intervals [µ_c − κ/4, µ_c + κ/4]. One center θ_c must be placed in this interval, otherwise the clustering error is asymptotically constant, which is not optimal. All the ATTs for level c are (as n gets large) more than κ/2 away from any other center, and at most κ/2 away from θ_c, which means all these ATTs get assigned to θ_c.

Similar to Lemma 1, we can get a high-probability upper bound of order √n on the maximum number of points in a cluster. Asymptotically, the number of points in the impure clusters is n_impure ∈ O(ε^ρ√n·N_ε). Suppose these impure points have expected average effect µ (a convex combination of the µ_c's). The number of points in level-c homogeneous clusters is n_c ∈ Ω(√n·N_ε). Even if all impure points are added to level c, the expected average effect for the points in level c is

    E[ITE | assigned to level c] = (n_impure·µ + n_c·µ_c)/(n_impure + n_c) = µ_c + O(ε^ρ).    (3)

Part (2) of Theorem 1 follows from the next lemma after setting ε ∼ 1/n^{1/2d} and ρ = 1.
Lemma 6. Estimate µ̂_c as the average ITE for all points assigned to level c (the c-th order statistic of the optimal centers θ_0, . . . , θ_{ℓ̂−1}). Then µ̂_c = µ_c + O(ε^ρ + √(log n/n)) with probability 1 − o(1).

Proof. Apply a Chernoff bound. We are taking an average over a number of points proportional to n, with expectation given in (3). This average will approximate the expectation to within √(log n/n) with probability 1 − o(1). The details are very similar to the proof of Lemma 2, so we omit them.

Part (3) of Theorem 1 now follows because all but the O(ε^ρ) fraction of points in the impure clusters are assigned a correct expected effect.

An additional fine-tuning leads to as much as a 2× improvement in experiments. For each point, consider the ε-hypercube centered on that point. By a Chernoff bound, each of these n hypercubes has Θ(√n) points, as in Lemma 1. All but a fraction O(ε^ρ) of these are pure. Assign each point to the center θ_c that best matches its hypercube-"smoothed" ITE, giving new subpopulations X_c and corresponding subpopulation-effects µ̂_c. This EM-style update can be iterated. Our simulations show the results for one E-M update.

4 DEMONSTRATION ON SYNTHETIC DATA

We use a 2-dimensional synthetic experiment with three levels to demonstrate our pre-cluster and merge algorithm (PCM). Alternatives to pre-clustering include state-of-the-art methods that directly predict the effect, such as meta-learners, and the Bayes optimal classifier based on ITEs. All methods used a base gradient boosting forest with 400 trees to estimate counterfactuals. The subpopulations in our experiment are shown in Figure 1, where black is effect-level 0, gray is level 1 and white is level 2. We present detailed results with n = 200K. Extensive results can be found in the appendix.

Let us briefly describe the two existing benchmarks we will compare against.

X-Learner (12) is a meta-learner that estimates heterogeneous treatment effects directly from ITEs. For the outcome and effect models of X-Learner we use a base gradient boosting learner with 400 estimators (6) implemented in scikit-learn (16). For the propensity model we use logistic regression.

Bayes Optimal uses the ITEs to reconstruct the subpopulations, given the number of levels and the ground-truth outcome distribution y(t, c) from Figure 1. The Bayes optimal classifier is: c_Bayes = 0 if ITE ≤ 0.5, c_Bayes = 1 if 0.5 < ITE ≤ 1.5, and c_Bayes = 2 if 1.5 < ITE. We also use these thresholds to reconstruct subpopulations from X-Learner's predicted ITEs. Note: neither the thresholds nor the number of levels are available in practice. We compare against the benchmark subpopulations reconstructed with these thresholds to further showcase the power of our algorithm's subpopulations, which outperform the competition without access to the forbidden information.

Let c_i be the level of subject i and ÎTE_i the estimated ITE. The error is |µ_{c_i} − ÎTE_i|, and we report the mean absolute error in the table below. Our algorithm predicts a level ĉ_i and uses its associated effect µ̂_{ĉ_i} as ÎTE_i. The other methods predict the ITE directly, for which we compute the mean absolute error. As mentioned above, we also show the error for the optimally reconstructed subpopulations, which is not possible in practice but is included for comparison (red emphasizes not available in practice).
    n    | PCM (this work) | X-Learner                      | Bayes Optimal
         |                 | Subpopulations | Predicted-ITE | Subpopulations | Raw-ITE
    20K  | 0.35 ± 0.39     | 3.04 ± 1.11    | 3.07 ± 2.41   | 4.57 ± 1.33    | 4.59 ± 3.49
    200K | 0.109 ± 0.22    | 1.44 ± 0.83    | 1.50 ± 1.38   | 4.22 ± 1.28    | 4.24 ± 3.22
    2M   | 0.036 ± 0.13    | 0.34 ± 0.47    | 0.46 ± 0.56   | 4.01 ± 1.25    | 4.03 ± 3.05

Our algorithm is about 10× better than the existing benchmarks even though we do not use the forbidden information (number of levels and optimal thresholds). It is also clear that X-Learner is significantly better than Bayes optimal with just the raw ITEs. The next table shows subpopulation effects; again, red indicates the use of forbidden information on the number of levels and optimal thresholds. The ground-truth effects are µ_0 = 0, µ_1 = 1, µ_2 = 2.

    n    | PCM (this work)       | X-Learner             | Bayes Optimal
         | µ̂0      µ̂1     µ̂2    | µ̂0      µ̂1     µ̂2    | µ̂0      µ̂1     µ̂2
    20K  | -0.21    0.91   2.07  | -2.5     0.99   4.44  | -3.94    1.00   5.99
    200K |  0.06    0.963  1.95  | -1.16    1.01   2.87  | -3.62    1.00   5.61
    2M   |  0.04    0.996  1.993 | -0.26    0.99   2.07  | -3.41    1.00   5.41

Note that µ̂_1 for X-Learner and Bayes optimal are accurate, an artefact of knowing the optimal thresholds (not realizable in practice). A detailed comparison of our algorithm (PCM) with the X-Learner and Bayes optimal subpopulations is shown in Figure 2. PCM clearly extracts the correct subpopulations. X-Learner and Bayes optimal, even given the number of levels and optimal thresholds, do not come visually close to PCM. Note, X-Learner does display some structure, but Bayes optimal on just the ITEs is a disaster. This is further illustrated in the ITE histograms in the second row. PCM clearly shows three levels, whereas the X-Learner ITEs and the raw ITEs suggest just one high-variance level. The third row shows the confusion matrices for subpopulation assignment. The red indicates use of information forbidden in practice; however, we include it for comparison. The confusion matrix for PCM without forbidden information clearly dominates the other methods which use forbidden information. The high noise in the outcomes undermines the other methods, while PCM is robust. In high-noise settings, direct use of the ITEs without some form of pre-clustering fails.

Summary of experiments with synthetic data. Our algorithm accurately extracts subpopulations at different effect-levels. Analysis of individual treatment effects fails when there is noise. Our experiments show that practice follows the theory (more detailed experiments, including how cluster homogeneity converges to 1, are shown in the appendix). We note that there is a curse of dimensionality, namely the convergence is at a rate O(n^{−1/2d}).

5 CONCLUSION

Our work expands the realm of causal analysis to non-targeted trials where the treated population can consist of large subpopulations with different effects. Our algorithm uses a plug-and-play pre-cluster and merge strategy that provably untangles the different effects. Experiments on synthetic data show a 10× or more improvement over existing HTE benchmarks. In our analysis, we did not attempt to optimize the rate of convergence. Optimizing this rate could lead to improved algorithms. Our work allows causal effects analysis to be used in settings such as health interventions, where wide deployment over a mostly healthy population would mask the effect on the sick population. Our methods can seamlessly untangle the effects without knowledge of what sick and healthy mean. This line of algorithms can also help in identifying inequities between the subpopulations.
One significant contribution is to reduce the untangling of subpopulation effects to a 1-dim clustering problem which we solve efficiently. This approach may be of independent interest beyond causal-effect analysis. The effect is just a function that takes on ℓ levels. Our approach can be used to learn any function that takes on a finite number of levels. It could also be used to learn a piecewise approximation to an arbitrary continuous function on a compact set.

A APPENDIX

We provide more detailed experimental results, specifically results for different n (20K, 200K and 2M) and a comparison of different clustering methods in the pre-clustering phase: box-only, PCM (box plus 1 step of E-M improvement) and K-means. To calculate the counterfactual for treated subjects, we train a gradient boosted forest on the control population.

B CONVERGENCE WITH n

B.1 RECONSTRUCTED SUBPOPULATIONS

We show subpopulation reconstructions for n ∈ {20K, 200K, 2M}. [Figure grid: reconstructed subpopulations for PCM (this work), X-Learner and Bayes Optimal at n = 20K, 200K, 2M.] Even with just 20K points in this very noisy setting, PCM is able to extract some meaningful subpopulation structure, while none of the other methods can.

B.2 ITE HISTOGRAMS

We show the ITE histograms for n ∈ {20K, 200K, 2M}. [Figure grid: ITE histograms for PCM (our work), X-Learner and the raw ITEs at n = 20K, 200K, 2M.]

C DIFFERENT PRE-CLUSTERING METHODS

We show the reconstructed subpopulations and effect errors for different pre-clustering methods. Box-clustering without any E-M step is also provably consistent. Our algorithm PCM uses box-clustering followed by an E-M step to improve the subpopulations using smoothed ITEs. We also show K-means pre-clustering, for which we did not prove any theoretical guarantees.

Reconstruction. [Figure grid: reconstructed subpopulations for PCM (this work), BOX and KMEANS at n = 20K, 200K, 2M.]

Histograms. [Figure grid: ITE histograms for PCM (our work), BOX and KMEANS at n = 20K, 200K, 2M.]

Error Table.

    n    | PCM (this work) | BOX           | KMEANS
    20K  | 0.35 ± 0.39     | 0.50 ± 0.52   | 0.54 ± 0.50
    200K | 0.109 ± 0.22    | 0.17 ± 0.35   | 0.20 ± 0.37
    2M   | 0.036 ± 0.13    | 0.078 ± 0.214 | 0.065 ± 0.20

D CLUSTER HOMOGENEITY

To further show how practice reflects the theory, we plot average cluster homogeneity versus n. The cluster homogeneity is the fraction of points in a cluster that are from its majority level. Our entire methodology relies on the pre-clustering step producing a vast majority of homogeneous clusters. The rapid convergence to homogeneous clusters enables us to identify the correct subpopulations and the corresponding effects via pre-cluster and merge. [Figure: mean cluster homogeneity of box-clustering versus the number of points (10² to 10⁶), rising from about 0.6 to 1.0.]
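For reference, a minimal sketch of the homogeneity metric plotted above (illustrative code, not the authors'): given pre-cluster labels and the true levels, it computes, for each cluster, the fraction of points belonging to the cluster's majority level and averages over clusters.

```python
import numpy as np

def mean_cluster_homogeneity(cluster_labels, true_levels):
    """Average, over clusters, of the fraction of points from the cluster's majority level."""
    homogeneities = []
    for j in np.unique(cluster_labels):
        levels_in_cluster = true_levels[cluster_labels == j]
        counts = np.bincount(levels_in_cluster)
        homogeneities.append(counts.max() / counts.sum())
    return float(np.mean(homogeneities))
```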
1. What is the main contribution of the paper regarding statistical methods for non-targeted trials?
2. What are the strengths and weaknesses of the proposed algorithm, particularly in its practical significance and comparison to existing methods?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or questions regarding the paper's focus on non-targeted trials and its relevance to real-world clinical situations?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The purpose of this paper is to provide a statistical method to handle so-called non-targeted trials, in which one can't control the selection of treatment units. Therefore, “some (often most) treated subjects are healthy (p.2)”, and this greatly confounds the estimation of ATT. The paper provides an asymptotically consistent algorithm for accurately estimating the treatment effect, and demonstrates its value numerically on synthetic data.
Strengths And Weaknesses
A stronger justification of the practical significance of non-targeted trials could be greatly beneficial. In particular, although self-selection is indeed a well-known issue in, e.g., natural experiments on digital platforms, it is a bit difficult for me to imagine the self-selection problem occurring in clinical trials, where experimental units are recruited through a careful and rigorous process in which doctors are usually involved. Therefore, perhaps a concrete clinical situation should be carefully elaborated, or relevant literature should be cited in the paper to substantiate the importance of this somewhat non-canonical situation. It is also important to show that the non-targeted trial situation is sufficiently common, so that any new method developed for this situation is worth much attention. The algorithm in Section 2 looks like an algorithm for estimating heterogeneous treatment effects, but HTE is fundamentally different from the clinical sick/healthy subjects setting because usually it is pretty clear whether a patient is sick or healthy. The algorithm is a combination of cluster analysis and off-the-shelf estimation of individual ITEs. Both methods are well known, so the synthetic analysis could have taken up a smaller proportion of the paper, because it is unlikely that the two methods would go wrong, especially since the simulation setting is simplistic, with dimension 2. On the other hand, real data analysis is necessary in order to demonstrate the value of the algorithm in application.
Clarity, Quality, Novelty And Reproducibility
The writing could have been further polished, as many places use non-standard language, and others are somewhat confusing. It could be better to introduce the algorithm first before going through its theoretical properties. The proof can be deferred to the appendix.
ICLR
Title Untangling Effect and Side Effect: Consistent Causal Inference in Non-Targeted Trials

Abstract A treatment is usually appropriate for some group (the “sick” group) on whom it has an effect, but it can also have a side-effect when given to subjects from another group (the “healthy” group). In a non-targeted trial both sick and healthy subjects may be treated, producing heterogeneous effects within the treated group. Inferring the correct treatment effect on the sick population is then difficult, because the effect and side-effect are tangled. We propose an efficient nonparametric approach to untangling the effect and side-effect, called PCM (pre-cluster and merge). We prove its asymptotic consistency in a general setting and show, on synthetic data, more than a 10× improvement in accuracy over existing state-of-the-art.

1 INTRODUCTION

A standard approach to causal effect estimation is the targeted randomized controlled trial (RCT), see (8; 13; 15; 17; 23). To test a treatment's effect on a sick population, subjects are recruited and admitted into the trial based on eligibility criteria designed to identify sick subjects. The trial subjects are then randomly split into a treated group that receives the treatment and a control group that receives the best alternative treatment (or a placebo). “Targeted” means only sick individuals are admitted into the trial via the eligibility criteria, with the implicit assumption that only a single treatment-effect is to be estimated. This ignores the possibility of treated subgroups among the sick population with heterogeneous effects. Further, one often does not have the luxury of a targeted RCT. For example, eligibility criteria for admittance to the trial may not unambiguously identify sick subjects, or one may not be able to control who gets into the trial. When the treatment is not exclusively applied to sick subjects, we say the trial is non-targeted and new methods are needed to extract the treatment effect on the sick (25). Non-targeted trials are the norm whenever subjects self-select into an intervention, which is often the case across domains stretching from healthcare to advertising. We propose a nonparametric approach to causal inference in non-targeted trials, based on a pre-cluster and merge strategy.

Assume a population is broken into ℓ groups with different expected treatment effects in each group. Identify each group with the level of its treatment effect, so there are effect levels c = 0, 1, . . . , ℓ − 1. For example, a population's subjects can be healthy, c = 0, or sick, c = 1. We use the Rubin–Neyman potential outcome framework (19).
A subject is a tuple s = (x, c, t, y) sampled from a distribution D, where x ∈ [0, 1]^d is a feature vector such as [age, weight], c indicates the subject's level, t indicates the subject's treatment cohort, and y is the observed outcome. The observed outcome is one of two potential outcomes, v if treated or v̄ if not treated. We consider strongly ignorable trials: given x, the propensity to treat is strictly between 0 and 1 and the potential outcomes {v, v̄} depend only on x, independent of t. In a strongly ignorable trial, one can use the features to identify counterfactual controls for estimating effect.

The level c is central to the scope of our work. Mathematically, c is a hidden effect modifier which determines the distribution of the potential outcomes (c is an unknown and possibly complex function of x). The level c dichotomizes the feature space into subpopulations with different effects. One tries to design the eligibility criteria for the trial to ensure that the propensity to treat is non-zero only for subjects in one level. What to do when the eligibility criteria allow more than one level into the trial is exactly the problem we address. Though our work applies to a general number of levels, all the main ideas can be illustrated with just two levels, c ∈ {0, 1}. For the sake of concreteness, we denote these two levels healthy and sick.

A trial samples n subjects, s_1, . . . , s_n. If subject i is treated, t_i = 1 and the observed outcome is y_i = v_i; otherwise t_i = 0 and the observed outcome is y_i = v̄_i (consistency). The treated group is T = {i | t_i = 1}, the control group is C = {i | t_i = 0}, and the sick group is S = {i | c_i = 1}. Our task is to determine if the treatment works on the sick, and if there is any side-effect on the healthy. We wish to estimate the effect and side-effect, defined as

    EFF = E_D[v − v̄ | c = 1],   SIDE-EFF = E_D[v − v̄ | c = 0].    (1)

Most prior work estimates EFF using the average treatment effect for the treated, the ATT (1),

    ATT = average_{i∈T}(v_i) − average_{i∈T}(v̄_i),    (2)

which assumes all treated subjects are sick. There are several complications with this approach.

(i) Suppose a subject is treated with probability p(x, c), the propensity to treat. For a non-uniform propensity to treat, the treated group has a selection bias, and ATT is a biased estimate of EFF. Ways to address this bias include inverse propensity weighting (18), matched controls (1), and learning the outcome function y(x, t); see for example (2; 3; 10; 12; 22; 23). Alternatively, one can simply ignore this bias and accept that ATT is estimating E[v − v̄ | t = 1].

(ii) The second term on the RHS in (2) can't be computed because we don't know the counterfactual v̄ for treated subjects. Much of causal inference deals with accurate unbiased estimation of average_{i∈T}(v̄_i) (4; 9). Our goal is not to improve counterfactual estimation. Hence, in our experiments, we use off-the-shelf counterfactual estimators.

(iii) (Focus of our work) The trial is non-targeted and some (often most) treated subjects are healthy.

To highlight the challenge in (iii) above, consider a simple case with uniform propensity to treat, p(x, c) = p. Conditioning on at least one treated subject,

    E[ATT] = P[sick] × EFF + P[healthy] × SIDE-EFF.

The ATT is a mix of effect and side-effect and is therefore biased when the treatment effect is heterogeneous across levels. In many settings, for example healthcare, P[sick] ≪ P[healthy] and the bias is extreme, rendering ATT useless. Increasing the number of subjects won't resolve this bias.
State-of-the-art causal inference packages provide methods to compute ATT, specifically aimed at accurate estimates of the counterfactual average_{i∈T}(v̄_i) (5; 21). These packages suffer from the mixing bias above. We propose a fix which can be used as an add-on to these packages.

Our Contribution. Our main result is an asymptotically consistent, distribution-independent algorithm to extract the correct effect levels and associated subpopulations in non-targeted trials, when the number of effect-levels is unknown. Formally: assume a non-targeted trial has a treated group with n subjects sampled from an unknown distribution D. There is an algorithm which identifies ℓ̂ effect-levels with estimated expected effect µ̂_c in level c, and assigns each subject s_i to a level ĉ_i which, under mild technical conditions, satisfies:

Theorem 1. All of the following hold with probability 1 − o(1):
(1) ℓ̂ = ℓ, i.e., the correct number of effect levels ℓ is identified.
(2) µ̂_c = E[v − v̄ | c] + o(1), i.e., the effect at each level is estimated accurately.
(3) The fraction of subjects assigned the correct effect level is 1 − o(1). The effect level ĉ_i is correct if µ_{ĉ_i} matches, to within o(1), the expected treatment effect for the subject.

For the formal assumptions, see Section 3. Parts (1) and (2) say the algorithm extracts the correct number of levels and their expected effects. Part (3) says the correct subpopulations for each level are extracted. Knowing the correct subpopulations is useful for post-processing, for example to understand the effects in terms of the features.

Our algorithm satisfying Theorem 1 is given in Section 2. The algorithm uses an unsupervised pre-cluster and merge strategy which reduces the task of estimating the effect-levels to a 1-dimensional optimal clustering problem that provably extracts the correct levels asymptotically as n → ∞. Our algorithm assumes an unbiased estimator of counterfactuals, for example some established method (5; 21). In practice, this means one can control for confounders. If unbiased counterfactual estimation is not possible, then any form of causal effect analysis is doomed. Our primary goal is untangling the heterogeneous effect levels, hence we use an off-the-shelf gradient boosting algorithm to get counterfactuals in our experiments (5). We demonstrate that our algorithm's performance on synthetic data matches the theory.

Subpopulation effect-analysis is a special case of heterogeneous treatment effects (HTE) (12; 20; 23). Hence, we also compare with X-Learner, a state-of-the-art algorithm for HTE (12), and with Bayes optimal prediction of effect-level. In comparison to X-Learner, our algorithm extracts visually better subpopulations and has an accuracy that is more than 10× better for estimating per-subject expected effects. Note, HTE algorithms do not extract subpopulations with effect-levels. They predict the effect given the features x. One can, however, try to infer subpopulations from predicted effects. Our algorithm also significantly outperforms Bayes optimal prediction based on individual effects, which suggests that some form of pre-cluster and merge strategy is necessary. This need for some form of clustering has been independently observed in (11, chapter 4), which studies a variety of clustering approaches in a non-distribution-independent setting with a known number of levels.
2 ALGORITHM: PRE-CLUSTER AND MERGE FOR SUBPOPULATION EFFECTS (PCM)

Our algorithm uses a nonparametric pre-cluster and merge strategy that achieves asymptotic consistency without any user-specified hyperparameters. The inputs are the n subjects s_1, . . . , s_n, where {s_i}_{i=1}^n = {(x_i, t_i, y_i, ȳ_i)}_{i=1}^n. Note, both the factual y_i and the counterfactual ȳ_i are inputs to the algorithm. To use the algorithm in practice, of course, the counterfactual must be estimated, and for our demonstrations we use an out-of-the-box gradient boosting regression algorithm from (7; 16) to estimate counterfactuals. Inaccuracy in counterfactual estimation will be accommodated in our analysis. The need to estimate counterfactuals does impact the algorithm in practice, due to an asymmetry in most trials: the treated population is much smaller than the controls. Hence, one might be able to estimate counterfactuals for the treated population but not for the controls, due to lack of coverage by the (small) treated population. In this case, our algorithm is only run on the treated population. It is convenient to define individual treatment effects ITE_i = (y_i − ȳ_i)(2t_i − 1), where y_i is the observed factual and ȳ_i the counterfactual (2t_i − 1 = ±1 ensures that the effect computed is for treatment versus no treatment). There are five main steps.

1: [PRE-CLUSTER] Cluster the x_i into K ∈ O(√n) clusters Z_1, . . . , Z_K.
2: Compute the ATT for each cluster Z_j, ATT_j = average_{x_i ∈ Z_j} ITE_i.
3: [MERGE] Group the {ATT_j}_{j=1}^K into ℓ̂ effect-levels, merging the clusters at each level to get subpopulations X_0, X_1, . . . , X_{ℓ̂−1}. (X_c is the union of all clusters at level c.)
4: Compute subpopulation effects µ̂_c = average_{x_i ∈ X_c} ITE_i, for c = 0, . . . , ℓ̂ − 1.
5: Assign subjects to effect levels, then update the populations X_c and expected effects µ̂_c.

We now elaborate on the intuition and details for each step in the algorithm.

Step 1. The clusters in the pre-clustering step play two roles. The first is to denoise individual effects using in-cluster averaging. The second is to group like with like, that is, clusters should be homogeneous, containing only subjects from one effect-level. This means each cluster-ATT will accurately estimate a single level's effect (we do not know which). We allow for any clustering algorithm. However, our theoretical analysis (for simplicity) uses a specific algorithm, box-clustering, based on an ε-net of the feature space. One could also use a standard clustering algorithm such as K-means. We compare box-clustering with K-means in the appendix.

Step 2. Denoising of the individual effects using in-cluster averaging. Assuming clusters are homogeneous, each cluster ATT will approximate some level's effect.

Step 3. Assuming the effects in different levels are well separated, this separation gets emphasized in the cluster-ATTs, provided clusters are homogeneous. Hence, we can identify effect-levels from the clusters with similar effects, and merge those clusters into subpopulations. Two tasks must be solved: finding the number of subpopulations ℓ̂, and then optimally grouping the clusters into ℓ̂ subpopulations. To find the subpopulations, we use ℓ̂-means with squared 1-dim clustering error. Our algorithm sets ℓ̂ to achieve an ℓ̂-means error at most log n/n^{1/2d}.
Concretely, ℓ̂ is chosen as the smallest number of groups whose optimal 1-dim clustering error falls below the threshold:

    optimal 1-dim clustering error(ℓ̂ − 1) > log n/n^{1/2d},
    optimal 1-dim clustering error(ℓ̂) ≤ log n/n^{1/2d}.

Simultaneously finding ℓ̂ and optimally partitioning the clusters into ℓ̂ groups can be solved using a standard dynamic programming algorithm in O(K²ℓ̂) time using O(K) space (24). Note, our algorithm will identify the number of effect levels provided such distinct subpopulations exist in the data. If it is known that only two subpopulations exist, sick and healthy, then ℓ̂ can be hard-coded to 2.

Step 4. Assuming each cluster is homogeneous and clusters with similar effects found in step 3 are from the same effect-level, the subpopulations formed by merging the clusters with similar effects will be nearly homogeneous. Hence, the subpopulation-ATTs will be accurate estimates of the effects at each level.

Step 5. Each subject x_i is implicitly assigned a level ĉ_i based on the subpopulation X_c to which it belongs. However, we can do better. By considering the √n nearest neighbors to x_i, we can obtain a smoothed effect for x_i. We use this smoothed effect to place x_i into the subpopulation whose effect matches best, hence placing x_i into a level. Unfortunately, running this algorithm for all n subjects is costly, needing sophisticated data structures to reduce the expected run time below O(n²). As an alternative, we center a (1/n^{1/2d})-hypercube on x_i and smooth x_i's effect using the average effect over points in this hypercube. This approach requires O(n√n) run time to obtain the effect-levels for all subjects, significantly better than O(n²) when n is large. Once the effect-levels for all subjects are obtained, one can update the subpopulations X_c and the corresponding effect-estimates µ̂_c.

The run time of the algorithm is O(nℓ + n√n) (expected and with high probability) and the output is nearly homogeneous subpopulations which can now be post-processed. An example of useful post-processing is a feature-based explanation of the subpopulation memberships. Note that we still do not know which subpopulation(s) are the sick ones, hence we cannot say which is the effect and which is the side-effect. A post-processing oracle would make this determination. For example, a doctor in a medical trial would identify the sick groups from subpopulation demographics.

Note. The optimal 1-dim clustering can be done directly on the smoothed ITEs from the (1/n^{1/2d})-hypercubes centered on each x_i, using the same thresholds as in step 3. One still gets asymptotic consistency; however, the price is an increased run time of O(n²ℓ). This is prohibitive for large n.

3 ASYMPTOTIC CONSISTENCY: PROOF OF THEOREM 1

To prove consistency, we must make our assumptions precise. In some cases the assumptions are stronger than needed, for simplicity of exposition.

A1. The feature space X is [0, 1]^d and the marginal feature-distribution is uniform, D(x) = 1. More generally, X is compact and D(x) is bounded, 0 < δ ≤ D(x) ≤ ∆ (can be relaxed).

A2. The level c is an unknown function of the feature x, c = h(x). Potential effects depend only on c. Conditioning on c, effects are well separated. Let µ_c = E_D[v − v̄ | c]. Then |µ_c − µ_{c′}| ≥ κ for c ≠ c′.

A3. Define the subpopulation for level c as X_c = h^{−1}(c). Each subpopulation has positive measure, P[x ∈ X_c] = β_c ≥ β > 0.

A4. For a treated subject x_i with outcome y_i, it is possible to produce an unbiased estimate of the counterfactual outcome ȳ_i. Effectively, we are assuming an unbiased estimate of the individual treatment effect ITE_i = y_i − ȳ_i is available.
Any causality analysis requires some estimate of counterfactuals and, in practice, one typically gets counterfactuals from the untreated subjects after controlling for confounders (5; 21).

A5. Sample averages concentrate. Essentially, the estimated ITEs are independent. This is true in practice because the subjects are independent and the counterfactual estimates use a predictor learned from the independent control population. For m i.i.d. subjects, let the average of the estimated ITEs be ν̂ and the expectation of this average be ν. Then,

    P[|ν̂ − ν| > ϵ] ≤ e^{−γmϵ²}.

The parameter γ > 0 is related to distributional properties of the estimated ITEs. Higher-variance ITE estimates result in γ being smaller. Concentration is a mild technical assumption requiring the estimated effects to be unbiased, well-behaved random variables to which a central limit theorem applies. Bounded effects or normally distributed effects suffice for concentration.

A6. The boundary between the subpopulations has small measure. Essentially we require that two subjects that have very similar features will belong to the same level with high probability (the function c = h(x) is not a "random" function). Again, this is a mild technical assumption which is taken for granted in practice. Let us make the assumption more precise. Define an ε-net to be a subdivision of X into (1/ε)^d disjoint hypercubes of side ε. A hypercube of an ε-net is impure if it contains points from multiple subpopulations. Let N_impure be the number of impure hypercubes in an ε-net. Then ε^d N_impure ≤ αε^ρ, where ρ > 0 and α is a constant. Note, d − ρ is the boxing-dimension of the boundary. In most problems, ρ = 1.

A7. We use box-clustering for the first step in the algorithm. Given n, define ε(n) = 1/⌊n^{1/2d}⌋. All points in a hypercube of an ε(n)-net form a cluster. Note that the number of clusters is approximately √n. The expected number of points in a cluster is nε(n)^d ≈ √n.

We prove Theorem 1 via a sequence of lemmas. The feature space X = [0, 1]^d is partitioned into levels X_0, . . . , X_{ℓ−1}, where X_c = h^{−1}(c) is the set of points whose level is c. Define an ε-net that partitions X into N_ε = ε^{−d} hypercubes of equal volume ε^d, where ε is the side-length of the hypercube. Set ε = 1/⌊n^{1/2d}⌋. Then N_ε = √n(1 − O(d/n^{1/2d})) ∼ √n. Each hypercube in the ε-net defines a cluster for the pre-clustering stage. There are about √n clusters and, since D(x) is uniform, there are about √n points in each cluster. Index the clusters in the ε-net by j ∈ {1, . . . , N_ε} and define n_j as the number of points in cluster j. Formally, we have,

Lemma 1. Suppose D(x) ≥ δ > 0. Then P[min_j n_j ≥ (1/2)δ√n] > 1 − √n·exp(−δ√n/8).

Proof. Fix a hypercube in the ε-net. Its volume is ε^d ≥ (1/n^{1/2d})^d = 1/√n. A point lands in this hypercube with probability at least δ/√n. Let Y be the number of points in the hypercube. Then Y is a sum of n independent Bernoullis and E[Y] ≥ δ√n. By a Chernoff bound (14, page 70),

    P[Y < δ√n/2] ≤ P[Y < E[Y]/2] < exp(−E[Y]/8) ≤ exp(−δ√n/8).

By a union bound over the N_ε clusters,

    P[some cluster has fewer than δ√n/2 points] < N_ε exp(−δ√n/8) ≤ √n·exp(−δ√n/8).

The lemma follows by taking the complement event.

For uniform D(x), δ = 1 and every cluster has at least (1/2)√n points with high probability. We can now condition on this high-probability event that every cluster is large. This means that a cluster's ATT is an average of many ITEs, which by A5 concentrates at the expected effect for the hypercube.
Recall that the expected effect in level c is defined as µ_c = E_D[v − v̄ | c]. We can assume, w.l.o.g., that µ_0 < µ_1 < · · · < µ_{ℓ−1}. Define ν_j as the expected average effect for points in hypercube j and ATT_j as the average ITE for points in cluster j. Since every cluster is large, every cluster's ATT_j will be close to its expected average effect ν_j. More formally,

Lemma 2. P[max_j |ATT_j − ν_j| ≤ 2√(log n/(γδ√n))] ≥ 1 − n^{−3/2} − √n·exp(−δ√n/8).

Proof. Conditioning on min_j n_j ≥ (1/2)δ√n and using A5, we have

    P[|ATT_j − ν_j| > 2√(log n/(γδ√n)) | min_j n_j ≥ (1/2)δ√n] ≤ exp(−2 log n) = 1/n².

By a union bound, P[max_j |ATT_j − ν_j| > 2√(log n/(γδ√n)) | min_j n_j ≥ (1/2)δ√n] ≤ N_ε/n². For any events A, B, by total probability, P[A] ≤ P[A | B] + P[B^c]. Therefore,

    P[max_j |ATT_j − ν_j| > 2√(log n/(γδ√n))] ≤ N_ε/n² + P[min_j n_j < (1/2)δ√n].

To conclude the proof, use N_ε ≤ √n and Lemma 1.

A hypercube in the ε-net is homogeneous if it only contains points of one level (the hypercube does not intersect the boundary between levels). Let N_c be the number of homogeneous hypercubes for level c and N_impure be the number of hypercubes that are not homogeneous, i.e., impure.

Lemma 3. N_impure ≤ αε^ρ N_ε and N_c ≥ N_ε(β/∆ − αε^ρ).

Proof. A6 directly implies N_impure ≤ αε^ρ N_ε. Only the pure level-c or impure hypercubes can contain points in level c. Using A3 and ε^d = 1/N_ε, we have

    β ≤ P[x ∈ X_c] ≤ (N_c + N_impure)∆ε^d ≤ (N_c + αε^ρ N_ε)∆/N_ε.

The result follows after rearranging the above inequality.

The main tools we need are Lemmas 2 and 3. Let us recap what we have. The cluster ATTs are close to the expected average effect in every hypercube. The number of impure hypercubes is an asymptotically negligible fraction of the hypercubes since ε ∈ O(1/n^{1/2d}). Each level has an asymptotically constant fraction of homogeneous hypercubes. This means that almost all cluster ATTs will be close to a level's expected effect, and every level will be well represented. Hence, if we optimally cluster the ATTs with fewer than ℓ clusters, we won't be able to get a clustering error close to zero. With at least ℓ clusters, we will be able to get a clustering error approaching zero. This is the content of the next lemma, which justifies step 3 in the algorithm.

An optimal k-clustering of the cluster ATTs produces k centers θ_1, . . . , θ_k and assigns each cluster ATT_j to a center θ(ATT_j) so that the average clustering error err(k) = Σ_j (ATT_j − θ(ATT_j))²/N_ε is minimized. Given k, one can find an optimal k-clustering in O(N_ε²k) time using O(N_ε) space.

Lemma 4. With probability at least 1 − n^{−3/2} − √n·exp(−δ√n/8), optimal clustering of the ATTs with ℓ − 1 and ℓ clusters produces clustering errors which satisfy

    err(ℓ − 1) ≥ (β/∆ − αε^ρ)(κ/2 − 2√(log n/(γδ√n)))²   for (log n)/√n < κ²γδ/16,
    err(ℓ) ≤ (1/4)αε^ρ(µ_{ℓ−1} − µ_0)² + 4 log n(1 + αε^ρ)/(γδ√n).

Proof. With the stated probability, by Lemma 2, all ATTs are within 2√(log n/(γδ√n)) of the expected effect for their respective hypercube. This, together with Lemma 3, is enough to prove the bounds. First, the upper bound on err(ℓ). Choose cluster centers µ_0, . . . , µ_{ℓ−1}, the expected effect for each level. This may not be optimal, so it gives an upper bound on the clustering error. Each homogeneous hypercube has an expected effect which is one of these levels, and its ATT is within 2√(log n/(γδ√n)) of the corresponding µ. Assign each ATT for a homogeneous hypercube to its corresponding µ. The homogeneous hypercubes have total clustering error at most 4 log n(N_ε − N_impure)/(γδ√n).
For an impure hypercube, the expected average effect is a convex combination of µ_0, . . . , µ_{ℓ−1}. Assign these ATTs to either µ_0 or µ_{ℓ−1}, with an error at most (2√(log n/(γδ√n)) + (1/2)(µ_{ℓ−1} − µ_0))². Thus,

    N_ε·err(ℓ) ≤ 4 log n(N_ε − N_impure)/(γδ√n) + N_impure(2√(log n/(γδ√n)) + (1/2)(µ_{ℓ−1} − µ_0))²
               ≤ 4 log n(N_ε + N_impure)/(γδ√n) + N_impure(µ_{ℓ−1} − µ_0)²/2.

The upper bound follows after dividing by N_ε and using N_impure ≤ αε^ρ N_ε.

Now, the lower bound on err(ℓ − 1). Consider any (ℓ − 1)-clustering of the ATTs with centers θ_0, . . . , θ_{ℓ−2}. At least N_c ≥ N_ε(β/∆ − αε^ρ) of the ATTs are within 2√(log n/(γδ√n)) of µ_c. We also know that µ_{c+1} − µ_c ≥ κ. Consider the ℓ disjoint intervals [µ_c − κ/2, µ_c + κ/2]. By the pigeonhole principle, at least one of these intervals [µ_{c*} − κ/2, µ_{c*} + κ/2] does not contain a center. Therefore all the ATTs associated with µ_{c*} will incur an error at least κ/2 − 2√(log n/(γδ√n)) when κ/2 > 2√(log n/(γδ√n)). The total error is

    N_ε·err(ℓ − 1) ≥ N_{c*}(κ/2 − 2√(log n/(γδ√n)))².

Using N_{c*} ≥ N_ε(β/∆ − αε^ρ) and dividing by N_ε concludes the proof.

Lemma 4 is crucial to estimating the number of levels. The error is βκ²/(4∆)·(1 + o(1)) for fewer than ℓ clusters and (1/4)αε^ρ(µ_{ℓ−1} − µ_0)²(1 + o(1)) for ℓ or more clusters. Any function τ(n) that asymptotically separates these two errors can serve as an error threshold. The function should be agnostic to the parameters α, β, κ, ∆, ρ, . . .. In practice, ρ = 1 and since ε ∼ 1/n^{1/2d}, we have chosen τ(n) = log n/n^{ρ/2d}. Since err(ℓ − 1) is asymptotically constant, ℓ − 1 clusters can't achieve error τ(n) (asymptotically). Since err(ℓ) ∈ O(ε^ρ), ℓ clusters can achieve error τ(n) (asymptotically). Hence, choosing ℓ̂ as the minimum number of clusters that achieves error τ(n) will asymptotically output the correct number of clusters ℓ, with high probability, proving part (1) of Theorem 1.

We now prove parts (2) and (3) of Theorem 1, which follow from the accuracy of steps 4 and 5 in the algorithm. We know the algorithm asymptotically selects the correct number of levels with high probability. We show that each level is populated by mostly the homogeneous clusters of that level.

Lemma 5. With probability at least 1 − n^{−3/2} − √n·exp(−δ√n/8), asymptotically in n, all the N_c ATTs from the homogeneous hypercubes of level c are assigned to the same cluster in the optimal clustering, and no ATT from a different level's homogeneous hypercubes is assigned to this cluster.

Proof. Similar to the proof of Lemma 4, consider the ℓ disjoint intervals [µ_c − κ/4, µ_c + κ/4]. One center θ_c must be placed in this interval, otherwise the clustering error is asymptotically constant, which is not optimal. All the ATTs for level c are (as n gets large) more than κ/2 away from any other center, and at most κ/2 away from θ_c, which means all these ATTs get assigned to θ_c.

Similar to Lemma 1, we can get a high-probability upper bound of order √n on the maximum number of points in a cluster. Asymptotically, the number of points in the impure clusters is n_impure ∈ O(ε^ρ√n·N_ε). Suppose these impure points have expected average effect µ (a convex combination of the µ_c's). The number of points in level-c homogeneous clusters is n_c ∈ Ω(√n·N_ε). Even if all impure points are added to level c, the expected average effect for the points in level c is

    E[ITE | assigned to level c] = (n_impure·µ + n_c·µ_c)/(n_impure + n_c) = µ_c + O(ε^ρ).    (3)

Part (2) of Theorem 1 follows from the next lemma after setting ε ∼ 1/n^{1/2d} and ρ = 1.
Estimate µ̂c as the average ITE for all points assigned to level c (the cth order statistic of the optimal centers θ0, . . . , θℓ̂−1). Then µ̂c = µc +O(ε ρ + √ log n/n) with probability 1− o(1). Proof. Apply a Chernoff bound. We are taking an average of proportional to n points with expectation in (3). This average will approximate the expectation to within √ log n/n with probability 1− o(1). The details are very similar to the proof of Lemma 2, so we omit them. Part (3) of Theorem 1 now follows because all but the O(ερ) fraction of points in the impure clusters are assigned a correct expected effect. An additional fine-tuning leads to as much as 2× improvement in experiments. For each point, consider the ε-hypercube centered on that point. By a Chernoff bound, each of these n hypercubes has Θ( √ n) points, as in Lemma 1. All but a fraction O(ερ) of these are impure. Assign each point to the center θc that best matches its hypercube-“smoothed” ITE, giving new subpopulations Xc and corresponding subpopulation-effects µ̂c. This EM-style update can be iterated. Our simulations show the results for one E-M update. 4 DEMONSTRATION ON SYNTHETIC DATA We use a 2-dimensional synthetic experiment with three levels to demonstrate our pre-cluster and merge algorithm (PCM). Alternatives to pre-clustering include state-of-the-art methods that directly predict the effect such as meta-learners, and the Bayes optimal classifier based on ITEs. All methods used a base gradient boosting forest with 400 trees to estimate counterfactuals. The subpopulations in our experiment are shown in Figure 1, where black is effect-level 0, gray is level 1 and white is level 2. We present detailed results with n = 200K. Extensive results can be found in the appendix. Let us briefly describe the two existing benchmarks we will compare against. X-learner (12), is a meta-learner that estimates heterogeneous treatment effects directly from ITEs. For the outcome and effect models of X-Learner we use a base gradient boosting learner with 400 estimators (6) implemented in scikit-learn (16). For the propensity model we use logistic regression. Bayes Optimal uses the ITEs to reconstruct the subpopulations, given the number of levels and the ground-truth outcome distribution y(t, c) from Figure 1. The Bayes optimal classifier is: cBayes = 0 if ITE ≤ 0.5, cBayes = 1 if 0.5 < ITE ≤ 1.5, cBayes = 2 if 1.5 < ITE. We also use these thresholds to reconstruct subpopulations for X-learner’s predicted ITEs. Note: Neither the thresholds nor the number of levels are available in practice. We compare the benchmark subpopulations reconstructed with these thresholds to further showcase the power of our algorithm’s subpopulations, which outperform the competition without access to the forbidden information. Let ci be the level of subject i and ÎTEi the estimated ITE. The error is |µci − ÎTEi|, and we report the mean absolute error in the table below. Our algorithm predicts a level ĉi and uses its associated effect µ̂ĉi as ÎTEi. The other methods predict ITE directly for which we compute mean absolute error. As mentioned above, we also show the error for the optimally reconstructed subpopulations, which is not possible in practice, but included for comparison (red emphasizes not available in practice). 
n PCM (this work) X-Learner Bayes Optimal Subpopulations Predicted-ITE Subpopulations Raw-ITE 20K 0.35±0.39 3.04 ± 1.11 3.07 ± 2.41 4.57 ± 1.33 4.59 ± 3.49 200k 0.109±0.22 1.44 ± 0.83 1.50 ± 1.38 4.22 ± 1.28 4.24 ± 3.22 2M 0.036±0.13 0.34 ± 0.47 0.46 ± 0.56 4.01 ± 1.25 4.03 ± 3.05 Our algorithm is about 10× better than existing benchmarks even though we do not use the forbidden information (number of levels and optimal thresholds). It is also clear that X-learner is significantly better than Bayes optimal with just the raw ITEs. The next table shows subpopulation effects, again red indicates the use of forbidden information on the number of levels and optimal thresholds. The ground truth effects are µ0 = 0, µ1 = 1, µ2 = 2. n PCM (this work) X-Learner Bayes Optimal µ̂0 µ̂1 µ̂2 µ̂0 µ̂1 µ̂2 µ̂0 µ̂1 µ̂2 20K -0.21 0.91 2.07 -2.5 0.99 4.44 -3.94 1.00 5.99 200K 0.06 0.963 1.95 -1.16 1.01 2.87 -3.62 1.00 5.61 2M 0.04 0.996 1.993 -0.26 0.99 2.07 -3.41 1.00 5.41 Note that µ̂1 for X-learner and Bayes optimal are accurate, an artefact of knowing the optimal thresholds (not realizable in practice). A detailed comparison of our algorithm (PCM) with X-Learner and Bayes optimal subpopulations is shown in Figure 2. PCM clearly extracts the correct subpopulations. X-Learner and Bayes optimal, even given the number of levels and optimal thresholds, does not come visually close to PCM. Note, X-learner does display some structure but Bayes optimal on just the ITEs is a disaster. This is further illustrated in the ITE-histograms in the second row. PCM clearly shows three levels, where as X-learner ITEs and the raw ITEs suggest just one high variance level. The 3rd row shows the confusion matrices for subpopulation assignment. The red indicates use of information forbidden in practice, however we include it for comparison. The confusion matrix for PCM without forbidden information clearly dominates the other methods which use forbidden information. The high noise in the outcomes undermines the other methods, while PCM is robust. In high noise settings, direct use of the ITEs without some form of pre-clustering fails. Summary of experiments with synthetic data. Our algorithm accurately extracts subpopulations at different effect-levels. Analysis of individual treatment effects fails when there is noise. Our experiments show that practice follows the theory (more detailed experiments, including how cluster homogeneity converges to 1, are shown in the appendix). We note that there is a curse of dimensionality, namely the convergence is at a rate O(n−1/2d). 5 CONCLUSION Our work amplifies the realm of causal analysis to non-targeted trials where the treated population can consist of large subpopulations with different effects. Our algorithm uses a plug-and-play precluster and merge strategy that provably untangles the different effects. Experiments on synthetic data show a 10× or more improvement over existing HTE-benchmarks. In our analysis, we did not attempt to optimize the rate of convergence. Optimizing this rate could lead to improved algorithms. Our work allows causal effects analysis to be used in settings such as health interventions, where wide deployment over a mostly healthy population would mask the effect on the sick population. Our methods can seemlessly untangle the effects without knowledge of what sick and healthy mean. This line of algorithms can also help in identifying inequities between the subpopulations. 
One significant contribution is to reduce the untangling of subpopulation effects to a 1-dim clustering problem which we solve efficently. This approach may be of independent interest beyond causaleffect analysis. The effect is just a function that takes on ℓ levels. Our approach can be used to learn any function that takes on a finite number of levels. It could also be used to learn a piecewise approximation to an arbitrary continuous function on a compact set. A APPENDIX We provide more detailed experimental results, specifically results for different n (20K, 200K and 2M) and a comparison of different clustering methods in the pre-clustering phase: box-only, PCM (box plus 1 step of E-M improvement) and K-means. To calculate the counterfactual for treated subjects, we train a gradient boosted forest on the control population. B CONVERGENCE WITH n B.1 RECONSTRUCTED SUBPOPULATIONS We show subpopulation reconstructions for n ∈ {20K, 200K, 2M}. PCM (this work) X-Learner Bayes Optimal 20k 200k 2M Even with just 20K points in this very noisy setting, PCM is able to extract some meaningful subpopulation structure, while none of the other methods can. B.2 ITE HISTOGRAMS We show the ITE histograms for n ∈ {20K, 200K, 2M}. PCM (our work) X-Learner ITE 20k 1 0 1 2 3 ITE-PCM 0 50 100 10 5 0 5 10 ITE-XLEARNER 0 50 100 150 200 10 0 10 ITE 0 50 100 150 200 200k 0 1 2 ITE-PCM 0 500 1000 5.0 2.5 0.0 2.5 5.0 ITE-XLEARNER 0 1000 2000 3000 10 0 10 ITE 0 500 1000 1500 2000 2M 0 1 2 ITE-PCM 0 5000 10000 15000 1 0 1 2 3 ITE-XLEARNER 0 20000 40000 60000 80000 10 0 10 ITE 0 5000 10000 15000 20000 C DIFFERENT PRE-CLUSTERING METHODS We show the reconstructed subpopulations and effect errors for different pre-clustering methods. Box-clustering without any E-M step is also provably consistent. Our algorithm PCM uses boxclustering followed by an E-M step to improve the subpopulations using smoothed ITEs. We also show K-means pre-clustering, for which we did not prove any theoretical guarantees. Reconstruction. PCM (this work) BOX KMEANS 20k 200k 2M Histograms. PCM (our work) BOX KMEANS 20k 1 0 1 2 3 ITE-PCM 0 50 100 0 2 ATE 0 5 10 0 2 ATE 0 5 10 15 200k 0 1 2 ITE-PCM 0 500 1000 0 1 2 ATE 0 5 10 15 20 0 1 2 ATE 0 5 10 15 20 2M 0 1 2 ITE-PCM 0 5000 10000 15000 0 1 2 ATE 0 20 40 60 0 1 2 ATE 0 20 40 60 Error Table. n PCM (this work) BOX KMEANS 20K 0.35±0.39 0.50 ± 0.52 0.54 ± 0.50 200k 0.109±0.22 0.17 ± 0.35 0.20 ± 0.37 2M 0.036±0.13 0.078 ± 0.214 0.065 ± 0.20 D CLUSTER HOMOGENEITY To further show how practice reflects the theory, we plot average cluster homogeneity versus n. The cluster homogeneity is the fraction of points in a cluster that are from its majority level. Our entire methodology relies on the pre-clustering step producing a vast majority of homogeneous clusters. The rapid convergence to homogeneous clusters enables us to identify the correct subpopulations and the corresponding effects via pre-cluster and merge. 102 103 104 105 106 Number of Points 0.6 0.7 0.8 0.9 1.0 M ea n Hm g. Box
1. What is the focus and contribution of the paper on disambiguating heterogeneous treatment effects? 2. What are the strengths of the proposed algorithm, particularly in its simplicity and robustness? 3. What are the weaknesses of the paper regarding its exposition and related work section? 4. Do you have any concerns about the terminology used in the paper, such as "side effects" and "ATT"? 5. Are there any questions regarding the experimental results and potential baselines for comparison? 6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper proposes an algorithm (pre-cluster and merge) to better disambiguate heterogeneous treatment effects in non-targeted clinical trials, where there is a hidden confounder for each patient (whether the patient is sick, healthy, or in some other group) with otherwise identical observable patient properties/covariates.
Strengths And Weaknesses
Strengths
*Simple method that works well.
*Proof of Theorem 1 is methodical and clear.
*Robust to different clustering algorithms.
Weaknesses
*The paper lacks refinement in exposition. For instance, I think a causal graph could have been illustrative, defining the groups (sick, healthy, etc.) as a hidden confounder. I think cleaning this up and making it clearer would be helpful.
*The paper lacks a coherent related work section; I would appreciate more related work, for instance on how this relates to estimating HTEs with a hidden confounder.
Minor Comments
*"Side effects" does not seem like the appropriate nomenclature.
*ATT instead of ATE, or CATE if conditional.
Questions
*Experiments demonstrating performance on datasets that are imbalanced with respect to the groups would be useful.
*Are there no baselines that are able to incorporate using the HTE to identify groups?
Clarity, Quality, Novelty And Reproducibility
The paper is clear, and the PCM algorithm seems relatively novel in this context (although this is difficult to discern in the context of the entire field, given the lack of a comprehensive related work section). No code to reproduce, but reproduction should be easy.
ICLR
Title Untangling Effect and Side Effect: Consistent Causal Inference in Non-Targeted Trials Abstract A treatment is usually appropriate for some group (the “sick” group) on whom it has an effect, but it can also have a side-effect when given to subjects from another group (the “healthy” group). In a non-targeted trial both sick and healthy subjects may be treated, producing heterogeneous effects within the treated group. Inferring the correct treatment effect on the sick population is then difficult, because the effect and side-effect are tangled. We propose an efficient nonparametric approach to untangling the effect and side-effect, called PCM (pre-cluster and merge). We prove its asymptotic consistency in a general setting and show, on synthetic data, more than a 10x improvement in accuracy over existing state-of-the-art. N/A A treatment is usually appropriate for some group (the “sick” group) on whom it has an effect, but it can also have a side-effect when given to subjects from another group (the “healthy” group). In a non-targeted trial both sick and healthy subjects may be treated, producing heterogeneous effects within the treated group. Inferring the correct treatment effect on the sick population is then difficult, because the effect and side-effect are tangled. We propose an efficient nonparametric approach to untangling the effect and side-effect, called PCM (pre-cluster and merge). We prove its asymptotic consistency in a general setting and show, on synthetic data, more than a 10x improvement in accuracy over existing state-of-the-art. 1 INTRODUCTION A standard approach to causal effect estimation is the targeted randomized controlled trial (RCT), see (8; 13; 15; 17; 23). To test a treatment’s effect on a sick population, subjects are recruited and admitted into the trial based on eligibility criteria designed to identify sick subjects. The trial subjects are then randomly split into a treated group that receives the treatment and a control group that receives the best alternative treatment (or a placebo). “Targeted” means only sick individuals are admitted into the trial via the eligibility criteria, with the implicit assumption that only a single treatment-effect is to be estimated. This ignores the possibility of treated subgroups among the sick population with heterogeneous effects. Further, one often does not have the luxury of a targeted RCT. For example, eligibility criteria for admittance to the trial may not unambiguously identify sick subjects, or one may not be able to control who gets into the trial. When the treatment is not exclusively applied on sick subjects, we say the trial is non-targeted and new methods are needed to extract the treatment effect on the sick, (25). Non-targeted trials are the norm whenever subjects self-select into an intervention, which is often the case across domains stretching from healthcare to advertising. We propose a nonparametric approach to causal inference in non-targeted trials, based on a pre-cluster and merge strategy. Assume a population is broken into ℓ groups with different expected treatment effects in each group. Identify each group with the level of its treatment effect, so there are effect levels c = 0, 1, . . . , ℓ−1. For example, a population’s subjects can be healthy, c = 0, or sick, c = 1. We use the RubinNeyman potential outcome framework, (19). 
A subject is a tuple s = (x, c, t, y) sampled from a distribution D, where x ∈ [0, 1]d is a feature-vector such as [age, weight], c indicates the subject’s level, t indicates the subjects treatment cohort, and y is the observed outcome. The observed outcome is one of two potential outcomes, v if treated or v̄ if not treated. We consider strongly ignorable trials: given x, the propensity to treat is strictly between 0 and 1 and the potential outcomes {v, v̄} depend only on x, independent of t. In a strongly ignorable trial, one can use the features to identify counterfactual controls for estimating effect. The level c is central to the scope of our work. Mathematically, c is a hidden effect modifier which determines the distribution of the potential outcomes (c is an unknown and possibly complex function of x). The level c dichotomizes the feature space into subpopulations with different effects. One tries to design the eligibility criteria for the trial to ensure that the propensity to treat is non-zero only for subjects in one level. What to do when the eligibility criteria allow more than one level into the trial is exactly the problem we address. Though our work applies to a general number of levels, all the main ideas can be illustrated with just two levels, c ∈ {0, 1}. For the sake of concreteness, we denote these two levels healthy and sick. A trial samples n subjects, s1, . . . , sn. If subject i is treated, ti = 1 and the observed outcome yi = vi, otherwise ti = 0, and the observed outcome is v̄i (consistency). The treated group is T = {i | ti = 1}, the control group is C = {i | ti = 0}, and the sick group is S = {i | ci = 1}. Our task is to determine if the treatment works on the sick, and if there is any side-effect on the healthy. We wish to estimate the effect and side-effect, defined as EFF = ED[v − v̄ | c = 1] (1) SIDE-EFF = ED[v − v̄ | c = 0]. Most prior work estimates EFF using the average treatment effect for the treated, the ATT (1), ATT = averagei∈T (vi)− averagei∈T (v̄i), (2) which assumes all treated subjects are sick. There are several complications with this approach. (i) Suppose a subject is treated with probability p(x, c), the propensity to treat. For a non-uniform propensity to treat, the treated group has a selection bias, and ATT is a biased estimate of EFF. Ways to address this bias include inverse propensity weighting, (18), matched controls, (1), and learning the outcome function y(x, t), see for example (2; 3; 10; 12; 22; 23). Alternatively, one can simply ignore this bias and accept that ATT is estimating E[v − v̄ | t = 1]. (ii) The second term on the RHS in (2) can’t be computed because we don’t know the counterfactual v̄ for treated subjects. Much of causal inference deals with accurate unbiased estimation of averagei∈T (v̄i), (4; 9). Our goal is not to improve counterfactual estimation. Hence, in our experiments, we use off-the-shelf counterfactual estimators. (iii) (Focus of our work) The trial is non-targeted and some (often most) treated subjects are healthy. To highlight the challenge in (iii) above, consider a simple case with uniform propensity to treat, p(x, c) = p. Conditioning on at least one treated subject, E[ATT] = P[sick]× EFF + P[healthy]× SIDE-EFF. The ATT is a mix of effect and side effect and is therefore biased when the treatment effect is heterogeneous across levels. In many settings, for example healthcare, P[sick] ≪ P[healthy] and the bias is extreme, rendering ATT useless. Increasing the number of subjects won’t resolve this bias. 
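To make the mixing bias concrete, the following is a minimal simulation sketch (not from the paper; the outcome model, group proportions, and effect sizes are illustrative assumptions). With only 10% of treated subjects sick, the naive ATT lands near the population mixture rather than near EFF.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
p_sick = 0.1                   # assumed group proportion (illustrative)
eff, side_eff = 2.0, 0.0       # assumed true effect and side-effect

sick = rng.random(n) < p_sick      # hidden level c: True = sick, False = healthy
treated = rng.random(n) < 0.5      # uniform propensity to treat

# Potential outcomes: treatment adds EFF for sick subjects, SIDE-EFF for healthy ones.
v_bar = rng.normal(0.0, 1.0, n)                   # untreated potential outcome
v = v_bar + np.where(sick, eff, side_eff)         # treated potential outcome

# Naive ATT with oracle counterfactuals: averages the effect over ALL treated subjects.
att = (v[treated] - v_bar[treated]).mean()
print(f"ATT ~ {att:.3f}  vs  EFF = {eff},  mixture = {p_sick * eff + (1 - p_sick) * side_eff}")
```

In this toy setting the printed ATT is close to 0.2, the mixture of effect and side-effect, far from the true effect of 2 on the sick group.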
State-of-the-art causal inference packages provide methods to compute ATT, specifically aimed at accurate estimates of the counterfactual averagei∈T (v̄i), (5; 21). These packages suffer from the mixing bias above. We propose a fix which can be used as an add-on to these packages. Our Contribution. Our main result is an asymptotically consistent distribution independent algorithm to extract the correct effect levels and associated subpopulations in non-targeted trials, when the number of effect-levels is unknown. Our main result is Theorem 1. Assume a non-targeted trial has a treated group with n subjects sampled from an unknown distribution D. There is an algorithm which identifies ℓ̂ effect-levels with estimated expected effect µ̂c in level c, and assigns each subject si to a level ĉi which, under mild technical conditions, satisfies: Theorem 1. All of the following hold with probability 1− o(1): (1) ℓ̂ = ℓ, i.e., the correct number of effect levels ℓ is identified. (2) µ̂c = E[v − v̄ | c] + o(1), i.e., the effect at each level is estimated accurately. (3) The fraction of subjects assigned the correct effect level is 1 − o(1). The effect level ĉi is correct if µĉi matches, to within o(1), the expected treatment effect for the subject. For the formal assumptions, see Section 3. Parts (1) and (2) say the algorithm extracts the correct number of levels and their expected effects. Part (3) says the correct subpopulations for each level are extracted. Knowing the correct subpopulations is useful for post processing, for example to understand the effects in terms of the features. Our algorithm satisfying Theorem 1 is given in Section 2. The algorithm uses an unsupervised pre-cluster and merge strategy which reduces the task of estimating the effect-levels to a 1-dimensional optimal clustering problem that provably extracts the correct levels asymptotically as n → ∞. Our algorithm assumes an unbiased estimator of counterfactuals, for example some established method (5; 21). In practice, this means one can control for confounders. If unbiased counterfactual estimation is not possible, then any form of causal effect analysis is doomed. Our primary goal is untangling the heterogeneous effect levels, hence we use an off-the-shelf gradient boosting algorithm to get counterfactuals in our experiments (5). We demonstrate that our algorithm’s performance on synthetic data matches the theory. Subpopulation effect-analysis is a special case of heterogeneous treatment effects (HTE), (12; 20; 23). Hence, we also compare with X-Learner, a state-of-the art algorithm for HTE (12) and Bayes optimal prediction of effect-level. In comparison to X-Learner, our algorithm extracts visually better subpopulations, and has an accuracy that is more than 10× better for estimating per-subject expected effects. Note, HTE algorithms do not extract subpopulations with effect-levels. They predict effect given the features x. One can, however, try to infer subpopulations from predicted effects. Our algorithm also significantly outperforms Bayes optimal based on individual effects, which suggests that some form of pre-cluster and merge strategy is necessary. This need for some form of clustering has been independently observed in (11, chapter 4) who studies a variety of clustering approaches in a non-distribution independent setting with a known number of levels. 
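As a reference point for the off-the-shelf counterfactual step mentioned above, here is a minimal sketch using a scikit-learn gradient boosting regressor with 400 trees, fit on the control population, matching the experimental description; the function name and the absence of any preprocessing are illustrative assumptions, not the paper's exact code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def estimate_ites(X, y, treated):
    """Estimate ITEs for treated subjects by predicting their counterfactual
    (untreated) outcome with a regressor trained on the control population.

    X: (n, d) covariates, y: (n,) observed outcomes, treated: (n,) boolean mask.
    """
    outcome_model = GradientBoostingRegressor(n_estimators=400)
    outcome_model.fit(X[~treated], y[~treated])   # learn y(x, t=0) from controls
    y_cf = outcome_model.predict(X[treated])      # counterfactual for treated subjects
    return y[treated] - y_cf                      # ITE_i = y_i - y_bar_i
```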
2 ALGORITHM: PRE-CLUSTER AND MERGE FOR SUBPOPULATION EFFECTS (PCM) Our algorithm uses a nonparametric pre-cluster and merge strategy that achieves asymptotic consistency without any user-specified hyperparameters. The inputs are the n subjects s1, . . . , sn, where {si}ni=1 = {(xi, ti, yi, ȳi)}ni=1. Note, both the factual yi and counterfactual ȳi are inputs to the algorithm. To use the algorithm in practice, of course, the counterfactual must be estimated, and for our demonstrations we use an out-of-the-box gradient boosting regression algorithm from (7; 16) to estimate counterfactuals. Inaccuracy in counterfactual estimation will be accommodated in our analysis. The need to estimate counterfactuals does impact the algorithm in practice, due to an asymmetry in most trials: the treated population is much smaller than the controls. Hence, one might be able to estimate counterfactuals for the treated population but not for the controls due to lack of coverage by the (small) treated population. In this case, our algorithm is only run on the treated population. It is convenient to define individual treatment effects ITEi = (yi − ȳi)(2ti − 1), where yi is the observed factual and ȳi the counterfactual (2ti − 1 = ±1 ensuring that the effect computed is for treatment versus no treatment). There are five main steps. 1: [PRE-CLUSTER] Cluster the xi into K ∈ O( √ n) clusters Z1, . . . , ZK . 2: Compute ATT for each cluster Zj , ATTj = averagexi∈Zj ITEi. 3: [MERGE] Group the {ATTj}Kj=1 into ℓ̂ effect-levels, merging the clusters at each level to get subpopulations X0, X1, . . . , Xℓ̂−1. (Xc is the union of all clusters at level c.) 4: Compute subpopulation effects µ̂c = averagexi∈Xc ITEi, for c = 0, . . . , ℓ̂− 1. 5: Assign subjects to effect levels, update the populations Xc and expected effects µ̂c. We now elaborate on the intuition and details for each step in the algorithm. Step 1. The clusters in the pre-clustering step play two roles. The first is to denoise individual effects using in-cluster averaging. The second is to group like with like, that is clusters should be homogeneous, containing only subjects from one effect-level. This means each cluster-ATT will accurately estimate a single level’s effect (we do not know which). We allow for any clustering algorithm. However, our theoretical analysis (for simplicity) uses a specific algorithm, boxclustering, based on an ε-net of the feature space. One could also use a standard clustering algorithm such as K-means. We compare box-clustering with K-means in the appendix. Step 2. Denoising of the individual effects using in-cluster averaging. Assuming clusters are homogeneous, each cluster ATT will approximate some level’s effect. Step 3. Assuming the effects in different levels are well separated, this separation gets emphasized in the cluster-ATTs, provided clusters are homogeneous. Hence, we can identify effect-levels from the clusters with similar effects, and merge those clusters into subpopulations. Two tasks must be solved. Finding the number of subpopulations ℓ̂ and then optimally grouping the clusters into ℓ̂ subpopulations. To find the subpopulations, we use ℓ̂-means with squared 1-dim clustering error. Our algorithm sets ℓ̂ to achieve an ℓ̂-means error at most log n/n1/2d. 
So, optimal 1-dim clustering error(ℓ̂− 1) > log n/n1/2d optimal 1-dim clustering error(ℓ̂) ≤ log n/n1/2d Simultaneously finding ℓ̂ and optimally partitioning the clusters into ℓ̂ groups can be solved using a standard dynamic programming algorithm in O(K2ℓ̂) time using O(K) space (24). Note, our algorithm will identify the number of effect levels provided such distinct subpopulations exist in the data. If it is known that only two subpopulations exist, sick and healthy, then ℓ̂ can be hard-coded to 2. Step 4. Assuming each cluster is homogeneous and clusters with similar effects found in step 3 are from the same effect-level, the subpopulations formed by merging the clusters with similar effects will be nearly homogeneous. Hence, the subpopulation-ATTs will be accurate estimates of the effects at each level. Step 5. Each subject xi is implicitly assigned a level ĉi based on the subpopulation Xc to which it belongs. However, we can do better. By considering the √ n nearest neighbors to xi, we can obtain a smoothed effect for xi. We use this smoothed effect to place xi into the subpopulation whose effect matches best, hence placing xi into a level. Unfortunately, running this algorithm for all n subjects is costly, needing sophisticated data structures to reduce the expected run time below O(n2). As an alternative, we center an (1/n1/2d)-hypercube on xi and smooth xi’s effect using the average effect over points in this hypercube. This approach requires O(n √ n) run time to obtain the effect-level for all subjects, significantly better than O(n2) when n is large. Once the effect-levels for all subjects are obtained, one can update the subpopulations Xc and the corresponding effect-estimates µ̂c. The run time of the algorithm is O(nℓ+ n √ n) (expected and with high probability) and the output is nearly homogeneous subpopulations which can now be post-processed. An example of useful post-processing is a feature-based explanation of the subpopulation-memberships. Note that we still do not know which subpopulation(s) are the sick ones, hence we cannot say which is the effect and which is the side effect. A post-processing oracle would make this determination. For example, a doctor in a medical trial would identify the sick groups from subpopulation-demographics. Note. The optimal 1-d clustering can be done directly on the smoothed ITEs from the (1/n1/2d)hypercubes centered on each xi, using the same thresholds in step 3. One still gets asymptotic consistency, however the price is an increased run time to O(n2ℓ). This is prohibitive for large n. 3 ASYMPTOTIC CONSISTENCY: PROOF OF THEOREM 1 To prove consistency, we must make our assumptions precise. In some cases the assumptions are stronger than needed, for simplicity of exposition. A1. The feature space X is [0, 1]d and the marginal feature-distribution is uniform, D(x) = 1. More generally, X is compact and D(x) is bounded, 0 < δ ≤ D(x) ≤ ∆ (can be relaxed). A2. The level c is an unknown function of the feature x, c = h(x). Potential effects depend only on c. Conditioning on c, effects are well separated. Let µc = ED[v − v̄|c]. Then, |µc − µc′ | ≥ κ for c ̸= c′ A3. Define the subpopulation for level c as Xc = h−1(c). Each subpopulation has positive measure, P[x ∈ Xc] = βc ≥ β > 0. A4. For a treated subject xi with outcome yi, it is possible to produce an unbiased estimate of the counterfactual outcome ȳi. Effectively, we are assuming an unbiased estimate of the individual treatment effect ITEi = yi − ȳi is available. 
Any causality analysis requires some estimate of counterfactuals and, in practice, one typically gets counterfactuals from the untreated subjects after controlling for confounders (5; 21). A5. Sample averages concentrate. Essentially, the estimated ITEs are independent. This is true in practice because the subjects are independent and the counterfactual estimates use a predictor learned from the independent control population. For m i.i.d. subjects, let the average of the estimated ITEs be ν̂ and the expectation of this average be ν. Then, P[|ν̂ − ν| > ϵ] ≤ e−γmϵ 2 . The parameter γ > 0 is related to distributional properties of the estimated ITEs. Higher variance ITE estimates result in γ being smaller. Concentration is a mild technical assumption requiring the estimated effects to be unbiased well behaved random variables, to which a central limit theorem applies. Bounded effects or normally distributed effects suffice for concentration. A6. The boundary between the subpopulations has small measure. Essentially we require that two subjects that have very similar features will belong to the same level with high probability (the function c = h(x) is not a “random” function). Again, this is a mild technical assumption which is taken for granted in practice. Let us make the assumption more precise. Define an ε-net to be a subdivision of X into (1/ε)d disjoint hypercubes of side ε. A hypercube of an ε-net is impure if it contains points from multiple subpopulations. Let Nimpure be the number of impure hypercubes in an ε-net. Then εdNimpure ≤ αερ, where ρ > 0 and α is a constant. Note, d− ρ is the boxing-dimension of the boundary. In most problems, ρ = 1. A7. We use box-clustering for the first step in the algorithm. Given n, define ε(n) = 1/ ⌊ n1/2d ⌋ . All points in a hypercube of an ε(n)-net form a cluster. Note that the number of clusters is approximately √ n. The expected number of points in a cluster is nε(n)d ≈ √ n. We prove Theorem 1 via a sequence of lemmas. The feature space X = [0, 1]d is partitioned into levels X0, . . . , Xℓ−1, where Xc = h−1(c) is the set of points whose level is c. Define an ε-net that partitions X into Nε = ε−d hypercubes of equal volume εd, where ε is the side-length of the hypercube. Set ε = 1/ ⌊ n1/2d ⌋ . Then, Nε = √ n(1 − O(d/n1/2d)) ∼ √ n. Each hypercube in the ε-net defines a cluster for the pre-clustering stage. There are about √ n clusters and, since D(x) is uniform, there are about √ n points in each cluster. Index the clusters in the ε-net by j ∈ {1, . . . , Nε} and define nj as the number of points in cluster j. Formally, we have, Lemma 1. Suppose D(x) ≥ δ > 0. Then, P[minj nj ≥ 12δ √ n] > 1− √ n exp(−δ √ n/8). Proof. Fix a hypercube in the ε-net. Its volume is εd ≥ (1/n1/2d)d = 1/ √ n. A point lands in this hypercube with probability at least δ/ √ n. Let Y be the number of points in the hypercube. Then, Y is a sum of n independent Bernoullis and E[Y ] ≥ δ √ n. By a Chernoff bound (14, page 70), P[Y < δ √ n/2] ≤ P[Y < E[Y ]/2] < exp(−E[Y ]/8) ≤ exp(−δ √ n/8). By a union bound over the Nε clusters, P[some cluster has fewer than δ √ n/2 points] < Nε exp(−δ √ n/8) ≤ √ n exp(−δ √ n/8). The lemma follows by taking the complement event. For uniform D(x), δ = 1 and every cluster has at least 12 √ n points with high probability. We can now condition on this high probability event that every cluster is large. This means that a cluster’s ATT is an average of many ITEs, which by A5 concentrates at the expected effect for the hypercube. 
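Before turning to the concentration lemmas, here is a minimal sketch of the pre-cluster and merge steps (Steps 1 to 3, under A1 and the box-clustering scheme A7). It is an illustration only: scikit-learn's KMeans stands in for the optimal dynamic-programming 1-d clustering of reference (24), and the variable names and the max_levels cap are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def pcm_precluster_and_merge(X, ite, max_levels=10):
    """Sketch of PCM Steps 1-3: box pre-cluster on an eps-net (A7), compute
    cluster ATTs, then 1-d cluster the ATTs and keep the smallest number of
    levels whose average clustering error is below tau(n) = log(n)/n**(1/(2d)).

    X: (n, d) features in [0, 1]^d (A1); ite: (n,) estimated ITEs.
    """
    n, d = X.shape
    m = int(np.floor(n ** (1.0 / (2 * d))))            # boxes per axis, eps = 1/m
    boxes = np.minimum((X * m).astype(int), m - 1)     # eps-net cell index of each point
    cluster_id = np.unique(boxes, axis=0, return_inverse=True)[1].ravel()

    # Step 2: the ATT of each non-empty box is the average ITE of its points.
    att = np.bincount(cluster_id, weights=ite) / np.bincount(cluster_id)

    # Step 3: merge boxes by 1-d clustering of the ATTs; KMeans is a stand-in
    # for the optimal dynamic-programming 1-d clustering.
    tau = np.log(n) / n ** (1.0 / (2 * d))
    for k in range(1, max_levels + 1):
        km = KMeans(n_clusters=k, n_init=10).fit(att.reshape(-1, 1))
        if km.inertia_ / len(att) <= tau:
            break

    centers = km.cluster_centers_.ravel()
    order = np.argsort(centers)                        # relabel so mu_0 < mu_1 < ...
    relabel = np.empty_like(order)
    relabel[order] = np.arange(len(order))
    return relabel[km.labels_][cluster_id], centers[order]
```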
Recall that the expected effect in level c is defined as µc = ED[v − v̄|c]. We can assume, w.l.o.g., that µ0 < µ1 · · · < µℓ−1. Define νj as the expected average effect for points in the hypercube j and ATTj as the average ITE for points in cluster j. since every cluster is large, every cluster’s ATTj will be close to its expected average effect νj . More formally, Lemma 2. P[maxj |ATTj − νj | ≤ 2 √ log n/γδ √ n] ≥ 1− n−3/2 − √ n exp(−δ √ n/8). Proof. Conditioning on minj nj ≥ 12δ √ n and using A5, we have P [ |ATTj − νj | > 2 √ log n/γδ √ n ∣∣∣min j nj ≥ 12δ √ n ] ≤ exp(−2 log n) = 1/n2. By a union bound, P[maxj |ATTj − νj | > 2 √ log n/γδ √ n | minj nj ≥ 12δ √ n] ≤ Nε/n2. For any events A,B, by total probability, P[A] ≤ P[A | B] + P[B]. Therefore, P[max j |ATTj − νj | > 2 √ log n/γδ √ n] ≤ Nε/n2 + P[min j nj < 1 2δ √ n] To conclude the proof, use Nε ≤ √ n and Lemma 1. A hypercube in the ε-net is homogeneous if it only contains points of one level (the hypercube does not intersect the boundary between levels). Let Nc be the number of homogeneous hypercubes for level c and Nimpure be the number of hypercubes that are not homogeneous, i.e., impure. Lemma 3. Nimpure ≤ αερNε and Nc ≥ Nε(β/∆− αερ). Proof. A6 directly implies Nimpure ≤ αερNε. Only the pure level c or impure hypercubes can contain points in level c. Using A3 and εd = 1/Nε, we have β ≤ P[x ∈ Xc] ≤ (Nc +Nimpure)∆εd ≤ (Nc + αερNε)∆/Nε. The result follows after rearranging the above inequality. The main tools we need are Lemmas 2 and 3. Let us recap what we have. The cluster ATTs are close to the expected average effect in every hypercube. The number of impure hypercubes is an asymptotically negligible fraction of the hypercubes since ε ∈ O(1/n1/2d). Each level has an asymptotically constant fraction of homogeneous hypercubes. This means that almost all cluster ATTs will be close to a level’s expected effect, and every level will be well represented. Hence, if we optimally cluster the ATTs, with fewer than ℓ clusters, we won’t be able to get clustering error close to zero. With at least ℓ clusters, we will be able to get clustering error approaching zero. This is the content of the next lemma, which justifies step 3 in the algorithm. An optimal k-clustering of the cluster ATTs produces k centers θ1, . . . , θk and assigns each cluster ATTj to a center θ(ATTj) so that the average clustering error err(k) = ∑ j(ATTj − θ(ATTj))2/Nε is minimized. Given k, one can find an optimal k-clustering in O(N2ε k) time using O(Nε) space. Lemma 4. With probability at least 1−n−3/2− √ n exp(−δ √ n/8), optimal clustering of the ATTs with ℓ− 1 and ℓ clusters produces clustering errors which satisfy err(ℓ− 1) ≥ (β/∆− αϵρ) ( κ/2− 2 √ log n/γδ √ n )2 for logn√ n < κ 2γδ 16 err(ℓ) ≤ 14αε ρ(µℓ−1 − µ0)2 + 4 log n(1 + αερ)/γδ √ n Proof. With the stated probability, by Lemma 2, all ATTs are within 2 √ log n/γδ √ n of the expected effect for their respective hypercube. This, together with Lemma 3 is enough to prove the bounds. First, the upper bound on err(ℓ). Choose cluster centers µ0, . . . , µℓ−1, the expected effect for each level. This may not be optimal, so it gives an upper bound on the cluster error. Each homogeneous hypercube has a expected effect which is one of these levels, and its ATT is within 2 √ log n/γδ √ n of the corresponding µ. Assign each ATT for a homogeneous hypercube to its corresponding µ. The homogeneous hypercubes have total clustering error at most 4 log n(Nε − Nimpure)/γδ √ n. 
For an impure hypercube, the expected average effect is a convex combination of µ0, . . . , µℓ−1. Assign these ATTs to either µ0 or µℓ−1, with an error at most (2 √ log n/γδ √ n+ 12 (µℓ−1 − µ0)) 2. Thus, Nεerr(ℓ) ≤ 4 log n(Nε −Nimpure) γδ √ n +Nimpure(2 √ log n/γδ √ n+ 12 (µℓ−1 − µ0)) 2 ≤ 4 log n(Nε +Nimpure) γδ √ n + Nimpure(µℓ−1 − µ0)2 2 The upper bound follows after dividing by Nε and using Nimpure ≤ αερNε. Now, the lower bound on err(ℓ − 1). Consider any ℓ − 1 clustering of the ATTs with centers θ0, . . . , θℓ−2. At least Nc ≥ Nε(β/∆ − αϵρ) of the ATTs are within 2 √ log n/γδ √ n of µc. We also know that µc+1 − µc ≥ κ. Consider the ℓ disjoint intervals [µc − κ/2, µc + κ/2]. By the pigeonhole principle, at least one of these intervals [µc∗−κ/2, µc∗+κ/2] does not contain a center. Therefore all the ATTs associated to µc∗ will incur an error at least κ/2 − 2 √ log n/γδ √ n when κ/2 > 2 √ log n/γδ √ n. The total error is Nεerr(ℓ− 1) ≥ Nc∗ ( κ/2− 2 √ log n/γδ √ n )2 . Using Nc∗ ≥ Nε(β/∆− αϵρ) and dividing by Nε concludes the proof. Lemma 4 is crucial to estimating the number of levels. The error is βκ2/4∆(1+o(1)) for fewer than ℓ clusters and 14αε ρ(µℓ−1 − µ0)2(1 + o(1)) for ℓ or more clusters. Any function τ(n) that asymptotically separates these two errors can serve as an error threshold. The function should be agnostic to the parameters α, β, κ,∆, ρ, . . .. In practice, ρ = 1 and since ε ∼ 1/n1/2d, we have chosen τ(n) = log n/nρ/2d. Since err(ℓ − 1) is asymptotically constant, ℓ − 1 clusters can’t achieve error τ(n) (asymptotically). Since err(ℓ) ∈ O(ερ), ℓ clusters can achieve error τ(n) (asymptotically). Hence, choosing ℓ̂ as the minimum number of clusters that achieves error τ(n) will asymptotically output the correct number of clusters ℓ, with high probability, proving part (1) of Theorem 1. We now prove parts (2) and (3) of Theorem 1, which follow from the accuracy of steps 4 and 5 in the algorithm. We know the algorithm asymptotically selects the correct number of levels with high probability. We show that each level is populated by mostly the homogeneous clusters of that level. Lemma 5. With probability at least 1−n−3/2− √ n exp(−δ √ n/8), asymptotically in n, all the Nc ATTs from the homogeneous hypercubes of level c are assigned to the same cluster in the optimal clustering, and no ATTs from a different level’s homogeneous hypercubes is assigned to this cluster. Proof. Similar to the proof of Lemma 4, consider the ℓ disjoint intervals [µc − κ/4, µc + κ/4]. One center θc must be placed in this interval otherwise the clustering error is asymptotically constant, which is not optimal. All the ATTs for level c are (as n gets large) more than κ/2 away from any other center, and at most κ/2 away from θc, which means all these ATTs get assigned to θc. Similar to Lemma 1, we can get a high-probability upper bound of a √ n on the maximum number of points in a cluster. Asymptotically, the number of points in the impure clusters is nimpure ∈ O(ερ √ nNε). Suppose these impure points have expected average effect µ (a convex combination of the µc’s). The number of points in level c homogeneous clusters is nc ∈ Ω( √ nNε). Even if all impure points are added to level c, the expected average effect for the points in level c is E[ITE | assigned to level c] = nimpureµ+ ncµc nimpure + nc = µc +O(ε ρ). (3) Part (2) of Theorem 1 follows from the next lemma after setting ε ∼ 1/n1/2d and ρ = 1. Lemma 6. 
Estimate µ̂c as the average ITE for all points assigned to level c (the cth order statistic of the optimal centers θ0, . . . , θℓ̂−1). Then µ̂c = µc +O(ε ρ + √ log n/n) with probability 1− o(1). Proof. Apply a Chernoff bound. We are taking an average of proportional to n points with expectation in (3). This average will approximate the expectation to within √ log n/n with probability 1− o(1). The details are very similar to the proof of Lemma 2, so we omit them. Part (3) of Theorem 1 now follows because all but the O(ερ) fraction of points in the impure clusters are assigned a correct expected effect. An additional fine-tuning leads to as much as 2× improvement in experiments. For each point, consider the ε-hypercube centered on that point. By a Chernoff bound, each of these n hypercubes has Θ( √ n) points, as in Lemma 1. All but a fraction O(ερ) of these are impure. Assign each point to the center θc that best matches its hypercube-“smoothed” ITE, giving new subpopulations Xc and corresponding subpopulation-effects µ̂c. This EM-style update can be iterated. Our simulations show the results for one E-M update. 4 DEMONSTRATION ON SYNTHETIC DATA We use a 2-dimensional synthetic experiment with three levels to demonstrate our pre-cluster and merge algorithm (PCM). Alternatives to pre-clustering include state-of-the-art methods that directly predict the effect such as meta-learners, and the Bayes optimal classifier based on ITEs. All methods used a base gradient boosting forest with 400 trees to estimate counterfactuals. The subpopulations in our experiment are shown in Figure 1, where black is effect-level 0, gray is level 1 and white is level 2. We present detailed results with n = 200K. Extensive results can be found in the appendix. Let us briefly describe the two existing benchmarks we will compare against. X-learner (12), is a meta-learner that estimates heterogeneous treatment effects directly from ITEs. For the outcome and effect models of X-Learner we use a base gradient boosting learner with 400 estimators (6) implemented in scikit-learn (16). For the propensity model we use logistic regression. Bayes Optimal uses the ITEs to reconstruct the subpopulations, given the number of levels and the ground-truth outcome distribution y(t, c) from Figure 1. The Bayes optimal classifier is: cBayes = 0 if ITE ≤ 0.5, cBayes = 1 if 0.5 < ITE ≤ 1.5, cBayes = 2 if 1.5 < ITE. We also use these thresholds to reconstruct subpopulations for X-learner’s predicted ITEs. Note: Neither the thresholds nor the number of levels are available in practice. We compare the benchmark subpopulations reconstructed with these thresholds to further showcase the power of our algorithm’s subpopulations, which outperform the competition without access to the forbidden information. Let ci be the level of subject i and ÎTEi the estimated ITE. The error is |µci − ÎTEi|, and we report the mean absolute error in the table below. Our algorithm predicts a level ĉi and uses its associated effect µ̂ĉi as ÎTEi. The other methods predict ITE directly for which we compute mean absolute error. As mentioned above, we also show the error for the optimally reconstructed subpopulations, which is not possible in practice, but included for comparison (red emphasizes not available in practice). 
n    | PCM (this work) | X-Learner (Subpopulations) | X-Learner (Predicted-ITE) | Bayes Optimal (Subpopulations) | Bayes Optimal (Raw-ITE)
20K  | 0.35 ± 0.39     | 3.04 ± 1.11                | 3.07 ± 2.41               | 4.57 ± 1.33                    | 4.59 ± 3.49
200K | 0.109 ± 0.22    | 1.44 ± 0.83                | 1.50 ± 1.38               | 4.22 ± 1.28                    | 4.24 ± 3.22
2M   | 0.036 ± 0.13    | 0.34 ± 0.47                | 0.46 ± 0.56               | 4.01 ± 1.25                    | 4.03 ± 3.05

Our algorithm is about 10× better than the existing benchmarks even though we do not use the forbidden information (the number of levels and the optimal thresholds). It is also clear that X-learner is significantly better than Bayes optimal with just the raw ITEs. The next table shows the subpopulation effects; red again indicates the use of forbidden information on the number of levels and optimal thresholds. The ground truth effects are µ0 = 0, µ1 = 1, µ2 = 2.

n    | PCM (this work): µ̂0, µ̂1, µ̂2 | X-Learner: µ̂0, µ̂1, µ̂2 | Bayes Optimal: µ̂0, µ̂1, µ̂2
20K  | -0.21, 0.91, 2.07            | -2.5, 0.99, 4.44        | -3.94, 1.00, 5.99
200K | 0.06, 0.963, 1.95            | -1.16, 1.01, 2.87       | -3.62, 1.00, 5.61
2M   | 0.04, 0.996, 1.993           | -0.26, 0.99, 2.07       | -3.41, 1.00, 5.41

Note that µ̂1 for X-learner and Bayes optimal is accurate, an artefact of knowing the optimal thresholds (not realizable in practice). A detailed comparison of our algorithm (PCM) with the X-Learner and Bayes optimal subpopulations is shown in Figure 2. PCM clearly extracts the correct subpopulations. X-Learner and Bayes optimal, even given the number of levels and the optimal thresholds, do not come visually close to PCM. X-learner does display some structure, but Bayes optimal on just the ITEs fails badly. This is further illustrated in the ITE-histograms in the second row: PCM clearly shows three levels, whereas the X-learner ITEs and the raw ITEs suggest just one high-variance level. The third row shows the confusion matrices for subpopulation assignment. The red indicates use of information forbidden in practice; we include it for comparison. The confusion matrix for PCM, obtained without the forbidden information, clearly dominates the other methods, which use it. The high noise in the outcomes undermines the other methods, while PCM remains robust. In high-noise settings, direct use of the ITEs without some form of pre-clustering fails.

Summary of experiments with synthetic data. Our algorithm accurately extracts subpopulations at different effect-levels, whereas analysis of individual treatment effects fails when there is noise. Our experiments show that practice follows the theory (more detailed experiments, including how cluster homogeneity converges to 1, are shown in the appendix). We note that there is a curse of dimensionality: the convergence rate is O(n^{-1/(2d)}).

5 CONCLUSION

Our work extends the realm of causal analysis to non-targeted trials, where the treated population can consist of large subpopulations with different effects. Our algorithm uses a plug-and-play pre-cluster and merge strategy that provably untangles the different effects. Experiments on synthetic data show a 10× or more improvement over existing HTE benchmarks. In our analysis, we did not attempt to optimize the rate of convergence; optimizing this rate could lead to improved algorithms. Our work allows causal-effect analysis to be used in settings such as health interventions, where wide deployment over a mostly healthy population would mask the effect on the sick population. Our methods can seamlessly untangle the effects without knowledge of what sick and healthy mean. This line of algorithms can also help in identifying inequities between the subpopulations.
One significant contribution is to reduce the untangling of subpopulation effects to a 1-dim clustering problem, which we solve efficiently. This approach may be of independent interest beyond causal-effect analysis: the effect is just a function that takes on ℓ levels, so our approach can be used to learn any function that takes on a finite number of levels. It could also be used to learn a piecewise approximation to an arbitrary continuous function on a compact set.

A APPENDIX

We provide more detailed experimental results, specifically results for different n (20K, 200K, and 2M) and a comparison of different clustering methods in the pre-clustering phase: box-only, PCM (box plus 1 step of E-M improvement), and K-means. To calculate the counterfactual for treated subjects, we train a gradient boosted forest on the control population.

B CONVERGENCE WITH n

B.1 RECONSTRUCTED SUBPOPULATIONS

We show subpopulation reconstructions for n ∈ {20K, 200K, 2M}.

[Figure: reconstructed subpopulations for PCM (this work), X-Learner, and Bayes Optimal at n = 20K, 200K, and 2M.]

Even with just 20K points in this very noisy setting, PCM is able to extract some meaningful subpopulation structure, while none of the other methods can.

B.2 ITE HISTOGRAMS

We show the ITE histograms for n ∈ {20K, 200K, 2M}.

[Figure: ITE histograms for PCM (our work), X-Learner, and the raw ITEs at n = 20K, 200K, and 2M.]

C DIFFERENT PRE-CLUSTERING METHODS

We show the reconstructed subpopulations and effect errors for different pre-clustering methods. Box-clustering without any E-M step is also provably consistent. Our algorithm PCM uses box-clustering followed by an E-M step to improve the subpopulations using smoothed ITEs. We also show K-means pre-clustering, for which we did not prove any theoretical guarantees.

[Figure: reconstructed subpopulations for PCM (this work), box-only, and K-means pre-clustering at n = 20K, 200K, and 2M.]

[Figure: ITE histograms for PCM (our work), box-only, and K-means pre-clustering at n = 20K, 200K, and 2M.]

Error Table.

n    | PCM (this work) | BOX           | KMEANS
20K  | 0.35 ± 0.39     | 0.50 ± 0.52   | 0.54 ± 0.50
200K | 0.109 ± 0.22    | 0.17 ± 0.35   | 0.20 ± 0.37
2M   | 0.036 ± 0.13    | 0.078 ± 0.214 | 0.065 ± 0.20

D CLUSTER HOMOGENEITY

To further show how practice reflects the theory, we plot average cluster homogeneity versus n. The cluster homogeneity is the fraction of points in a cluster that are from its majority level. Our entire methodology relies on the pre-clustering step producing a vast majority of homogeneous clusters. The rapid convergence to homogeneous clusters enables us to identify the correct subpopulations and the corresponding effects via pre-cluster and merge.

[Figure: mean cluster homogeneity versus the number of points (10^2 to 10^6) for box pre-clustering.]
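For reference, the homogeneity measure plotted above can be computed directly from cluster assignments and ground-truth levels; this is a small illustrative sketch and the array names are assumptions.

```python
import numpy as np

def mean_cluster_homogeneity(cluster_id, level):
    """Average over clusters of the fraction of points that belong to the
    cluster's majority level. cluster_id, level: (n,) integer arrays."""
    fractions = []
    for c in np.unique(cluster_id):
        levels_in_c = level[cluster_id == c]
        counts = np.bincount(levels_in_c)
        fractions.append(counts.max() / counts.sum())
    return float(np.mean(fractions))
```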
1. What is the main contribution of the paper regarding estimating treatment effect heterogeneity? 2. What are the strengths and weaknesses of the proposed algorithm, particularly in terms of its assumptions and limitations? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any concerns or questions regarding the paper's approach, such as the use of ATT instead of ATE or the discussion of the difference between ATT and EFF? 5. Is there any confusion regarding certain statements or parts of the paper, such as the role of unbiased counterfactual estimation or the relationship between the algorithm's steps and figures? 6. How does the reviewer view the paper's comparison to prior works, specifically the connection to sorted group average treatment effect, classification analysis, and endogenous stratification analysis?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The authors study the problem of estimating treatment effect heterogeneity when the subgroup indicators are unknown (e.g. sick versus healthy). In particular, the researcher knows that treatment effect heterogeneity only depends on the latent subgroup indicators, which are functions of observed covariates, but does not know how many subgroups there are and which subgroup each individual belongs to. The goal is to recover the correct number of subgroups, the effect for each subgroup, and to assign each individual to the correct subgroup. The authors propose an algorithm based on clustering and prove that it asymptotically achieves these three goals under seven assumptions. Strengths And Weaknesses See below Clarity, Quality, Novelty And Reproducibility Clarity *The algorithm intuition is very clear *Why is equation (2) about ATT instead of ATE? Relatedly, in the algorithm, isn’t the average of ITEs the ATE rather than the ATT? This was confusing *The discussion of the difference between ATT and EFF used diction that is atypical in causal inference. A more typical discussion would say that under the stated identifying assumptions, ATT and EFF do not coincide. “rendering ATT useless” is a bit of a jarring expression. *Some statements are too loose. “If unbiased counterfactual estimation is not possible, then any form of causal effect analysis is doomed.” Do the authors mean selection bias? Regularization bias? Either way, causal analysis is not doomed—there are entire literatures about how to correct for both biases. Please clarify which bias is being considered and fix the language. *A figure would be helpful to visualize the different steps of the algorithm. *I found this sentence to be confusing: “the counterfactual estimates use a predictor learned from the independent control population”. Is that the untreated subpopulation? Something else? Please clarify Quality *I crudely checked the results, but they seem rigorous *I was surprised that the results did not appeal to smoothness of h or compactness of X_c. Is this not required for inverting h? *A6 seems to be a strong assumption. Please discuss or formally sketch why this assumption is plausible. It would be nice to comment further on how the curse of dimensionality appears in A6. e^d N_{impure}<… seems to be the constraint. Is that correct? *Overall, the strongest assumption seems to be that the treatment effect heterogeneity only depends on C, an integer, and the analyst knows this fact. This is formally stated in A2, but the introduction somehow didn’t convey this point forcefully enough. Novelty *The authors introduce the language of effect and side effect, but the overall goal is closely tied to a known problem that goes by several names: sorted group average treatment effect, classification analysis, and endogenous stratification analysis. I would like the authors to compare their problem statement with the problem statement of sorted group average treatment effect in Chernozhukov-Demirer-Duflo-Fernandez Val (2018). This reference also provides a thorough discussion of related works that are relevant for this submission and worth citing. *While the problem is related to existing problems, the solution seems to me to be innovative and interesting.
ICLR
Title AMA: Asymptotic Midpoint Augmentation for Margin Balancing and Moderate Broadening Abstract Feature augmentation in neural networks is an effective regularization method to adjust the margin in feature space. However, a similar approach in terms of directly repositioning features, contrastive learning, has reported collapse problems of inter-class and intra-class features. The augmentation approaches are also related to the issues, but have been barely analyzed. In this paper, we show that feature augmentation methods are also affected by the collapse problems and address them by proposing a novel method to generate augmented features gradually approaching the midpoint of inter-class feature pairs, called asymptotic midpoint augmentation (AMA). The method induces two effects: 1) balancing the margin for all classes and 2) only moderately broadening the margin until it holds maximal confidence. We empirically analyze alignment and uniformity to show vulnerability to the problems in a toy task. Then, we validate its impacts in original, long-tailed, and coarse-to-fine transfer tasks on CIFAR-10 and CIFAR-100. To enhance generality, we additionally analyze its relation to a representative input-level augmentation such as Mixup. 1 INTRODUCTION Augmenting features in neural networks has been effective in regularization by handling margin in feature space( Verma et al. (2019)). The approach generates a feature, which indicates a hidden representation of a layer created from an input, and its confidence information from involved original features. A similar approach in the perspective of directly repositioning features, contrastive learning ( Chen et al. (2020) He et al. (2020)), learns features distant from a decision boundary by getting centroids of classes further away from each other, and gathering positive pairs closer, which decreases intra-class feature distance and increases inter-class feature distance, measured by alignment and uniformity, respectively. In the contrastive learning literature, two problems have been recently discussed: collapse of intra-class and inter-class features ( Li et al. (2022) Chen et al. (2022)). The first problem is reported in coarse-to-fine transfer learning( Chen et al. (2022)), where all features are closely located on the centroids of each class as the alignment excessively decreases. The second problem is introduced in Supervised Contrastive learning (SupCon) ( Khosla et al. (2020)), which uses labels to create positive and negative pairs. The method outperforms other self-supervised learning methods. However, SupCon causes unbalanced margins on long-tailed datasets by overwhelming numerical dominance of the head classes, and it decreases the image classification performance on them. Feature augmentation may also be affected by the collapse problems because of direct feature adjustment. However, the issues have not been deeply analyzed. In this paper, we show that feature augmentation also suffers from the problems by analyzing alignment and uniformity, and propose a novel feature augmentation method to generate augmented features gradually approaching a decision boundary, called Asymptotic Midpoint Augmentation (AMA). AMA has three parts: 1) generating a pool of augmented features by interpolating inter-class feature pairs and pseudo labeling, 2) class-unbiased random sampling, and 3) adaptive interpolation ratio control. 
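To fix intuition for part 1), the toy sketch below interpolates an inter-class feature pair toward their midpoint and forms matching soft pseudo labels; the symmetric interpolation, the ratio schedule toward 0.5, and the label weighting are illustrative assumptions, not the exact formulation given later in the paper.

```python
import torch

def midpoint_augment(z_i, z_j, y_i, y_j, lam):
    """Toy sketch: pull an inter-class feature pair toward their midpoint.

    z_i, z_j: features from two different classes; y_i, y_j: one-hot labels.
    lam in [0, 0.5]: interpolation ratio, assumed to be increased toward 0.5
    over training so the augmented features approach the midpoint.
    """
    z_aug_i = (1.0 - lam) * z_i + lam * z_j    # moved from class i toward the midpoint
    z_aug_j = (1.0 - lam) * z_j + lam * z_i    # moved from class j toward the midpoint
    y_aug_i = (1.0 - lam) * y_i + lam * y_j    # matching soft pseudo labels
    y_aug_j = (1.0 - lam) * y_j + lam * y_i
    return torch.stack([z_aug_i, z_aug_j]), torch.stack([y_aug_i, y_aug_j])
```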
The proposed method creates augmented features that make the margin balanced and moderately broad by asymptotically moving them toward the midpoint, as shown in Figure 1. As a result, the method attains higher uniformity than before while keeping alignment sufficiently high. In an experiment on a toy task, we validate the effect of the collapses by measuring alignment and uniformity for AMA and for other feature relocation methods such as SupCon (Khosla et al., 2020) and Manifold Mixup (Verma et al., 2019). We empirically verify the impact of AMA in comparison with these feature augmentation methods on image classification tasks with long-tailed, coarse-to-fine transfer, and original datasets. Additionally, we analyze the relation of AMA to a representative input-level augmentation method that supplies a different type of information, Mixup (Zhang et al., 2017). In summary, our main contributions are four-fold:
• we raise the inter-class and intra-class collapse issues in feature augmentation approaches and show their impact by analyzing alignment and uniformity;
• we propose a novel feature augmentation method, asymptotic midpoint augmentation, that addresses the problems by balancing and moderately broadening the margin in feature space;
• we empirically analyze the effects and performance of AMA and other feature augmentation methods on image classification tasks with long-tailed datasets and coarse-to-fine transfer learning, which are sensitive to the collapses;
• we additionally confirm that AMA maintains performance on the original datasets, which may contain the collapse problems to an uncertain degree, compare AMA with a representative input-level augmentation method, and analyze their relation.

2 BACKGROUND
Intra-class collapse The contrastive loss pulls the features of positive pairs so close together that they become nearly invariant to nuisance factors. In contrastive learning, the encoder is forced to place similar samples at similar locations in the feature space. However, the attraction between positive pairs can make features gather at a single point. This phenomenon limits the expressiveness of the model and is especially critical for tasks such as coarse-to-fine transfer learning. More specifically, if a model is pre-trained with coarse-grained labels and then fine-tuned with fine-grained labels, the model is unlikely to classify fine-grained samples well because of the collapsed features. In particular, features of the same class are prone to collapse onto the class centroid in supervised contrastive learning. We call this problem intra-class collapse. To measure it, intra-class alignment has been proposed, which quantifies the closeness of positive pairs (Wang & Isola, 2020; Li et al., 2022). The intra-class alignment is computed as

$\mathcal{A} = \frac{1}{C}\sum_{i=1}^{C}\frac{1}{|\mathcal{F}_i|^2}\sum_{v_j, v_k \in \mathcal{F}_i}\lVert v_j - v_k\rVert_2$,   (1)

where C is the number of classes, v is a feature vector, F_i is the set of features from class i, and ||·||_2 denotes the L2 norm.

Inter-class collapse Common contrastive learning methods achieve high performance thanks to the repulsion between negative samples, which pushes class centroids further apart. However, supervised contrastive learning tends to collapse features of different classes when the dataset is imbalanced, as in long-tailed datasets. More specifically, the model naturally concentrates on enlarging the distance between head classes to minimize the loss, so the contrastive loss is not weighted evenly across all classes. In this situation, the features of tail classes tend to collapse onto each other. We call this inter-class collapse; it prevents the model from learning a regular simplex of class features, which is a crucial factor when training contrastive models on imbalanced datasets. Inter-class collapse can be measured with inter-class uniformity and neighborhood uniformity, metrics that favor a uniform distribution of representations on the unit hypersphere (Wang & Isola, 2020; Li et al., 2022). Inter-class uniformity measures the pairwise distance between different classes, and neighborhood uniformity inspects the convergence of tail classes. The two metrics, U and U_k, are computed as

$\mathcal{U} = \frac{1}{C(C-1)}\sum_{i=1}^{C}\sum_{j=1,\, j\neq i}^{C}\lVert \bar{v}_i - \bar{v}_j\rVert_2$,   (2)

$\mathcal{U}_k = \frac{1}{Ck}\sum_{i=1}^{C}\min_{j_1,\dots,j_k}\Big(\sum_{l=1}^{k}\lVert \bar{v}_i - \bar{v}_{j_l}\rVert_2\Big)$,   (3)

where v̄_i is the center of the samples from class i on the hypersphere, $\bar{v}_i = \sum_{v_j\in\mathcal{F}_i} v_j \,/\, \lVert\sum_{v_j\in\mathcal{F}_i} v_j\rVert_2$. In this paper, we do not normalize the class centers by their norm, for a fair comparison with the original method and the feature augmentation methods, which do not aim to learn representations on the hypersphere.
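To make the three metrics concrete, the following is a minimal PyTorch sketch of Eqs. 1-3 as we read them. The function and variable names are our own, and the class centers are left unnormalized, following the paper's stated choice for its comparisons.

import torch

def collapse_metrics(feats: torch.Tensor, labels: torch.Tensor, k: int = 3):
    """Intra-class alignment (Eq. 1), inter-class uniformity (Eq. 2), and
    top-k neighborhood uniformity (Eq. 3). feats: N x d, labels: N."""
    classes = labels.unique()
    C = len(classes)

    align, centers = 0.0, []
    for c in classes:
        f = feats[labels == c]                        # features of one class, |F_c| x d
        align = align + torch.cdist(f, f).mean()      # (1/|F_c|^2) * sum of pairwise L2 distances
        centers.append(f.mean(dim=0))                 # unnormalized class center (paper's choice)
    align = align / C

    centers = torch.stack(centers)                    # C x d
    pair = torch.cdist(centers, centers)              # C x C center-to-center distances

    uniform = pair.sum() / (C * (C - 1))              # Eq. 2: the zero diagonal does not contribute

    pair.fill_diagonal_(float('inf'))                 # exclude self-distances (requires k <= C-1)
    uniform_k = pair.topk(k, largest=False).values.sum() / (C * k)   # Eq. 3

    return align.item(), uniform.item(), uniform_k.item()

# e.g., a, u, u3 = collapse_metrics(penultimate_feats, targets, k=3)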
3 ASYMPTOTIC MIDPOINT AUGMENTATION
In this section, we first present our motivation based on preliminary experiments that measure alignment and uniformity for augmentation and contrastive learning methods. Then, we introduce asymptotic midpoint augmentation (AMA) and analyze its effects on the feature distribution and the decision boundaries.

3.1 MOTIVATION
Experimental Setting To quantitatively measure intra-class and inter-class collapse, we inspect intra-class alignment, inter-class uniformity, and top-3 neighborhood uniformity in an image classification task on long-tailed CIFAR-100 with the imbalance factor set to 100. We compute these metrics with Eqs. 1, 2, and 3. Note that we did not normalize the class centers when computing uniformity, for a fair comparison. The experimental settings are the same as in Section 4.3.

Collapse Problems Are Important in Feature Augmentation Table 1 shows evidence of the collapses and of their non-negligible impact. First, the augmentation methods show higher intra-class alignment than SupCon. The optimal intra-class alignment is uncertain and varies with many factors, but SupCon is known to have excessively low intra-class alignment when intra-class collapse occurs, so it is reasonable to conclude that the augmentation methods alleviate the collapse effect. As described in the background, inter-class collapse reduces inter-class uniformity and neighborhood uniformity, and the more recent augmentation methods attain progressively higher values of both. These two observations show that the collapses can potentially be resolved via feature augmentation, and the corresponding significant increase in accuracy implies that the impact of the collapses cannot be ignored. Additionally, although Mixup is a data augmentation method in input space, it also improves these measures, which highlights the difference between the augmentation approach and contrastive learning. We present this extended experiment in Section 4.6.

3.2 PROPOSED METHOD
Notation Let D = {(x_i, c_i) | 1 ≤ i ≤ n, i ∈ ℕ} be the set of input-label pairs, where x_i ∈ ℝ^d and c_i ∈ C for the class index set C and pair index i. We define y_i = [y_1, y_2, ..., y_{|C|}] ∈ ℝ^{|C|} as the one-hot encoding of c_i, where y_{c_i} = 1. The feature vector of the i-th input sample x_i is denoted z_i ∈ ℝ^{|C|}.
The confidence p is obtained as σ(z), where σ(·) is a function that normalizes an input vector into a range that admits a probabilistic interpretation, such as softmax; in this paper we use the softmax function for σ(·). Θ and Φ denote the parameters of the networks.

Interpolation-Based Feature Generation and Pseudo Labeling In AMA, augmented features and labels are created as

$z_{(i,j)} = \alpha \cdot z_i + (1-\alpha)\cdot z_j$, $\quad c_{(i,j)} = \begin{cases} c_i, & \text{if } \alpha \ge 0.5 \\ c_j, & \text{if } \alpha < 0.5 \end{cases}$   (4)

where z_(i,j) is an augmented feature generated by interpolating z_i and z_j selected from different classes, and c_(i,j) is its pseudo label. This process takes place in the feature space, and the augmented features are moved asymptotically toward the decision boundary by controlling the parameter α. Unlike other interpolation-based methods, the label is assigned entirely to one side.

Class-Unbiased Random Sampling We next consider how to sample the original features for interpolation from two different classes so that the pairwise margins between them are balanced. For this purpose, original features are randomly selected in every mini-batch from the following distribution. Let D_B = {(x_{B,i}, c_{B,i}) | 1 ≤ i ≤ m, i ∈ ℕ} be the input-label pairs in a mini-batch of size m. Then the probability of selecting (x_{B,i}, c_{B,i}) from D_B for interpolation is

$P(x_{B,i}) = \frac{1}{C_B}\cdot\frac{1}{N_{c_i}}$,   (5)

where C_B is the number of classes in the mini-batch and N_{c_i} is the number of samples of class c_i in the mini-batch. This sampling scheme encourages the decision boundary to lie in the middle of the two classes involved while maximizing the margin.

Asymptotic Move of Augmented Features Confidence is an important factor in estimating the decision boundary. However, it is unreliable to treat the pseudo labels as ground truth early in training, because the network's predictions are still often wrong. To reduce this risk, we propose a scheduler that updates α according to the training accuracy:

$\alpha = f(v_{acc}) = e^{-\beta\cdot v_{acc}}$,   (6)

where v_acc ∈ [0, 1] is the training accuracy at each epoch and β is a hyperparameter that decides how quickly α decreases as the training accuracy increases. We set β to 0.67, for which α decays exponentially from 1.0 to about 0.5, and we empirically found that performance is consistently best at β = 0.67 except in the coarse-to-fine transfer learning setting.

Algorithm 1 Example of Applying AMA to Training a Neural Network for Classification
Input: model parameters Θ and Φ, cross-entropy loss L_CE, AMA loss L_AMA, mini-batch size M, number of mini-batches N, balancing parameter α, learning rate η
Output: balanced and moderately broad margin
1: D ← a set of pairs of input samples and labels
2: f_Θ ← encoder with parameters Θ
3: g_Φ ← classifier with parameters Φ
4: α ← 1.0
5: for epoch = 1, 2, ..., T do
6:   for i = 1, 2, ..., N do
7:     D_B ← the pairs of input samples and labels in the i-th mini-batch
8:     X ← {x_{B,1}, x_{B,2}, ..., x_{B,M}}
9:     Z ← f_Θ(X)
10:    Z_s ← original features selected via class-unbiased random sampling by Eq. 5
11:    Generate augmented features Z_(·,·) and labels c_(·,·) from Z_s by Eq. 4
12:    L_CE ← cross-entropy loss from Z by Eq. 7
13:    L_AMA ← AMA loss from Z_(·,·) by Eq. 8
14:    L ← L_CE + L_AMA
15:    Θ ← Θ − η∇_Θ L
16:    Φ ← Φ − η∇_Φ L
17:    Update α by Eq. 6
18:  end for
19: end for

Training Loss for Augmented Features AMA applies the cross-entropy loss to the augmented features in the same way as to the original features and combines it with the original cross-entropy loss as follows:

$\mathcal{L}_{CE} = \sum_{z \in Z}\sum_{k=1}^{C} -y_k \log p_k$, where $p = \sigma(z)$,   (7)

$\mathcal{L}_{AMA} = \sum_{z_{(i,j)} \in Z_{(\cdot,\cdot)}}\sum_{k=1}^{C} -y^{(i,j)}_k \log p^{(i,j)}_k$, where $p^{(i,j)} = \sigma(z_{(i,j)})$,   (8)

where Z and Z_(·,·) are the sets of original features and selected augmented features, respectively, and p_k is the probability of the k-th class. An example of integrating AMA into a standard classification pipeline is shown in Algorithm 1.
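As a concrete illustration, the following is a minimal PyTorch-style sketch of one way Eqs. 4-8 and Algorithm 1 could be implemented. All names are our own; we treat the |C|-dimensional features z as the classifier logits (our reading of the notation, not something the paper states explicitly), and because the number of augmented pairs per batch is not specified, we default it to the batch size.

import math
import torch
import torch.nn.functional as F

def ama_alpha(train_acc: float, beta: float = 0.67) -> float:
    """Eq. 6: the interpolation ratio decays from 1.0 toward ~0.5 as training accuracy rises."""
    return math.exp(-beta * train_acc)

def class_unbiased_pairs(labels: torch.Tensor, n_pairs: int):
    """Eq. 5: pick each class present in the mini-batch with equal probability, then a
    uniform sample within it, and pair samples drawn from two different classes."""
    labels = labels.detach().cpu()                           # index bookkeeping on CPU
    classes = labels.unique()
    idx_per_class = [(labels == c).nonzero(as_tuple=True)[0] for c in classes]
    pairs = []
    for _ in range(n_pairs):
        ci, cj = torch.randperm(len(classes))[:2].tolist()   # two distinct classes, uniform over classes
        i = idx_per_class[ci][torch.randint(len(idx_per_class[ci]), (1,))].item()
        j = idx_per_class[cj][torch.randint(len(idx_per_class[cj]), (1,))].item()
        pairs.append((i, j))
    return pairs

def ama_loss(logits: torch.Tensor, labels: torch.Tensor, alpha: float, n_pairs: int = None):
    """Eqs. 4 and 8: interpolate inter-class pairs of |C|-dimensional features (taken here
    to be the classifier logits), give each augmented feature the label of its dominant
    side, and apply cross-entropy to the augmented features."""
    if labels.unique().numel() < 2:
        return logits.new_zeros(())                          # need two classes to interpolate
    n_pairs = n_pairs if n_pairs is not None else logits.size(0)
    pairs = class_unbiased_pairs(labels, n_pairs)
    i = torch.tensor([p[0] for p in pairs], device=logits.device)
    j = torch.tensor([p[1] for p in pairs], device=logits.device)
    z_aug = alpha * logits[i] + (1.0 - alpha) * logits[j]    # Eq. 4
    y_aug = labels[i] if alpha >= 0.5 else labels[j]         # pseudo label of the dominant side
    # Eq. 8 sums over augmented features; a mean reduction is a common practical alternative.
    return F.cross_entropy(z_aug, y_aug, reduction='sum')

# One training step in the spirit of Algorithm 1 (our sketch, not the authors' code):
#   logits = classifier(encoder(x))
#   loss = F.cross_entropy(logits, y, reduction='sum') + ama_loss(logits, y, alpha)
#   loss.backward(); optimizer.step()
#   alpha = ama_alpha(current_train_accuracy)   # Eq. 6, updated from the running training accuracy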
3.3 EFFECT ANALYSIS
We explain the margin-balancing and moderate margin-broadening effects of AMA and examine them empirically on a simple classification task with a long-tailed toy dataset via qualitative and quantitative analysis.

Margin Balancing AMA forces the decision boundary to lie near the midpoint of inter-class features, because the optimum of the AMA loss is obtained when the boundary passes through the midpoint, for the following reasons: 1) class-unbiased random sampling selects the same number of augmented features for every class, 2) the expected distances of the two augmented features to their midpoint are equal, and 3) the sum of their confidences, determined by the distance d, is 2σ(0.5 + d), which takes its maximum at the midpoint (d = 0). By repeatedly applying this guidance toward the midpoint over many updates, the asymptotic move of the augmented features reduces the possibility of the boundary settling at intermediate points between the original and augmented features. Because the AMA loss converges the boundary toward the midpoint, its mixture with other losses is still adjusted to balance the margin.

Moderate Margin Broadening AMA broadens the margin relative to the original network. In general, a loss that maximizes confidence increases the margin in the simple relation between a feature and the decision boundary. AMA adds the gradients of the augmented features as guidance in the same direction, because these features are interpolations of original features and share their label. On the other hand, the original features stop moving further away from the boundary once they obtain maximal confidence; because the gradients are then nearly zero, the distance of intra-class features to their centroids is moderately preserved without excessive converging pressure.

Experimental Setting We randomly generated [1000, 500, 100, 10] training samples and [200, 200, 200, 200] test samples around (-3, 3), (3, 3), (3, -3), and (-3, -3) for four different classes in R^2, respectively. All points were sampled by adding Gaussian noise with mean 0 and variance 1 to the class centers. We used a 4-layer neural network with 128-64-2 hidden units for the baselines and AMA. We used the SGD optimizer with momentum 0.9 and weight decay 5e-4, and an initial learning rate of 0.1. We used 16 mini-batches, and the total number of epochs was 100. For SupCon, we used the first three layers as an encoder and trained it with the same settings except that the number of epochs was set to 600; the last hidden layer was then used as a classifier to predict labels with the same settings. To compare the margins, we visualized the feature vectors of the input samples as points and their confidences as a heat map in the 2-dimensional space. Moreover, we analyzed various distances to quantitatively compare how the methods affect the margin.
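A small sketch of how the toy long-tailed dataset described above could be generated; the function name and seed handling are our own choices.

import torch

def make_toy_longtail(seed: int = 0):
    """Toy long-tailed 2-D dataset from Section 3.3: four Gaussian blobs with
    [1000, 500, 100, 10] training and [200, 200, 200, 200] test samples."""
    torch.manual_seed(seed)
    centers = torch.tensor([[-3., 3.], [3., 3.], [3., -3.], [-3., -3.]])
    train_counts, test_counts = [1000, 500, 100, 10], [200, 200, 200, 200]

    def sample(counts):
        xs, ys = [], []
        for c, (center, n) in enumerate(zip(centers, counts)):
            xs.append(center + torch.randn(n, 2))            # unit-variance Gaussian noise around the center
            ys.append(torch.full((n,), c, dtype=torch.long))
        return torch.cat(xs), torch.cat(ys)

    return sample(train_counts), sample(test_counts)

# (x_tr, y_tr), (x_te, y_te) = make_toy_longtail()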
Result and Analysis As shown in Figure 2a, AMA learns a more balanced margin than the original and SupCon methods. This is shown by the critically narrow areas the baselines assign to the tail classes (labels 2 and 3) compared to the areas for the head classes (labels 0 and 1). In particular, SupCon assigns an extremely large area to the head classes, while AMA maintains a relatively similar distance to all boundaries. To investigate the effect of moderate margin broadening, we quantitatively analyze the original method, SupCon, and AMA, as shown in Figure 2b. D_relative indicates the relative margin between inter-class features compared to the total size of the feature distribution. AMA shows the best D_relative, which helps increase inter-class uniformity and neighborhood uniformity while maintaining a low D_max. SupCon improves inter-class uniformity by increasing D_centroid, but its D_max grows to more than about 7× that of AMA. This observation implies that AMA only moderately broadens the margin, without the excessive expansion of the feature distribution seen with SupCon.

4 EXPERIMENTS
We selected two methods as baselines to compare with AMA: SupCon, which exhibits our target problem well, and Manifold Mixup, a representative feature augmentation method. In the following, all experiments were run on three different random seeds, and performance is reported as the mean and standard deviation. In AMA, β was set to 0.67 by default, and we note explicitly when a different value is used.

4.1 COMMON SETTINGS
We conducted experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, which are commonly used image classification benchmarks. We used VGG11 (Simonyan & Zisserman, 2014), ResNet32, ResNet50 (He et al., 2016), and DenseNet-BC with a growth rate of 12 (Huang et al., 2017). SupCon and Manifold Mixup used the same environmental settings, described below for each task. In Manifold Mixup, we interpolated features only right before the classifier for a fair comparison.

4.2 COARSE-TO-FINE TRANSFER LEARNING TASK
Experimental Setting We conducted coarse-to-fine transfer learning on CIFAR-10 and CIFAR-100. We first trained a ResNet50 on a coarse-grained dataset and fine-tuned the linear classifier on a fine-grained dataset. We used 128 mini-batches and SGD with momentum 0.9 and weight decay 5e-4. For CIFAR-100, we set the initial learning rate to 0.1 and divided it by five at the 60th, 120th, and 160th epochs, with 200 epochs in total. We composed the coarse-grained dataset by grouping the original classes into their super-classes; the fine-grained dataset is the same as the original dataset. For CIFAR-10, we followed the hyperparameter and coarse-to-fine dataset settings of Chen et al. (2022).

Result and Analysis As shown in Table 2, AMA achieved the second-best test accuracy, while SupCon suffers from intra-class collapse, as indicated by its low accuracy. In a similar vein, Manifold Mixup and AMA also exhibit intra-class collapse, showing lower accuracy than the original method. However, AMA performs better than SupCon and Manifold Mixup, which means that AMA alleviates intra-class collapse in coarse-to-fine transfer learning.

4.3 LONG-TAILED TASK
Experimental Setting We used ResNet32, 256 mini-batches, SGD with momentum 0.9 and weight decay 5e-4, and 400 epochs. We set the initial learning rate to 0.0 and warmed it up to 0.015 over the first ten epochs. After that, we divided the learning rate by ten at the 360th and 380th epochs. More specific settings are described in Cui et al. (2021).
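For concreteness, the warm-up and step schedule described above could be written as the following helper; the linear ramp during warm-up is our assumption, since the text only gives the start and end values.

def long_tailed_lr(epoch: int, base_lr: float = 0.015) -> float:
    """Learning-rate schedule from Section 4.3: ramp up to the base rate over the first
    10 epochs, then divide by 10 at epochs 360 and 380 (400 epochs in total)."""
    if epoch < 10:
        return base_lr * (epoch + 1) / 10   # warm-up from near zero; linear ramp is assumed
    if epoch < 360:
        return base_lr
    if epoch < 380:
        return base_lr / 10
    return base_lr / 100

# per epoch: for g in optimizer.param_groups: g['lr'] = long_tailed_lr(epoch)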
Result and Analysis As shown in Table 3, AMA attains the best performance except when the imbalance factor is set to 50 or 10 on CIFAR-10-LT. In contrast, SupCon shows the worst performance at high imbalance factors, which indicates that SupCon suffers from inter-class collapse on the long-tailed datasets while AMA learns a balanced margin. For this reason, AMA achieved the highest performance by alleviating inter-class collapse between the tail classes.

4.4 ORIGINAL IMAGE CLASSIFICATION BENCHMARKS
Experimental Setting We conducted image classification experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet. For CIFAR-10, we set the initial learning rate to 0.05 and divided it by two every 30 epochs over a total of 300 epochs for all networks. For CIFAR-100, we used the same hyperparameters as in Section 4.2 for all networks. For Tiny-ImageNet with VGG11 and ResNet50, we used 256 mini-batches, SGD with momentum 0.9 and no weight decay, and 200 epochs; the initial learning rate was 0.1 and was multiplied by 0.9 every 20 epochs. For DenseNet-BC (k = 12) on Tiny-ImageNet, we used 64 mini-batches, SGD with momentum 0.9 and no weight decay, and 300 epochs; the initial learning rate was 0.1 and was divided by ten at epochs 150 and 225.

Result and Analysis As shown in Table 4, AMA achieved performance competitive with, or even higher than, the other representation-augmentation-based models. For VGG11 in particular, AMA retained the highest performance overall. This implies that AMA sustains proper alignment and high uniformity without interfering with representation learning.

4.5 ABLATION STUDY
We conducted an ablation study to clarify the effects of each part of AMA: interpolation, class-unbiased random sampling, and the asymptotic move of augmented features. Table 5 shows the effect of the components. In this experiment, we ran coarse-to-fine transfer on CIFAR-100 and image classification on CIFAR-100-LT (imbalance factor: 100), each with the same settings as before. In coarse-to-fine transfer learning, AMA without CR shows the second-best performance, which implies that the asymptotic move of augmented features is more stable than placing augmented features at the midpoint from the beginning. Class-unbiased random sampling exhibits its impact on the long-tailed dataset: by making the augmented features class-unbiased, the model can learn more balanced margins. Overall, using the two components together yields the best performance, demonstrating their synergy in AMA.

4.6 ANALYSIS WITH MIXUP
In our motivation experiments, we found that the two collapse problems also occur with the input-level data augmentation method Mixup (Zhang et al., 2017). To explore AMA in the data augmentation setting, we applied AMA on top of Mixup and found that AMA helps alleviate the collapses in the long-tailed and coarse-to-fine transfer learning tasks. In this analysis, the experimental settings are the same as in Sections 4.1, 4.2, and 4.3.

Results and Analysis In both experiments, Mixup causes performance degradation overall. However, the combination of AMA and Mixup performs better than Mixup alone and almost recovers the original performance. In other words, feature augmentation helps Mixup alleviate intra-class and inter-class collapse.
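A minimal sketch of the standard input-level Mixup used in this comparison and of one way it could be combined with the AMA loss from the earlier sketch; the combination shown is our illustration, not the paper's exact recipe.

import torch
import torch.nn.functional as F

def mixup_batch(x: torch.Tensor, y: torch.Tensor, mix_alpha: float = 1.0):
    """Input-level Mixup (Zhang et al., 2017): convex combination of inputs, with the
    two original labels returned together with the mixing weight lam."""
    lam = torch.distributions.Beta(mix_alpha, mix_alpha).sample().item()
    idx = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[idx]
    return x_mix, y, y[idx], lam

# Combining Mixup (input space) with the AMA loss (feature space) in one training step,
# reusing ama_loss and ama_alpha from the earlier sketch (our illustration):
#   x_mix, y_a, y_b, lam = mixup_batch(x, y)
#   logits_mix = model(x_mix)
#   ce = lam * F.cross_entropy(logits_mix, y_a) + (1 - lam) * F.cross_entropy(logits_mix, y_b)
#   loss = ce + ama_loss(model(x), y, alpha)   # AMA acts on the clean-batch logits here
#   loss.backward()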
5 RELATED WORK
5.1 AUGMENTATION
Data augmentation has been one of the effective regularization techniques (Zhang et al., 2017; Shorten & Khoshgoftaar, 2019; DeVries & Taylor, 2017; Cubuk et al., 2018; Zhong et al., 2020; Moreno-Barea et al., 2018). Mixup (Zhang et al., 2017), a widely used approach among data augmentations, interpolates each pair of input samples and labels in the input space; with this interpolation, models can improve their inductive bias. In another stream, data augmentation has been applied to features in feature space, called feature augmentation (Verma et al., 2019; Li et al., 2021; Kuo et al., 2020; Lee et al., 2021; Wang et al., 2021). With Manifold Mixup (Verma et al., 2019), models obtain a smoother decision boundary than before, which improves robustness. However, these methods have not focused on the margin, which is an important component for making the decision boundary robust, whereas our proposed method creates augmented features in the feature space and adjusts the augmentation to make the margin balanced and moderately wide.

5.2 CONTRASTIVE LEARNING
Contrastive learning, an example of an approach that focuses on the margin, has achieved state-of-the-art performance in image classification tasks (Chen et al., 2020; He et al., 2020; Caron et al., 2020; Li et al., 2020; Gutmann & Hyvärinen, 2010; Koch et al., 2015; Khosla et al., 2020). Contrastive learning attracts positive samples toward the anchor and repulses negative samples from it. Among supervised approaches, SupCon (Khosla et al., 2020) uses label information to choose positive and negative pairs. SupCon can effectively obtain considerable inter-class uniformity and low intra-class alignment; this property leads to ideal representations with a large margin between classes. In spite of these advantages, SupCon has an unavoidable collapse problem (Jing et al., 2021) because each sample converges toward its class centroid. This collapse makes features indistinguishable from each other and can lead to poor performance in coarse-to-fine transfer learning (Chen et al., 2022). In addition, prior works have noted the relatively low performance of SupCon on long-tailed tasks (Zhu et al., 2022; Li et al., 2022). On long-tailed tasks, SupCon concentrates overwhelmingly on head classes, which encourages collapse between tail classes. To address this problem, BCL (Zhu et al., 2022) used class-average and class-complement terms with the SupCon loss, and TSC (Li et al., 2022) forced class centroids to form a regular simplex on the hypersphere. In contrast, we learn a balanced and moderately broad margin while avoiding collapse by creating augmented features that asymptotically move toward the midpoint.

6 CONCLUSION
In this paper, we raised the two collapse problems for feature augmentation that have recently been discussed in the contrastive learning literature. By analyzing alignment and uniformity, used as indicators of the collapse problems, we found that these problems remain important even for a state-of-the-art feature augmentation method such as Manifold Mixup. To address them, we proposed Asymptotic Midpoint Augmentation, which generates effective features via 1) interpolation of features with pseudo labeling, 2) class-unbiased random sampling of augmented features, and 3) their asymptotic move. The method showed the two effects of margin balancing and moderate margin broadening, and we demonstrated their impact on the collapse problems through quantitative and qualitative analysis of a toy long-tailed classification task.
In the more practical long-tailed and coarse-to-fine transfer learning experiments on the CIFAR-10 and CIFAR-100 datasets, which suffer from inter-class and intra-class collapse respectively, AMA achieved significantly better performance than SupCon and Manifold Mixup. An ablation study and the relation to an input-level data augmentation method, Mixup, were also analyzed to validate the deeper and broader impact of the components. A limitation is that AMA may require additional tuning of the hyperparameter β to obtain the best performance, because the intensity of the collapse problems differs across tasks.

ETHICS STATEMENT
In this paragraph, we address potential concerns below:
• studies that involve human subjects: N/A
• practices for dataset releases: CIFAR-10, CIFAR-100, Tiny-ImageNet, CIFAR-10-LT, CIFAR-100-LT (see Sections 4.2, 4.3, and 4.4)
• potentially harmful insights, methodologies, and applications: N/A
• potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues: N/A

REPRODUCIBILITY STATEMENT
In this paragraph, we summarize the contents needed to reproduce our results.
• Experiment settings
1. A Simple Classification Task on a Long-Tailed Toy Dataset: Section 3.3
2. Coarse-to-Fine Transfer Learning: Section 4.2
3. Image Classification on Long-Tailed Datasets: Section 4.3
4. Image Classification on Classic Datasets: Section 4.4
• Code Description in the Supplementary Material
1. Experimental Details
2. Requirements
3. Training and Evaluation
(a) How to run Coarse-to-Fine Transfer Learning
(b) How to run Image Classification on a Long-Tailed Dataset
(c) How to run Image Classification on a Classic Dataset
4. Reference
1. What is the main contribution of the paper, and how does it address the problem of intra-class and inter-class collapsing? 2. What are the strengths and weaknesses of the proposed method, particularly in terms of its ability to outperform other self-supervised learning methods? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. What are some of the confusing parts of the paper, and how could they be improved for better understanding? 5. How well is the proposed method motivated, and how clear is the intuition behind it? 6. Are there any typos or errors in the paper that need to be addressed?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a new augmentation method called Asymptotic Midpoint Augmentation (AMA). It is proposed to address the intra-class and inter-class collapsing. Experiment results across different datasets and tasks are reported. Strengths And Weaknesses Pros: The authors offer the code for reproducing. Cons: Some parts of it are very confusing. For example, in the abstract and introduction, many concepts lack definitions, which are very unfriendly (e.g. margin in feature space, alignment, uniformity, etc.). Some sentences are very wordy and confusing (e.g. "A similar approach in the perspective of directly repositioning features, contrastive learning (Chen et al. (2020) He et al. (2020)), learns features distant from a decision boundary by getting centroids of classes further away from each other, and gathering positive pairs closer, which decreases intra-class feature distance and increases inter-class feature distance, measured by alignment and uniformity, respectively.") The logic is also blurry. For "The method outperforms other self-supervised learning methods. However, SupCon ... performance on them.", is it talking about the inter-class collapsing or the unbalanced margins? Also, for "we show that feature augmentation also suffers from the problems by analyzing alignment and uniformity", what are "the problems"? "intra-class and inter-class collapsing" or the "unbalanced margins"? The proposed method is not well motivated. There is no intuition of why "the problems" would happen for augmentations and how can the proposed methods solve them. In the "Intra-class collapse" part of Section 2, the statement "More specifically, if a model is pre-trained by coarse-grained labels and then fine-tuned by fine-grained labels, the model would likely not classify fine-grained samples due to the collapsed features." is weird given the context is contrastive learning, which is unsupervised. The settings are very blurry. What are the settings of Orig., Mixup, and Manifold Mixup in Table 1? How can this mysterious Orig. surpass TSC, which is known as a SOTA method, in Table 3 for about 4 settings? In Table 4, are you reporting an error rate? In Section 3.2, what is the meaning of " For this purpose, original features are randomly selected from probabilistic distribution in every mini-batches."? How could this achieve a balanced sampling? In the last paragraph of Section 3.1, the two conclusions are "it is reasonable that the augmentation methods are alleviating the collapse effect." and "The two observations show the possibility of resolving collapses via feature augmentation". How could this lead to "Collapse Problems Are Important in Feature Augmentation"? I thought you are going to show that augmentation methods are suffering from similar problems according to the abstract and introduction. If you are NOT trying to show that they are suffering from similar problems, Why do you want to improve augmentation methods? AMA only introduces balanced sampling and curriculum learning to the previous methods, which is not novel at all. Typos. "some tasks such as coarse-to-fin transfer learning" should be "some tasks such as coarse-to-fine transfer learning". Clarity, Quality, Novelty And Reproducibility Clarity: Bad. (As discussed in cons.) Quality: Bad. (As discussed in cons.) Novelty: Bad. (As discussed in cons.) Reproducibility: Good. (The author releases the code.)
ICLR
Title AMA: Asymptotic Midpoint Augmentation for Margin Balancing and Moderate Broadening Abstract Feature augmentation in neural networks is an effective regularization method to adjust the margin in feature space. However, a similar approach in terms of directly repositioning features, contrastive learning, has reported collapse problems of inter-class and intra-class features. The augmentation approaches are also related to the issues, but have been barely analyzed. In this paper, we show that feature augmentation methods are also affected by the collapse problems and address them by proposing a novel method to generate augmented features gradually approaching the midpoint of inter-class feature pairs, called asymptotic midpoint augmentation (AMA). The method induces two effects: 1) balancing the margin for all classes and 2) only moderately broadening the margin until it holds maximal confidence. We empirically analyze alignment and uniformity to show vulnerability to the problems in a toy task. Then, we validate its impacts in original, long-tailed, and coarse-to-fine transfer tasks on CIFAR-10 and CIFAR-100. To enhance generality, we additionally analyze its relation to a representative input-level augmentation such as Mixup. 1 INTRODUCTION Augmenting features in neural networks has been effective in regularization by handling margin in feature space( Verma et al. (2019)). The approach generates a feature, which indicates a hidden representation of a layer created from an input, and its confidence information from involved original features. A similar approach in the perspective of directly repositioning features, contrastive learning ( Chen et al. (2020) He et al. (2020)), learns features distant from a decision boundary by getting centroids of classes further away from each other, and gathering positive pairs closer, which decreases intra-class feature distance and increases inter-class feature distance, measured by alignment and uniformity, respectively. In the contrastive learning literature, two problems have been recently discussed: collapse of intra-class and inter-class features ( Li et al. (2022) Chen et al. (2022)). The first problem is reported in coarse-to-fine transfer learning( Chen et al. (2022)), where all features are closely located on the centroids of each class as the alignment excessively decreases. The second problem is introduced in Supervised Contrastive learning (SupCon) ( Khosla et al. (2020)), which uses labels to create positive and negative pairs. The method outperforms other self-supervised learning methods. However, SupCon causes unbalanced margins on long-tailed datasets by overwhelming numerical dominance of the head classes, and it decreases the image classification performance on them. Feature augmentation may also be affected by the collapse problems because of direct feature adjustment. However, the issues have not been deeply analyzed. In this paper, we show that feature augmentation also suffers from the problems by analyzing alignment and uniformity, and propose a novel feature augmentation method to generate augmented features gradually approaching a decision boundary, called Asymptotic Midpoint Augmentation (AMA). AMA has three parts: 1) generating a pool of augmented features by interpolating inter-class feature pairs and pseudo labeling, 2) class-unbiased random sampling, and 3) adaptive interpolation ratio control. 
The proposed method creates augmented features to make the margin balanced and moderately broad by asymptotically moving them to the midpoint, as shown in Figure 1. As a result, the method shows higher uniformity than before and sufficiently high alignment. In an experiment on a toy task, we validate the effect of collapses by using alignment and uniformity metrics for AMA and other feature relocation methods such as SupCon( Khosla et al. (2020)) and Manifold Mixup( Verma et al. (2019)). We empirically verify the impact of AMA in comparison with the feature augmentation methods in image classification tasks on long-tailed, coarse-to-fine transfer, and original data sets. Additionally, we also analyze the relation of AMA to a representative input-level augmentation method that enhances the different types of information, Mixup( Zhang et al. (2017)) In summary, our main contributions are four-fold: • we raise the inter-class and intra-class collapse issues in feature augmentation approaches and show their impacts by analyzing alignment and uniformity. • we propose a novel feature augmentation method, asymptotic midpoint augmentation, to address the problems by balancing and moderately broadening the margin in feature space. • we empirically analyze the effects and performance of AMA and other feature augmentation methods in image classification tasks on long-tailed datasets and coarse-to-fine transfer learning, which are sensitive to collapses. • we additionally confirm that it maintains performance in the original dataset to inhere uncertain portion of the problems, compare AMA with a representative input-level augmentation method, and analyze their relation. 2 BACKGROUND Intra-class collapse Contrastive loss leads the features of positive pairs to be closed to invariant on the noise factor. In contrastive learning, the encoder is forced to ensure that similar samples must be placed at a similar location in the feature space. However, the attraction between positive pairs makes features gather at one point. This phenomenon limits the expressiveness of the model, and it is especially critical for some tasks such as coarse-to-fin transfer learning. More specifically, if a model is pre-trained by coarse-grained labels and then fine-tuned by fine-grained labels, the model would likely not classify fine-grained samples due to the collapsed features. Especially, features in the same class are prone to collapse on the centroids of the class in supervised contrastive learning. We called this problem as intra-class collapse. To measure the intra-class collapse, intraclass alignment has been proposed, which represents the closeness of positive pairs ( Wang & Isola (2020) Li et al. (2022)). The intra-class alignment can be measured by following: A = 1 𝐶 ∑𝐶 𝑖=1 1 |F𝑖 |2 ∑ v 𝑗 ,v𝑘 ∈F𝑖 ∥v 𝑗 − v𝑘 ∥2 (1) , where 𝐶 is the number of classes, v is a feature vector, and F𝑖 is the set of features from class 𝑖. ∥·∥2 means L2-norm. Inter-class collapse Common contrastive learning methods achieve high performance thanks to the property that centroids of the class get further away through repulsion between negative samples. However, supervised contrastive learning tends to make collapse between features in different classes when the dataset is imbalanced, such as long-tailed datasets. More specifically, the model naturally concentrates on getting a large distance between head classes to minimize the loss. For this reason, the contrastive loss is not evenly weighted on all classes. 
In this situation, features in tail classes would be collapsed each other. We called this collapse as inter-class collapse, and it prevents the model from learning regular simplex of features, which is a crucial factor when training on imbalanced datasets in contrastive learning. The inter-class collapse can be measured by inter-class and neighborhood uniformity, which are metrics that favor the uniform distribution of representations on the unit hypersphere ( Wang & Isola (2020) Li et al. (2022)). The inter-class uniformity measures the pair-wise distance between different classes, and the neighborhood uniformity inspects the convergence of tail classes. These two kinds of metrics can be measured by following U and U𝑘 , respectively: U = 1 𝐶 (𝐶−1) ∑𝐶 𝑖=1 ∑𝐶 𝑗=1, 𝑗≠𝑖 ∥v̄𝑖 − v̄ 𝑗 ∥2 (2) U𝑘 = 1𝐶𝑘 ∑𝐶 𝑖=1 min𝑗1 , · · · , 𝑗𝑘 (∑𝑘𝑙=1∥v̄𝑖 − v̄ 𝑗𝑙 ∥2) (3) , where v̄𝑖 is the center of samples from class 𝑖 on the hypersphere: v̄𝑖 = ∑ v 𝑗 ∈F𝑖 v 𝑗 ∥∑v 𝑗 ∈F𝑖 v 𝑗 ∥ 2. In this paper, we do not normalize the center of samples by their norm for a fair comparison with the original method and feature augmentation methods, which do not purpose to learning representations on the hypersphere. 3 ASYMPTOTIC MIDPOINT AUGMENTATION In this section, we first present our motivation based on preliminary experiments about alignment and uniformity for augmentation and contrastive learning methods. Then, we introduce asymptotic midpoint augmentations (AMA) and analyze its effects to feature distribution and decision boundaries. 3.1 MOTIVATION Experimental Setting To quantitatively measure the intra-class and inter-class collapses, we inspect intra-class alignment, inter-class uniformity, and top-3 neighborhood uniformity in an image classification task on long-tailed CIFAR-100 where the imbalance factor was set to 100. We analyzed those metrics by Eq. 1, 2, and 3. The thing to note here is that we did not normalize the uniformity by class centers for a fair comparison. The experimental settings here are the same as Section 4.3. Collapse Problems Are Important in Feature Augmentation In Table 1, the evidence of collapses and their unignorable impact are observed. First of all, augmentation methods show higher intra-class alignment than SupCon. Optimal intra-class alignment is uncertain and varies by many factors, but SupCon is known as having excessively low intra-class alignment when intra-class collapse occurs. Therefore, it is reasonable that the augmentation methods are alleviating the collapse effect. According to the background, inter-class collapse reduces inter-class uniformity and neighborhood uniformity, and the augmentation methods gradually get higher values in more recent methods. The two observations show the possibility of resolving collapses via feature augmentation, and the corresponding significant increase in accuracy implies that the impact of the collapses can not be ignored. Additionally, Mixup is a data augmentation method on input space, but it also improves the measures, which shows the difference between the augmentation approach to contrastive learning. We introduce this extended experiment in Section 4.6. 3.2 PROPOSED METHOD Notations Let D = {(x𝑖 , 𝑐𝑖) |1 ≤ 𝑖 ≤ 𝑛, 𝑖 ∈ N} be the set of pairs of an input vector and its label where x𝑖 ∈ R𝑑 and 𝑐𝑖 ∈ 𝐶 for the class index set 𝐶 and the pair index 𝑖. We define y𝑖 = [𝑦1, 𝑦2, ..., 𝑦 |𝐶 |] ∈ R |𝐶 | as one-hot encoding vector for 𝑐𝑖 , where 𝑦𝑐𝑖 = 1. The feature vector of 𝑖-th input sample x𝑖 , is notated as z𝑖 ∈ R |𝐶 | . 
The confidence p comes from 𝜎(z), where 𝜎(·) is a function that normalizes an input vector into a range that leads to probabilistic interpretations, similarly to softmax. In this paper, we used softmax function for 𝜎(·). Θ and Φ represent the parameters of the networks. Interpolation-Based Feature Generation and Pseudo Labeling In AMA, augmented features and labels are created as z (𝑖, 𝑗) = 𝛼 · z𝑖 + (1 − 𝛼) · z 𝑗 𝑐 (𝑖, 𝑗) = { 𝑐𝑖 , if 𝛼 ≥ 0.5 𝑐 𝑗 , if 𝛼 < 0.5 (4) , where z (𝑖, 𝑗) is an augmented feature generated via interpolation of z𝑖 and z 𝑗 selected from different classes, and the pseudo label is 𝑐 (𝑖, 𝑗) . This process occurs in the feature space, and the pseudo labels are determined by controlling a parameter 𝛼 for asymptotically moving them close to the decision boundary. In different with other interpolation-based methods, the labels are definitely determined as one side. Class-Unbiased Random Sampling We consider how to sample original features for interpolation from two different classes to balance pair-wise margins between them. For this purpose, original features are randomly selected from probabilistic distribution in every mini-batches. Let D𝐵 = {(x𝐵,𝑖 , 𝑐𝐵,𝑖) |1 ≤ 𝑖 ≤ 𝑚, 𝑖 ∈ N} be the pairs of input samples and labels in the mini-batch, where the mini-batch size is 𝑚. Then, the probability of selecting (x𝑖 , 𝑐𝑖) from D𝐵 for interpolation is illustrated in Eq. 5: P(x𝐵,𝑖) = 1 𝐶𝐵 · 1 𝑁𝑐𝑖 (5) , where 𝐶𝐵 is the number of classes in the mini-batch and 𝑁𝑐𝑖 is the number of samples of 𝑐𝑖-th class in the mini-batch. This sampling method allows the decision boundary to be placed in the middle of two engaged classes while maximizing the margin. Asymptotic move of Augmented Features Confidence is an important factor in estimating the decision boundary. However, it is unreliable to use the pseudo labels as ground truth in early training because neural networks are prone to predict wrong. To reduce this risk, we propose a scheduler that relies on the training accuracy to update 𝛼 more sensitively, as illustrated in Eq. 6. 𝛼 = 𝑓 (𝑣𝑎𝑐𝑐) = 𝑒−𝛽 ·𝑣𝑎𝑐𝑐 (6) , where 𝑁 is the number of epochs and 𝑣𝑎𝑐𝑐 ∈ [0, 1] means the real value of training accuracy at each epoch. 𝛽 is a hyperparameter to decide how 𝛼 decreases as the training accuracy. We set 𝛽 as 0.67 where 𝛼 exponentially decreased from 1.0 to about 0.5, and empirically figured out the performance consistently shows best when 𝛽 = 0.67 except coarse-to-fine transfer learning environment. Algorithm 1 Example of Applying AMA to Training a Neural Network for Classification Input: model parameter Θ and Φ, cross-entropy loss LCE , AMA loss LAMA , mini-batch size 𝑀 , # mini-batches 𝑁 , balancing parameter 𝛼, learning rate 𝜂 Output: balanced and moderately broad margin 1: D← a set of pairs of input samples and labels 2: 𝑓Θ ← encoder, which parameters are Θ 3: 𝑔Φ ← classifier, which parameters are Φ 4: 𝛼← 1.0 5: for epoch = 1, 2, . . . , 𝑇 do 6: for 𝑖 = 1, 2, . . . , 𝑁 do 7: D𝐵 ← a set of pairs of input samples and labels in the 𝑖-th mini-batch 8: X← {x𝐵,1 ,x𝐵,2 , . . . ,x𝐵,𝑀 } 9: Z← 𝑓Θ (X) 10: Z𝑠 ← a set of original features selected via class-unbiased random sampling by Eq. 5 11: Generate augmented features Z(·,·) and labels c(·,·) from Z𝑠 by Eq. 4 12: LCE ← cross-entropy loss from Z by Eq. 7 13: LAMA ← AMA loss from Z(·,·) by Eq. 8 14: L ← LCE + LAMA 15: Θ← Θ − 𝜂∇ΘL 16: Φ← Φ − 𝜂∇ΦL 17: Update 𝛼 by Eq. 
6 18: end for 19: end for Training Loss for Augmented Features AMA uses cross-entropy for the augmented features as original features and integrated with original cross-entropy loss as follows. LCE = ∑︁ z∈Z 𝐶∑︁ 𝑘=1 −𝑦𝑘 log 𝑝𝑘 , where p = 𝜎(z) (7) LAMA = ∑︁ z (𝑖, 𝑗) ∈Z(·,·) 𝐶∑︁ 𝑘=1 −𝑦 (𝑖, 𝑗) 𝑘 log 𝑝 (𝑖, 𝑗) 𝑘 , where p(𝑖, 𝑗) = 𝜎(z (𝑖, 𝑗) ) (8) where Z and Z(𝑖, 𝑗) are the set of features and selected augmented features, respectively, and 𝑝𝑘 is the probability for the 𝑘-th class. An example of integration with a usual classification is shown in Algorithm 1. 3.3 EFFECT ANALYSIS We explain the margin-balancing and moderate margin-broadening effects of AMA and empirically figure out the effects of a simple classification task on a long-tailed toy dataset via qualitative and quantitative analysis. Margin Balancing AMA forces a decision boundary to locate near the midpoint of inter-class features, because the optimum of AMA loss is obtained when the boundary passes the midpoint for the following reasons: 1) class-unbiased random sampling selects the same number of augmented features for every class, 2) the expected distance of two augmented features to their midpoint is equal, and 3) the sum of their confidences determined by the distance 𝑑 is 2𝜎(0.5 + 𝑑) that has the maximum at the midpoint (𝑑 = 0). Using the guidance to the midpoint repeatedly over many updates, the asymptotic move of the augmented features toward the midpoint reduces the possibility of locating the boundary at the intermediate points between the original and augmented features. Because of this convergence to midpoint by AMA loss, its mixture with other losses is still adjusted to balance margin. Moderate Margin Broadening AMA broadens margin than original networks. Generally, loss to maximize confidence increases margin in a simple relation of a feature and a decision boundary. AMA adds the gradient of augmented features to the guidance in the same direction because the features are interpolations of original features and have the same label. On the other side, the original features stop being further away from the boundary after obtaining maximal confidence. Because of nearly zero gradients at the state, the distance of intra-class features to their centroids is moderately preserved without excessive converging pressure. Experimental Setting We randomly generated [1000, 500, 100, 10] training samples and [200, 200, 200, 200] test samples around (-3, 3), (3, 3), (3, -3), and (-3, -3) for four different classes in R2, respectively. All points were randomly sampled from the Gaussian distribution, where mean 𝐷𝑚𝑎𝑥 .) and variance are set to 0 and 1, respectively. We used a 4-layer neural network, which has 128-64-2 hidden units in each layer for baselines and AMA. We set the optimizer as SGD at the momentum of 0.9 and weight decay of 5e-4, and the initial learning rate as 0.1. We used 16 mini-batches, and the total number of epochs was 100. In SupCon, we used the first three layers as an encoder and trained the encoder while maintaining the same settings except for epochs set to 600. Then, the last hidden layer was used as a classifier to predict labels with the same settings. To compare the margin, we visualized feature vectors of input samples as points and their confidences as a heat map on 2-dimensional space. Moreover, we analyzed various distances to quantitatively compare how they affect the margin. Result and Analysis In Figure 2a, AMA learns more balanced margin than the original and SupCon methods. 
It is shown by the critically narrow area for tail classes (label 2 and 3) compared to the area for head classes (label 0 and 1). Especially, SupCon assigns an extremely large area to the head classes while AMA maintains a relatively similar distance from all boundaries. To investigate the effect of moderate margin-broadening, we quantitatively analyze original, SupCon, and AMA, as shown in Figure 2b. 𝐷𝑟𝑒𝑙𝑎𝑡𝑖𝑣𝑒 indicates the relative margin of inter-class features compared to the total size of feature distribution. AMA shows the best 𝐷𝑟𝑒𝑙𝑎𝑡𝑖𝑣𝑒, which is helpful in increasing inter-class uniformity and neighborhood uniformity while maintaining low 𝐷𝑚𝑎𝑥 . SupCon improves inter-class uniformity by increasing 𝐷𝑐𝑒𝑛𝑡𝑟𝑜𝑖𝑑 , but 𝐷𝑚𝑎𝑥 increases more than about 7× of AMA. The observation implies that AMA only moderately broadens the margin without an excessive expansion of feature distribution as SupCon. 4 EXPERIMENTS We selected two methods as baselines to compare with AMA. SupCon shows our target problem well, and Manifold Mixup is a representative method of feature augmentation. In the followings, all experiments have been run on three different random seeds, and their performances are represented as the mean mean and standard deviation std. In AMA, 𝛽 was set to 0.67 as default, and we only annotate when it has a different value. 4.1 COMMON SETTINGS We conducted experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, which are generally used in image classification benchmarks. Also, we used VGG11( Simonyan & Zisserman (2014)), ResNet32, ResNet50( He et al. (2016)) and DenseNet-BC with 12 growth rate( Huang et al. (2017)). SupCon and Manifold Mixup used the same environmental settings with the following explanation for each task. In Manifold Mixup, we interpolated features only right before the classifier for a fair comparison. 4.2 COARSE-TO-FINE TRANSFER LEARNING TASK Experimental Setting We conducted coarse-to-fine transfer learning on CIFAR-10 and CIFAR-100. We first trained the ResNet50 with a coarse-grained dataset and fine-tuned the linear classifier with a fine-grained dataset. We used 128 minibatches and the SGD at the momentum of 0.9 and weight decay of 5e-4. For CIFAR-100, we set the initial learning rate as 0.1 and divided it by five at the 60th, 120th, and 160th epochs, where the total number of epochs is 200. We composed the coarse-grained dataset by splitting the original dataset into a super-class of them. The fine-grained dataset is the same as the original dataset. For CIFAR-10, we followed the hyperparameter and coarse-to-fine dataset settings in Chen et al. (2022). Result and Analysis As shown in Table 2, AMA achieved the second-best test accuracy, while SupCon suffers intra-class collapse noticed by low accuracy. In a similar context, Manifold Mixup and AMA also have intra-class collapse by showing lower accuracy than the original method. However, AMA achieves better than SupCon and Manifold Mixup, and it means that AMA alleviates intra-class collapse in coarse-to-fine transfer learning. 4.3 LONG-TAILED TASK Experimental Setting We used ResNet32, 256 mini-batches, the SGD at the momentum of 0.9 and weight decay of 5e-4, and the number of epochs is 400. We set the initial learning rate as 0.0 and warmed up for ten epochs by 0.015. After that, we divided the learning rate by ten at 360th and 380th epochs. The more specific settings are illustrated in Cui et al. (2021). 
Result and Analysis As shown in Table 3, AMA attains the best performance except for the imbalance factor set as 50 and 10 in CIFAR-10-LT. Whereas, SupCon shows the worst performance in a high imbalance factor, which means SupCon has inter-class collapse in the long-tailed datasets while AMA learns balanced margin. For this reason, AMA achieved the highest performance by alleviating inter-class collapse between tail classes. 4.4 ORIGINAL IMAGE CLASSIFICATION BENCHMARKS Experimental Setting We conducted image classification experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet. For CIFAR-10, we set the initial learning rate as 0.05 and divided the learning rate by two at every 30 epochs among the total of 300 epochs for all networks. For CIFAR-100, we used the hyperparameter same as Section 4.2 for all networks. For Tiny-ImageNet on VGG11 and ResNet50, we used 256 mini-batches, the SGD at a momentum of 0.9 without weight decay, and the number of epochs is 200. We set the initial learning rate as 0.1 and multiplied it by 0.9 at every 20 epochs. For DenseNet-BC (𝑘 = 12) on Tiny-ImageNet, we used 64 mini-batches, the SGD at a momentum of 0.9 without weight decay, and the number of epochs is 300. We set the initial learning rate as 0.1 and divided it by ten at 150 and 225 epochs. Result and Analysis As shown in Table 4, AMA achieved competitive or even high performance with other representation augmentation based-models. Specifically for VGG11, AMA retained the highest performance overall. It implies AMA sustains proper alignment and high uniformity without interruption for representation learning. 4.5 ABLATION STUDY We conducted the ablation study to clarify the effects of all parts in AMA: interpolation, classunbiased random sampling and asymptotic move of augmented features. Table 5 shows the effect of components in AMA. In this experiment, we did experiments in coarse-to-fine transfer on CIFAR-100 and in the image classification on CIFAR-100-LT (imbalance factor: 100) with the same settings each. In coarse-to-fine transfer learning, AMA without CR shows the second-best performance. It implies the asymptotic move of augmented features is more stable than simply locating augmented features at the midpoint since the beginning. Class-unbiased random sampling exhibits its impact in the long-tailed dataset. By mitigating unbiased augmented features, the model could learn more balanced margins. Overall, using these two components together shows the best performance proving their synergy in AMA. 4.6 ANALYSIS WITH MIXUP In our motivation experiments, we found that two collapse problems also occur in the data augmentation method as Mixup( Zhang et al. (2017)). For the exploration of AMA to data augmentation approach, we first apply AMA to Mixup and figured out that AMA is helpful to alleviate the collapses in longtailed and coarse-to-fine transfer learning tasks. In this analysis, experimental settings are the same as Sections 4.1, 4.2, and 4.3. Results and Analysis In both experiments, Mixup causes performance degradation overall. However, the mixture of AMA and Mixup shows better performance than using only Mixup and almost recovers the original performance. As a result, feature augmentation helps Mixup alleviate intraclass and inter-class collapses. 5 RELATED WORK 5.1 AUGMENTATION Data augmentation has been one of the effective regularization techniques( Zhang et al. (2017) Shorten & Khoshgoftaar (2019) DeVries & Taylor (2017) Cubuk et al. (2018) Zhong et al. (2020) Moreno-Barea et al. (2018)). 
Mixup( Zhang et al. (2017)), a generally used approach among data augmentations, interpolates each pair of input samples and labels in the input space. Using this interpolation, it is possible for models to improve their inductive bias. In other streams, data augmentation has been applied to features in feature space, called feature augmentation Verma et al. (2019) Li et al. (2021) Kuo et al. (2020) Lee et al. (2021) Wang et al. (2021)). In Manifold Mixup( Verma et al. (2019)), models get a smoother decision boundary than before, and it results in the improvement of robustness. However, they have not focused on margin, which is an important component to make decision boundary robust, while our proposed method creates augmented features in the feature space and adjusts the augmentation to make the margin balanced and moderately wide. 5.2 CONTRASTIVE LEARNING Contrastive learning achieved state-of-the-art performance in image classification tasks, which is an example of focusing on the margin( Chen et al. (2020) He et al. (2020) Caron et al. (2020) Li et al. (2020) Gutmann & Hyvärinen (2010) Koch et al. (2015) Khosla et al. (2020)). Contrastive learning attracts positive samples and repulses negative samples from the anchor. In supervised approaches, SupCon( Khosla et al. (2020)) uses label information to choose positive pairs and negative pairs. SupCon can effectively get considerable uniformity between inter-class and minor alignment between intra-class. This property leads to ideal representations, which have a large margin between other classes. In spite of these advantages, Supcon has an unavoidable problem of collapse( Jing et al. (2021)) because each sample converged toward the class centroid. This collapse makes features indistinguishable from each other and can lead to poor performance in coarse-to-fine transfer learning( Chen et al. (2022)). In addition, prior works have focused on relatively low performance in long-tailed tasks when using SupCon( Zhu et al. (2022) Li et al. (2022)). In the long-tailed tasks, SupCon leads to overwhelming concentration on head classes, and it encourages the collapse between tail classes. To solve this problem, BCL( Zhu et al. (2022)) used class-average and classcomplement with SupCon loss and TSC( Li et al. (2022)) forced class centroids to form a regular simplex on the hypersphere. In contrast, we learn balanced and moderately broad margin while avoiding collapse by creating augmented features as asymptotically moving to the midpoint. 6 CONCLUSION In this paper, we raised the two collapse problems of feature augmentation, which are recently discussed in contrastive learning literature. We found that the problems were still important in state-of-the-art feature augmentation method as Manifold Mixup by analyzing alignment and uniformity used as indicators of the collapse problems. To address the collapse problems, we proposed Asymptotic Midpoint Augmentation to generate effective features via 1) interpolation of features with pseudo labeling, 2) class-unbiased random sampling of augmented features, and 3) their asymptotic move. The method showed the two effects of margin balancing and moderate-broadening, and their impact on the collapse problems in quantitative and qualitative analysis of a toy long-tailed classification task. 
In more practical long-tailed and coarse-to-fine transfer learning experiments on the CIFAR-10 and CIFAR-100 datasets, which suffer from inter-class and intra-class collapse respectively, AMA significantly improved performance compared to SupCon and Manifold Mixup. An ablation study and the relation to an input-level data augmentation method such as Mixup were also analyzed to validate its deeper and broader impact. A limitation is that AMA may require additional tuning of the hyperparameter 𝛽 to obtain the best performance, because the intensity of the collapse problems differs across tasks. ETHICS STATEMENT In this paragraph, we address potential concerns below: • studies that involve human subjects: N/A • practices for dataset releases: CIFAR-10, CIFAR-100, Tiny-ImageNet, CIFAR-10-LT, CIFAR-100-LT (see Sections 4.2, 4.3, and 4.4) • potentially harmful insights, methodologies, and applications: N/A • potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues: N/A REPRODUCIBILITY STATEMENT In this paragraph, we summarize the information needed to reproduce our results. • Experiment settings 1. A Simple Classification Task on a Long-Tailed Toy Dataset: Section 3.3 2. Coarse-to-Fine Transfer Learning: Section 4.2 3. Image Classification on Long-Tailed Datasets: Section 4.3 4. Image Classification on Classic Datasets: Section 4.4 • Code Description in Supplementary Material 1. Experimental Details 2. Requirements 3. Training and Evaluation (a) How to run Coarse-to-Fine Transfer Learning (b) How to run Image Classification on Long-Tailed Datasets (c) How to run Image Classification on Classic Datasets 4. Reference
1. What is the focus and contribution of the paper on long-tailed classification? 2. What are the strengths of the proposed approach, particularly in addressing the "alignment" and "uniformity" issues? 3. What are the weaknesses of the paper regarding its experimental scope and comparisons with other works? 4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 5. Are there any concerns or suggestions regarding the paper's methodology or conclusions?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper proposes a method AMA that generates augmented features gradually approaching the midpoint of inter-class feature pairs. AMA induces two effects: balancing the margin for all classes and moderately broadening the margin until holds maximal confidence. The experiments show the proposed method can achieve performance gain on long-tailed and coarse-to-fine transfer tasks. Strengths And Weaknesses Strength: The paper is well-written and easy-to-follow. The paper considers an interesting problem on long-tailed classification task: the “alignment” and “uniformity” of feature representation. The “alignment” and “uniformity” have been explored from the perspective of self-supervised learning. The authors introduce the idea into long-tailed learning field, which may bring some insight to subsequent researches. Weakness: In fact, there are some long-tailed works that generate augmented features based on Mixup related methods, such as [1]. The related works should be added and discussed. The authors only validate the effectiveness of the proposed method on CIFAR10 and CIFAR100 datasets. The two datasets are small. Larger datasets should be involved in the experiments to show the universality and effectiveness of the proposed method. As shown in Table 3, the authors only compare the proposed method to SupCon, Mixup and TSC. The recent long-tailed works are missed, such as [2][3]. More importantly, the results of the proposed method are worse than the results in [2][3]. The authors should compare with them. Many recent related works are missing from the reference list. [1] Feature Space Augmentation for Long-Tailed Data, in ECCV 2020. [2] Parametric Contrastive Learning, in ICCV 2021. [3] Nested Collaborative Learning for Long-Tailed Visual Recognition, in CVPR 2022. Clarity, Quality, Novelty And Reproducibility This paper is clearly presented and the overall quality is good. The proposed method is marginally novel and seems reproducible.
ICLR
Title AMA: Asymptotic Midpoint Augmentation for Margin Balancing and Moderate Broadening Abstract Feature augmentation in neural networks is an effective regularization method to adjust the margin in feature space. However, a similar approach in terms of directly repositioning features, contrastive learning, has reported collapse problems of inter-class and intra-class features. The augmentation approaches are also related to the issues, but have been barely analyzed. In this paper, we show that feature augmentation methods are also affected by the collapse problems and address them by proposing a novel method to generate augmented features gradually approaching the midpoint of inter-class feature pairs, called asymptotic midpoint augmentation (AMA). The method induces two effects: 1) balancing the margin for all classes and 2) only moderately broadening the margin until it holds maximal confidence. We empirically analyze alignment and uniformity to show vulnerability to the problems in a toy task. Then, we validate its impacts in original, long-tailed, and coarse-to-fine transfer tasks on CIFAR-10 and CIFAR-100. To enhance generality, we additionally analyze its relation to a representative input-level augmentation such as Mixup. 1 INTRODUCTION Augmenting features in neural networks has been effective in regularization by handling margin in feature space( Verma et al. (2019)). The approach generates a feature, which indicates a hidden representation of a layer created from an input, and its confidence information from involved original features. A similar approach in the perspective of directly repositioning features, contrastive learning ( Chen et al. (2020) He et al. (2020)), learns features distant from a decision boundary by getting centroids of classes further away from each other, and gathering positive pairs closer, which decreases intra-class feature distance and increases inter-class feature distance, measured by alignment and uniformity, respectively. In the contrastive learning literature, two problems have been recently discussed: collapse of intra-class and inter-class features ( Li et al. (2022) Chen et al. (2022)). The first problem is reported in coarse-to-fine transfer learning( Chen et al. (2022)), where all features are closely located on the centroids of each class as the alignment excessively decreases. The second problem is introduced in Supervised Contrastive learning (SupCon) ( Khosla et al. (2020)), which uses labels to create positive and negative pairs. The method outperforms other self-supervised learning methods. However, SupCon causes unbalanced margins on long-tailed datasets by overwhelming numerical dominance of the head classes, and it decreases the image classification performance on them. Feature augmentation may also be affected by the collapse problems because of direct feature adjustment. However, the issues have not been deeply analyzed. In this paper, we show that feature augmentation also suffers from the problems by analyzing alignment and uniformity, and propose a novel feature augmentation method to generate augmented features gradually approaching a decision boundary, called Asymptotic Midpoint Augmentation (AMA). AMA has three parts: 1) generating a pool of augmented features by interpolating inter-class feature pairs and pseudo labeling, 2) class-unbiased random sampling, and 3) adaptive interpolation ratio control. 
The proposed method creates augmented features to make the margin balanced and moderately broad by asymptotically moving them to the midpoint, as shown in Figure 1. As a result, the method shows higher uniformity than before and sufficiently high alignment. In an experiment on a toy task, we validate the effect of collapses by using alignment and uniformity metrics for AMA and other feature relocation methods such as SupCon( Khosla et al. (2020)) and Manifold Mixup( Verma et al. (2019)). We empirically verify the impact of AMA in comparison with the feature augmentation methods in image classification tasks on long-tailed, coarse-to-fine transfer, and original data sets. Additionally, we also analyze the relation of AMA to a representative input-level augmentation method that enhances the different types of information, Mixup( Zhang et al. (2017)) In summary, our main contributions are four-fold: • we raise the inter-class and intra-class collapse issues in feature augmentation approaches and show their impacts by analyzing alignment and uniformity. • we propose a novel feature augmentation method, asymptotic midpoint augmentation, to address the problems by balancing and moderately broadening the margin in feature space. • we empirically analyze the effects and performance of AMA and other feature augmentation methods in image classification tasks on long-tailed datasets and coarse-to-fine transfer learning, which are sensitive to collapses. • we additionally confirm that it maintains performance in the original dataset to inhere uncertain portion of the problems, compare AMA with a representative input-level augmentation method, and analyze their relation. 2 BACKGROUND Intra-class collapse Contrastive loss leads the features of positive pairs to be closed to invariant on the noise factor. In contrastive learning, the encoder is forced to ensure that similar samples must be placed at a similar location in the feature space. However, the attraction between positive pairs makes features gather at one point. This phenomenon limits the expressiveness of the model, and it is especially critical for some tasks such as coarse-to-fin transfer learning. More specifically, if a model is pre-trained by coarse-grained labels and then fine-tuned by fine-grained labels, the model would likely not classify fine-grained samples due to the collapsed features. Especially, features in the same class are prone to collapse on the centroids of the class in supervised contrastive learning. We called this problem as intra-class collapse. To measure the intra-class collapse, intraclass alignment has been proposed, which represents the closeness of positive pairs ( Wang & Isola (2020) Li et al. (2022)). The intra-class alignment can be measured by following: A = 1 𝐶 ∑𝐶 𝑖=1 1 |F𝑖 |2 ∑ v 𝑗 ,v𝑘 ∈F𝑖 ∥v 𝑗 − v𝑘 ∥2 (1) , where 𝐶 is the number of classes, v is a feature vector, and F𝑖 is the set of features from class 𝑖. ∥·∥2 means L2-norm. Inter-class collapse Common contrastive learning methods achieve high performance thanks to the property that centroids of the class get further away through repulsion between negative samples. However, supervised contrastive learning tends to make collapse between features in different classes when the dataset is imbalanced, such as long-tailed datasets. More specifically, the model naturally concentrates on getting a large distance between head classes to minimize the loss. For this reason, the contrastive loss is not evenly weighted on all classes. 
In this situation, features in tail classes would be collapsed each other. We called this collapse as inter-class collapse, and it prevents the model from learning regular simplex of features, which is a crucial factor when training on imbalanced datasets in contrastive learning. The inter-class collapse can be measured by inter-class and neighborhood uniformity, which are metrics that favor the uniform distribution of representations on the unit hypersphere ( Wang & Isola (2020) Li et al. (2022)). The inter-class uniformity measures the pair-wise distance between different classes, and the neighborhood uniformity inspects the convergence of tail classes. These two kinds of metrics can be measured by following U and U𝑘 , respectively: U = 1 𝐶 (𝐶−1) ∑𝐶 𝑖=1 ∑𝐶 𝑗=1, 𝑗≠𝑖 ∥v̄𝑖 − v̄ 𝑗 ∥2 (2) U𝑘 = 1𝐶𝑘 ∑𝐶 𝑖=1 min𝑗1 , · · · , 𝑗𝑘 (∑𝑘𝑙=1∥v̄𝑖 − v̄ 𝑗𝑙 ∥2) (3) , where v̄𝑖 is the center of samples from class 𝑖 on the hypersphere: v̄𝑖 = ∑ v 𝑗 ∈F𝑖 v 𝑗 ∥∑v 𝑗 ∈F𝑖 v 𝑗 ∥ 2. In this paper, we do not normalize the center of samples by their norm for a fair comparison with the original method and feature augmentation methods, which do not purpose to learning representations on the hypersphere. 3 ASYMPTOTIC MIDPOINT AUGMENTATION In this section, we first present our motivation based on preliminary experiments about alignment and uniformity for augmentation and contrastive learning methods. Then, we introduce asymptotic midpoint augmentations (AMA) and analyze its effects to feature distribution and decision boundaries. 3.1 MOTIVATION Experimental Setting To quantitatively measure the intra-class and inter-class collapses, we inspect intra-class alignment, inter-class uniformity, and top-3 neighborhood uniformity in an image classification task on long-tailed CIFAR-100 where the imbalance factor was set to 100. We analyzed those metrics by Eq. 1, 2, and 3. The thing to note here is that we did not normalize the uniformity by class centers for a fair comparison. The experimental settings here are the same as Section 4.3. Collapse Problems Are Important in Feature Augmentation In Table 1, the evidence of collapses and their unignorable impact are observed. First of all, augmentation methods show higher intra-class alignment than SupCon. Optimal intra-class alignment is uncertain and varies by many factors, but SupCon is known as having excessively low intra-class alignment when intra-class collapse occurs. Therefore, it is reasonable that the augmentation methods are alleviating the collapse effect. According to the background, inter-class collapse reduces inter-class uniformity and neighborhood uniformity, and the augmentation methods gradually get higher values in more recent methods. The two observations show the possibility of resolving collapses via feature augmentation, and the corresponding significant increase in accuracy implies that the impact of the collapses can not be ignored. Additionally, Mixup is a data augmentation method on input space, but it also improves the measures, which shows the difference between the augmentation approach to contrastive learning. We introduce this extended experiment in Section 4.6. 3.2 PROPOSED METHOD Notations Let D = {(x𝑖 , 𝑐𝑖) |1 ≤ 𝑖 ≤ 𝑛, 𝑖 ∈ N} be the set of pairs of an input vector and its label where x𝑖 ∈ R𝑑 and 𝑐𝑖 ∈ 𝐶 for the class index set 𝐶 and the pair index 𝑖. We define y𝑖 = [𝑦1, 𝑦2, ..., 𝑦 |𝐶 |] ∈ R |𝐶 | as one-hot encoding vector for 𝑐𝑖 , where 𝑦𝑐𝑖 = 1. The feature vector of 𝑖-th input sample x𝑖 , is notated as z𝑖 ∈ R |𝐶 | . 
The confidence p comes from 𝜎(z), where 𝜎(·) is a function that normalizes an input vector into a range that leads to probabilistic interpretations, similarly to softmax. In this paper, we used softmax function for 𝜎(·). Θ and Φ represent the parameters of the networks. Interpolation-Based Feature Generation and Pseudo Labeling In AMA, augmented features and labels are created as z (𝑖, 𝑗) = 𝛼 · z𝑖 + (1 − 𝛼) · z 𝑗 𝑐 (𝑖, 𝑗) = { 𝑐𝑖 , if 𝛼 ≥ 0.5 𝑐 𝑗 , if 𝛼 < 0.5 (4) , where z (𝑖, 𝑗) is an augmented feature generated via interpolation of z𝑖 and z 𝑗 selected from different classes, and the pseudo label is 𝑐 (𝑖, 𝑗) . This process occurs in the feature space, and the pseudo labels are determined by controlling a parameter 𝛼 for asymptotically moving them close to the decision boundary. In different with other interpolation-based methods, the labels are definitely determined as one side. Class-Unbiased Random Sampling We consider how to sample original features for interpolation from two different classes to balance pair-wise margins between them. For this purpose, original features are randomly selected from probabilistic distribution in every mini-batches. Let D𝐵 = {(x𝐵,𝑖 , 𝑐𝐵,𝑖) |1 ≤ 𝑖 ≤ 𝑚, 𝑖 ∈ N} be the pairs of input samples and labels in the mini-batch, where the mini-batch size is 𝑚. Then, the probability of selecting (x𝑖 , 𝑐𝑖) from D𝐵 for interpolation is illustrated in Eq. 5: P(x𝐵,𝑖) = 1 𝐶𝐵 · 1 𝑁𝑐𝑖 (5) , where 𝐶𝐵 is the number of classes in the mini-batch and 𝑁𝑐𝑖 is the number of samples of 𝑐𝑖-th class in the mini-batch. This sampling method allows the decision boundary to be placed in the middle of two engaged classes while maximizing the margin. Asymptotic move of Augmented Features Confidence is an important factor in estimating the decision boundary. However, it is unreliable to use the pseudo labels as ground truth in early training because neural networks are prone to predict wrong. To reduce this risk, we propose a scheduler that relies on the training accuracy to update 𝛼 more sensitively, as illustrated in Eq. 6. 𝛼 = 𝑓 (𝑣𝑎𝑐𝑐) = 𝑒−𝛽 ·𝑣𝑎𝑐𝑐 (6) , where 𝑁 is the number of epochs and 𝑣𝑎𝑐𝑐 ∈ [0, 1] means the real value of training accuracy at each epoch. 𝛽 is a hyperparameter to decide how 𝛼 decreases as the training accuracy. We set 𝛽 as 0.67 where 𝛼 exponentially decreased from 1.0 to about 0.5, and empirically figured out the performance consistently shows best when 𝛽 = 0.67 except coarse-to-fine transfer learning environment. Algorithm 1 Example of Applying AMA to Training a Neural Network for Classification Input: model parameter Θ and Φ, cross-entropy loss LCE , AMA loss LAMA , mini-batch size 𝑀 , # mini-batches 𝑁 , balancing parameter 𝛼, learning rate 𝜂 Output: balanced and moderately broad margin 1: D← a set of pairs of input samples and labels 2: 𝑓Θ ← encoder, which parameters are Θ 3: 𝑔Φ ← classifier, which parameters are Φ 4: 𝛼← 1.0 5: for epoch = 1, 2, . . . , 𝑇 do 6: for 𝑖 = 1, 2, . . . , 𝑁 do 7: D𝐵 ← a set of pairs of input samples and labels in the 𝑖-th mini-batch 8: X← {x𝐵,1 ,x𝐵,2 , . . . ,x𝐵,𝑀 } 9: Z← 𝑓Θ (X) 10: Z𝑠 ← a set of original features selected via class-unbiased random sampling by Eq. 5 11: Generate augmented features Z(·,·) and labels c(·,·) from Z𝑠 by Eq. 4 12: LCE ← cross-entropy loss from Z by Eq. 7 13: LAMA ← AMA loss from Z(·,·) by Eq. 8 14: L ← LCE + LAMA 15: Θ← Θ − 𝜂∇ΘL 16: Φ← Φ − 𝜂∇ΦL 17: Update 𝛼 by Eq. 
6 18: end for 19: end for Training Loss for Augmented Features AMA uses cross-entropy for the augmented features as original features and integrated with original cross-entropy loss as follows. LCE = ∑︁ z∈Z 𝐶∑︁ 𝑘=1 −𝑦𝑘 log 𝑝𝑘 , where p = 𝜎(z) (7) LAMA = ∑︁ z (𝑖, 𝑗) ∈Z(·,·) 𝐶∑︁ 𝑘=1 −𝑦 (𝑖, 𝑗) 𝑘 log 𝑝 (𝑖, 𝑗) 𝑘 , where p(𝑖, 𝑗) = 𝜎(z (𝑖, 𝑗) ) (8) where Z and Z(𝑖, 𝑗) are the set of features and selected augmented features, respectively, and 𝑝𝑘 is the probability for the 𝑘-th class. An example of integration with a usual classification is shown in Algorithm 1. 3.3 EFFECT ANALYSIS We explain the margin-balancing and moderate margin-broadening effects of AMA and empirically figure out the effects of a simple classification task on a long-tailed toy dataset via qualitative and quantitative analysis. Margin Balancing AMA forces a decision boundary to locate near the midpoint of inter-class features, because the optimum of AMA loss is obtained when the boundary passes the midpoint for the following reasons: 1) class-unbiased random sampling selects the same number of augmented features for every class, 2) the expected distance of two augmented features to their midpoint is equal, and 3) the sum of their confidences determined by the distance 𝑑 is 2𝜎(0.5 + 𝑑) that has the maximum at the midpoint (𝑑 = 0). Using the guidance to the midpoint repeatedly over many updates, the asymptotic move of the augmented features toward the midpoint reduces the possibility of locating the boundary at the intermediate points between the original and augmented features. Because of this convergence to midpoint by AMA loss, its mixture with other losses is still adjusted to balance margin. Moderate Margin Broadening AMA broadens margin than original networks. Generally, loss to maximize confidence increases margin in a simple relation of a feature and a decision boundary. AMA adds the gradient of augmented features to the guidance in the same direction because the features are interpolations of original features and have the same label. On the other side, the original features stop being further away from the boundary after obtaining maximal confidence. Because of nearly zero gradients at the state, the distance of intra-class features to their centroids is moderately preserved without excessive converging pressure. Experimental Setting We randomly generated [1000, 500, 100, 10] training samples and [200, 200, 200, 200] test samples around (-3, 3), (3, 3), (3, -3), and (-3, -3) for four different classes in R2, respectively. All points were randomly sampled from the Gaussian distribution, where mean 𝐷𝑚𝑎𝑥 .) and variance are set to 0 and 1, respectively. We used a 4-layer neural network, which has 128-64-2 hidden units in each layer for baselines and AMA. We set the optimizer as SGD at the momentum of 0.9 and weight decay of 5e-4, and the initial learning rate as 0.1. We used 16 mini-batches, and the total number of epochs was 100. In SupCon, we used the first three layers as an encoder and trained the encoder while maintaining the same settings except for epochs set to 600. Then, the last hidden layer was used as a classifier to predict labels with the same settings. To compare the margin, we visualized feature vectors of input samples as points and their confidences as a heat map on 2-dimensional space. Moreover, we analyzed various distances to quantitatively compare how they affect the margin. Result and Analysis In Figure 2a, AMA learns more balanced margin than the original and SupCon methods. 
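Before turning to the toy-task results in detail, the following PyTorch-style sketch makes Eqs. 4–8 and Algorithm 1 concrete. It is a minimal sketch under two assumptions: the interpolated features live in the |C|-dimensional space to which σ (softmax) is applied, and only inter-class pairs are kept after sampling. All names here (class_unbiased_pairs, ama_loss, update_alpha, n_pairs) are hypothetical and are not taken from the authors' implementation.

```python
import math
import torch
import torch.nn.functional as F


def class_unbiased_pairs(labels: torch.Tensor, n_pairs: int):
    """Draw index pairs with per-sample probability 1 / (C_B * N_c) as in Eq. 5,
    keeping only pairs whose two members belong to different classes."""
    classes = labels.unique()
    weights = torch.zeros(labels.numel(), dtype=torch.float)
    for c in classes:
        mask = labels == c
        weights[mask] = 1.0 / (len(classes) * mask.sum().item())
    idx_i = torch.multinomial(weights, n_pairs, replacement=True)
    idx_j = torch.multinomial(weights, n_pairs, replacement=True)
    keep = labels[idx_i] != labels[idx_j]
    return idx_i[keep], idx_j[keep]


def ama_loss(z: torch.Tensor, labels: torch.Tensor, alpha: float, n_pairs: int = 64):
    """Interpolate inter-class feature pairs, pseudo-label them (Eq. 4), and apply
    cross-entropy to the augmented features (Eq. 8)."""
    idx_i, idx_j = class_unbiased_pairs(labels, n_pairs)
    if idx_i.numel() == 0:
        return z.new_zeros(())
    z_aug = alpha * z[idx_i] + (1.0 - alpha) * z[idx_j]
    pseudo = labels[idx_i] if alpha >= 0.5 else labels[idx_j]
    return F.cross_entropy(z_aug, pseudo)


def update_alpha(train_acc: float, beta: float = 0.67) -> float:
    """Asymptotic move of augmented features (Eq. 6): alpha = exp(-beta * acc)."""
    return math.exp(-beta * train_acc)


# Hypothetical usage inside a training loop, with encoder f and classifier g:
#   z = g(f(x))                                            # z_i in R^{|C|}
#   loss = F.cross_entropy(z, y) + ama_loss(z, y, alpha)   # Eq. 7 + Eq. 8
#   loss.backward(); optimizer.step()
#   alpha = update_alpha(current_train_accuracy)
```

With 𝛽 = 0.67, update_alpha decays α from 1.0 at zero training accuracy to roughly 0.51 at perfect accuracy, matching the range stated in Section 3.2.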
It is shown by the critically narrow area for tail classes (label 2 and 3) compared to the area for head classes (label 0 and 1). Especially, SupCon assigns an extremely large area to the head classes while AMA maintains a relatively similar distance from all boundaries. To investigate the effect of moderate margin-broadening, we quantitatively analyze original, SupCon, and AMA, as shown in Figure 2b. 𝐷𝑟𝑒𝑙𝑎𝑡𝑖𝑣𝑒 indicates the relative margin of inter-class features compared to the total size of feature distribution. AMA shows the best 𝐷𝑟𝑒𝑙𝑎𝑡𝑖𝑣𝑒, which is helpful in increasing inter-class uniformity and neighborhood uniformity while maintaining low 𝐷𝑚𝑎𝑥 . SupCon improves inter-class uniformity by increasing 𝐷𝑐𝑒𝑛𝑡𝑟𝑜𝑖𝑑 , but 𝐷𝑚𝑎𝑥 increases more than about 7× of AMA. The observation implies that AMA only moderately broadens the margin without an excessive expansion of feature distribution as SupCon. 4 EXPERIMENTS We selected two methods as baselines to compare with AMA. SupCon shows our target problem well, and Manifold Mixup is a representative method of feature augmentation. In the followings, all experiments have been run on three different random seeds, and their performances are represented as the mean mean and standard deviation std. In AMA, 𝛽 was set to 0.67 as default, and we only annotate when it has a different value. 4.1 COMMON SETTINGS We conducted experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet, which are generally used in image classification benchmarks. Also, we used VGG11( Simonyan & Zisserman (2014)), ResNet32, ResNet50( He et al. (2016)) and DenseNet-BC with 12 growth rate( Huang et al. (2017)). SupCon and Manifold Mixup used the same environmental settings with the following explanation for each task. In Manifold Mixup, we interpolated features only right before the classifier for a fair comparison. 4.2 COARSE-TO-FINE TRANSFER LEARNING TASK Experimental Setting We conducted coarse-to-fine transfer learning on CIFAR-10 and CIFAR-100. We first trained the ResNet50 with a coarse-grained dataset and fine-tuned the linear classifier with a fine-grained dataset. We used 128 minibatches and the SGD at the momentum of 0.9 and weight decay of 5e-4. For CIFAR-100, we set the initial learning rate as 0.1 and divided it by five at the 60th, 120th, and 160th epochs, where the total number of epochs is 200. We composed the coarse-grained dataset by splitting the original dataset into a super-class of them. The fine-grained dataset is the same as the original dataset. For CIFAR-10, we followed the hyperparameter and coarse-to-fine dataset settings in Chen et al. (2022). Result and Analysis As shown in Table 2, AMA achieved the second-best test accuracy, while SupCon suffers intra-class collapse noticed by low accuracy. In a similar context, Manifold Mixup and AMA also have intra-class collapse by showing lower accuracy than the original method. However, AMA achieves better than SupCon and Manifold Mixup, and it means that AMA alleviates intra-class collapse in coarse-to-fine transfer learning. 4.3 LONG-TAILED TASK Experimental Setting We used ResNet32, 256 mini-batches, the SGD at the momentum of 0.9 and weight decay of 5e-4, and the number of epochs is 400. We set the initial learning rate as 0.0 and warmed up for ten epochs by 0.015. After that, we divided the learning rate by ten at 360th and 380th epochs. The more specific settings are illustrated in Cui et al. (2021). 
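The collapse indicators used throughout the paper (Eqs. 1–3), which the motivation analysis of Table 1 computes under these Section 4.3 settings, can be sketched as follows. This is a minimal, unbatched sketch that uses the unnormalised class centers the authors describe; the function names are hypothetical and not from the authors' code.

```python
import torch


def intra_class_alignment(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Eq. 1: average pairwise L2 distance within each class (lower = tighter classes)."""
    vals = []
    for c in labels.unique():
        f = feats[labels == c]
        vals.append(torch.cdist(f, f).mean())  # includes zero self-distances, as in 1/|F_i|^2
    return torch.stack(vals).mean()


def class_centers(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-class mean feature; the hypersphere normalisation is omitted here, matching
    the unnormalised variant the authors say they use for a fair comparison."""
    return torch.stack([feats[labels == c].mean(dim=0) for c in labels.unique()])


def inter_class_uniformity(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Eq. 2: mean pairwise distance between class centers (higher = better separated)."""
    v = class_centers(feats, labels)
    d = torch.cdist(v, v)
    c = v.size(0)
    return d.sum() / (c * (c - 1))


def neighborhood_uniformity(feats: torch.Tensor, labels: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Eq. 3: per class, the summed distance to its k nearest other class centers,
    averaged over classes and divided by k (top-3 neighborhood uniformity uses k = 3)."""
    v = class_centers(feats, labels)
    k = min(k, v.size(0) - 1)
    d = torch.cdist(v, v)
    d.fill_diagonal_(float("inf"))  # assume a class is excluded from its own neighborhood
    nearest = d.topk(k, dim=1, largest=False).values
    return nearest.sum(dim=1).mean() / k
```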
Result and Analysis As shown in Table 3, AMA attains the best performance except when the imbalance factor is set to 50 or 10 on CIFAR-10-LT. In contrast, SupCon shows the worst performance at high imbalance factors, which indicates that SupCon suffers from inter-class collapse on the long-tailed datasets while AMA learns a balanced margin. For this reason, AMA achieves the highest performance by alleviating the inter-class collapse between tail classes. 4.4 ORIGINAL IMAGE CLASSIFICATION BENCHMARKS Experimental Setting We conducted image classification experiments on CIFAR-10, CIFAR-100, and Tiny-ImageNet. For CIFAR-10, we set the initial learning rate to 0.05 and divided it by two every 30 epochs over a total of 300 epochs for all networks. For CIFAR-100, we used the same hyperparameters as in Section 4.2 for all networks. For Tiny-ImageNet on VGG11 and ResNet50, we used a mini-batch size of 256, SGD with momentum 0.9 and no weight decay, and 200 epochs. We set the initial learning rate to 0.1 and multiplied it by 0.9 every 20 epochs. For DenseNet-BC (𝑘 = 12) on Tiny-ImageNet, we used a mini-batch size of 64, SGD with momentum 0.9 and no weight decay, and 300 epochs. We set the initial learning rate to 0.1 and divided it by ten at epochs 150 and 225. Result and Analysis As shown in Table 4, AMA achieves performance competitive with, and in some cases higher than, the other representation-augmentation-based models. Specifically, for VGG11, AMA retains the highest performance overall. This implies that AMA sustains proper alignment and high uniformity without disrupting representation learning. 4.5 ABLATION STUDY We conducted an ablation study to clarify the effect of each part of AMA: interpolation, class-unbiased random sampling, and the asymptotic move of augmented features. Table 5 shows the effect of each component. In this experiment, we ran coarse-to-fine transfer on CIFAR-100 and image classification on CIFAR-100-LT (imbalance factor: 100), each with the same settings as in the corresponding sections. In coarse-to-fine transfer learning, AMA without CR shows the second-best performance, which implies that the asymptotic move of augmented features is more stable than placing augmented features at the midpoint from the beginning. Class-unbiased random sampling shows its impact on the long-tailed dataset: by sampling augmented features without class bias, the model can learn more balanced margins. Overall, using the two components together yields the best performance, demonstrating their synergy in AMA. 4.6 ANALYSIS WITH MIXUP In our motivation experiments, we found that the two collapse problems also occur with input-level data augmentation such as Mixup (Zhang et al. (2017)). To explore how AMA relates to the data augmentation approach, we applied AMA together with Mixup and found that AMA helps alleviate the collapses in the long-tailed and coarse-to-fine transfer learning tasks. In this analysis, the experimental settings are the same as in Sections 4.1, 4.2, and 4.3. Results and Analysis In both experiments, Mixup causes performance degradation overall. However, combining AMA with Mixup performs better than Mixup alone and almost recovers the original performance. This indicates that feature augmentation helps Mixup alleviate intra-class and inter-class collapses. 5 RELATED WORK 5.1 AUGMENTATION Data augmentation has been an effective regularization technique (Zhang et al. (2017); Shorten & Khoshgoftaar (2019); DeVries & Taylor (2017); Cubuk et al. (2018); Zhong et al. (2020); Moreno-Barea et al. (2018)).
Mixup( Zhang et al. (2017)), a generally used approach among data augmentations, interpolates each pair of input samples and labels in the input space. Using this interpolation, it is possible for models to improve their inductive bias. In other streams, data augmentation has been applied to features in feature space, called feature augmentation Verma et al. (2019) Li et al. (2021) Kuo et al. (2020) Lee et al. (2021) Wang et al. (2021)). In Manifold Mixup( Verma et al. (2019)), models get a smoother decision boundary than before, and it results in the improvement of robustness. However, they have not focused on margin, which is an important component to make decision boundary robust, while our proposed method creates augmented features in the feature space and adjusts the augmentation to make the margin balanced and moderately wide. 5.2 CONTRASTIVE LEARNING Contrastive learning achieved state-of-the-art performance in image classification tasks, which is an example of focusing on the margin( Chen et al. (2020) He et al. (2020) Caron et al. (2020) Li et al. (2020) Gutmann & Hyvärinen (2010) Koch et al. (2015) Khosla et al. (2020)). Contrastive learning attracts positive samples and repulses negative samples from the anchor. In supervised approaches, SupCon( Khosla et al. (2020)) uses label information to choose positive pairs and negative pairs. SupCon can effectively get considerable uniformity between inter-class and minor alignment between intra-class. This property leads to ideal representations, which have a large margin between other classes. In spite of these advantages, Supcon has an unavoidable problem of collapse( Jing et al. (2021)) because each sample converged toward the class centroid. This collapse makes features indistinguishable from each other and can lead to poor performance in coarse-to-fine transfer learning( Chen et al. (2022)). In addition, prior works have focused on relatively low performance in long-tailed tasks when using SupCon( Zhu et al. (2022) Li et al. (2022)). In the long-tailed tasks, SupCon leads to overwhelming concentration on head classes, and it encourages the collapse between tail classes. To solve this problem, BCL( Zhu et al. (2022)) used class-average and classcomplement with SupCon loss and TSC( Li et al. (2022)) forced class centroids to form a regular simplex on the hypersphere. In contrast, we learn balanced and moderately broad margin while avoiding collapse by creating augmented features as asymptotically moving to the midpoint. 6 CONCLUSION In this paper, we raised the two collapse problems of feature augmentation, which are recently discussed in contrastive learning literature. We found that the problems were still important in state-of-the-art feature augmentation method as Manifold Mixup by analyzing alignment and uniformity used as indicators of the collapse problems. To address the collapse problems, we proposed Asymptotic Midpoint Augmentation to generate effective features via 1) interpolation of features with pseudo labeling, 2) class-unbiased random sampling of augmented features, and 3) their asymptotic move. The method showed the two effects of margin balancing and moderate-broadening, and their impact on the collapse problems in quantitative and qualitative analysis of a toy long-tailed classification task. 
In more practical long-tailed and coarse-to-fine transfer learning experiments on the CIFAR-10 and CIFAR-100 datasets, which suffer from inter-class and intra-class collapse respectively, AMA significantly improved performance compared to SupCon and Manifold Mixup. An ablation study and the relation to an input-level data augmentation method such as Mixup were also analyzed to validate its deeper and broader impact. A limitation is that AMA may require additional tuning of the hyperparameter 𝛽 to obtain the best performance, because the intensity of the collapse problems differs across tasks. ETHICS STATEMENT In this paragraph, we address potential concerns below: • studies that involve human subjects: N/A • practices for dataset releases: CIFAR-10, CIFAR-100, Tiny-ImageNet, CIFAR-10-LT, CIFAR-100-LT (see Sections 4.2, 4.3, and 4.4) • potentially harmful insights, methodologies, and applications: N/A • potential conflicts of interest and sponsorship, discrimination/bias/fairness concerns, privacy and security issues, legal compliance, and research integrity issues: N/A REPRODUCIBILITY STATEMENT In this paragraph, we summarize the information needed to reproduce our results. • Experiment settings 1. A Simple Classification Task on a Long-Tailed Toy Dataset: Section 3.3 2. Coarse-to-Fine Transfer Learning: Section 4.2 3. Image Classification on Long-Tailed Datasets: Section 4.3 4. Image Classification on Classic Datasets: Section 4.4 • Code Description in Supplementary Material 1. Experimental Details 2. Requirements 3. Training and Evaluation (a) How to run Coarse-to-Fine Transfer Learning (b) How to run Image Classification on Long-Tailed Datasets (c) How to run Image Classification on Classic Datasets 4. Reference
1. What is the focus of the paper regarding learned representation constraints? 2. What are the strengths and weaknesses of the proposed approach, particularly in addressing inter- and intra-class collapse? 3. Do you have any questions or concerns about the evaluation process, especially when it comes to practical applications like data with imbalanced label sets? 4. How do the regularization strategies affect performance relative to the baseline, and what trade-offs are involved? 5. Can you provide more information on choosing the layer where to apply the proposal, and whether it could be applied to several layers like manifold mixup? 6. What is the computational overhead of the proposal, and how does it impact performance? 7. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper This paper studies strategies aimed at imposing constraints over learned representations. In particular, the authors focus on devising regularization strategies where inter- and intra-class collapse are avoided, since those issues might affect downstream performance once one uses the learned features in downstream tasks, possibly with finer-grained label sets relative to the original data where the representations were learned. Strengths And Weaknesses Pros: + The proposal is simple and addresses a relevant problem. + The evaluation covers cases that are relevant from a practical perspective, such as data with imbalanced label sets where features of long-tail classes often collapse. Cons: - For β = 0.67, Equation 6 is such that α > 0.5, so only one of the cases in Equation 4 is actually possible for pseudo-labeling. - As per the results in Table 2, all regularization strategies seem to decrease performance relative to the baseline where no regularization is applied. Even if performance improves slightly in the long-tail cases, it seems to come at a cost. - How do the authors choose the layer where to apply the proposal? Similarly to Manifold Mixup, could it be applied to several layers? - It is unclear what kind of compute overhead the proposal incurs. Clarity, Quality, Novelty And Reproducibility Clarity: the current version of the text requires improvements. There are grammar issues and the notation is a bit confusing (e.g., the use of σ(z) for softmax(g(z))). Quality: There are issues requiring addressing prior to publication. For instance, what the authors refer to as Manifold Mixup is only applied on the outputs of a specific layer, as opposed to the original method, which operates at different levels. Results in Table 4 seem to correspond to error rates rather than accuracies as stated in its caption. Also, results should be reported in terms of 95% confidence intervals rather than the mean±std format, since that would give a clearer idea of overlapping results. How many independent runs were performed in order to obtain these results? Novelty: The approach seems to me a variation of Mixup at the feature level, with a new scheme for pseudo-labeling. Reproducibility: The feature mixing scheme is clear up to the pairing strategy, which, for me, would require access to code in order to be re-implemented.
ICLR
Title Shift Aggregate Extract Networks Abstract The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art. 1 INTRODUCTION Many different problems in various fields of science require the classification of structured data, i.e. collections of objects bond together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the collection (as vertices) and the relationships between them (as edges). A number of approaches to the graph classification problem has been studied in graph kernel and neural network literature. Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel, 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave, 2010). The similarity between two graphs is then computed by comparing the respective sets of parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation though structure (Goller & Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as natural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003). An advantage of recursive neural networks over graph kernels, is that the vector representations of the input graphs are learnt rather than handcrafted. Learning on social network data can be considerably hard due to their peculiar structure: as opposed to chemical compounds and parse trees, the structure of social network graphs is highly irregular. Indeed in social networks it is common to have nodes in the same graph whose degree differs by orders of magnitude. This poses a significant challenge for the substructure matching approach used by some graph kernels as the variability in connectivity generates a large number of unique patterns leading to diagonally dominant kernel matrices. We propose Shift Aggregate Extract Networks (SAEN), a neural network architecture for learning representations of input graphs. SAEN decomposes input graphs intoH-hierarchies made of multiple strata of objects. Objects in each stratum are connected by “part-of” relations to the objects to the stratum above. In case we wish to classify graphs we can use an H-hierarchical decomposition in which the top stratum contains the graphG that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G and so on, until we reach the bottom stratum which contains the vertices v of G. Unlike R-convolution relations in kernel methods (which decompose objects into the set of their parts), H-hierarchical decompositions are deep as they can represent the parts of the parts of an object. Recursive neural networks associate to the vertices of the input graphs vector representations imposing that they have identical dimensions. 
Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights, learning on social network data with recursive neural networks might be nontrivial. SAEN compensates the limitations of recursive neural networks by adding the following degrees of flexibility: 1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph, 2. SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratum basis instead of globally. Indeed SAEN allows to use vector representations of different sizes for different strata of objects (e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computes the vector representation of each object by applying shift, aggregate and extract operations on the vector representations of its parts. Another contribution of this paper is the introduction of a domain compression algorithm, that we use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular objects made of the same sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchical decomposition we store counts on symmetries adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the work of Sperduti & Starita (1997) in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost. 2 SHIFT-AGGREGATE-EXTRACT NEURAL NETWORKS We propose a neural network architecture that takes as input an undirected attributed graph G = (V,E,X) where V is the vertex set, E ⊆ V × V is the edge set, and X = {xv ∈ Rp}v∈V is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example this happens in some of the social network datasets of § 4.1), we can set xv to some vertex invariant such as node centrality or betweenness. 2.1 H-HIERARCHICAL DECOMPOSITIONS Most graph kernels decompose graphs into parts by using an R-convolution relation (Haussler, 1999). We extend this approach by decomposing graphs into a hierarchy of π-parametrized “part of” relations. Formally, anH-hierarchical decomposition is a pair ({Sl}Ll=0, {Rl,π}Ll=1) where: • {Sl}Ll=0 are disjoint sets of objects Sl called strata, or levels of the hierarchy. The bottom stratum S0 contains non-decomposable objects (e.g. individual vertices), while the other strata Sl, l = 1, . . . , L contain composite objects, oi ∈ Sl, whose parts oj ∈ Sl−1 belong to the preceding stratum, Sl−1. • {Rl,π}Ll=1 is a set of l, π-parametrized Rl,π-convolution relations. A pair (oi, oj) ∈ Sl × Sl−1 belongs toRl,π iff “oj is part of oi with membership type π”. For notational convenience, the parts of oi are denoted asR−1l,π(oi) = {oj |(oj , oi) ∈ Rl,π}. The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs 1 in which π is the radius of the neighborhoods (see Figure 1 on the left). 
Another possible use of the π membership type is to 1The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r. distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on the right). An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to anR-convolution relation for L = 1. 2.2 SHIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONS We propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of all the strata {Sl}Ll=0 in an H-hierarchical decomposition. SAEN unfolds a neural network architecture over anH-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema. According to the SAE schema the vector representation of each object in theH-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in bottom stratum) or defined in terms of the vector representations of its parts (for the other objects). More formally, the SAE schema associates a dl-dimensional representation hi ∈ Rdl to each object oi ∈ Sl of theH-hierarchical decomposition according to the following formula: hi = f0(xvi ; Θ0) if oi ∈ S0 fl ( ∑ π∈Πl ∑ oj∈R−1l,π(oi) (zπ ⊗ hj)︸ ︷︷ ︸ Shift︸ ︷︷ ︸ Aggregate ; Θl ) ︸ ︷︷ ︸ Extract otherwise (1) where fl(·; Θl), l = 0, . . . , L are multilayer neural networks with parameters Θl. With respect to the base case (first branch of Eq. 1) we have that each object oi in the bottom stratum S0 is in one-to-one correspondence with the vertices vi ∈ V of the graph that we are decomposing. Indeed the vector representations hi are computed by evaluating f0(·; Θ0) in correspondence of the vertex attributes xvi ∈ X . The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema: • Shift: each part representation hj ∈ Rdl−1 is remapped into a space R|Πldl−1| made of |Πl| slots, where each slot has dimension dl−1. This transformation shifts part representations hj by using the Kronecker product ⊗ between an indicator vector zπ ∈ R|Πl| and the vector representation hj of part oj ∈ Sl−1. The indicator vector zπ ∈ R|Πl| defined as zi = { 1 if i=π 0 otherwise. and it is used to make sure that vector representations hj of object parts will fall in the same slot if and only if they have the same membership type π. • Aggregate: the shifted representations (zπ ⊗ hj) of the parts oj are then aggregated with a sum. • Extract: the aggregated representation is compressed to a dl-dimensional space by a Θlparametrized nonlinear map fl(·,Θl) : R|Πldl−1| → Rdl implemented with a multilayer neural network. The shift and aggregate steps, that we have seen so far, are identical to those used in kernel design when computing the explicit feature of a kernel k(x, z) derived from a sum ∑ π∈Π kπ(x, z) of base kernels kπ(x, z), π ∈ Π. In principle, it would be indeed possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Πl| for each level l of the Hhierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. 
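As a concrete illustration of Eq. 1, the sketch below shows the shift (indicator Kronecker product), aggregate (sum), and extract (multilayer network) steps for a single object of one stratum. Layer sizes and names are hypothetical, and a practical implementation would batch this over all objects of a stratum, as the matrix form in § 2.3 does; it is a sketch of the schema rather than the authors' code.

```python
import torch
import torch.nn as nn


def shift_aggregate(parts, n_types, d_prev):
    """Shift + Aggregate of Eq. 1: each part representation h_j is placed into the slot
    of its membership type pi (equivalent to z_pi ⊗ h_j) and the shifted vectors are summed.
    `parts` is a list of (membership_type, h_j) pairs."""
    out = torch.zeros(n_types * d_prev)
    for pi, h_j in parts:
        out[pi * d_prev:(pi + 1) * d_prev] += h_j  # h_j lands in slot pi
    return out


class ExtractNet(nn.Module):
    """Extract step of Eq. 1: a small MLP f_l(.; Theta_l) mapping R^{|Pi_l| d_{l-1}} -> R^{d_l}."""
    def __init__(self, n_types, d_prev, d_out, hidden=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_types * d_prev, hidden), nn.LeakyReLU(),
            nn.Linear(hidden, d_out),
        )

    def forward(self, aggregated):
        return self.net(aggregated)


# Hypothetical example: one object with three parts, two membership types (|Pi_l| = 2),
# part representations of size d_{l-1} = 4, and an output representation of size d_l = 3.
d_prev, n_types = 4, 2
parts = [(0, torch.randn(d_prev)), (1, torch.randn(d_prev)), (1, torch.randn(d_prev))]
extract = ExtractNet(n_types, d_prev, d_out=3)
h_i = extract(shift_aggregate(parts, n_types, d_prev))
```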
As a result, SAEN can easily cope with Hhierarchical decompositions consisting of multiple strata. 2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION In this section we propose a technique, called domain compression, which allows to save memory and speedup the SAEN computation. Domain compression exploits symmetries inH-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects the highest the compression ratio. Two objects a, b in a stratum Sl are collapsable a ∼ b if they share the same representation (i.e. ha = hb) for all the possible values of Θl. A compressed stratum S comp l is the quotient set Sl/∼ of stratum Sl w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability. 2 While objects in the bottom stratum S0 are collapsable when their attributes are identical, for all the other strata Sl, l = 1, . . . , L, objects are collapsable if they are made by the same sets of parts for all the membership types π. In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in § 4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version on the right. 2.3.1 DOMAIN COMPRESSION ALGORITHM In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ Rn×p has 2 Vectors of real valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future works. m ≤ n distinct rows it can be decomposed as the product DM comp where M comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix, D, encodes the collapsibility relation among the rows of M so that Dij = 1 iff the ith row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of D> by their sum (where D> is the transpose of D). Example 1 If we look at matrix M in Eq. 2 we notice that row 1 and 4 share the encoding [0, 0, 0], rows 3 and 5 share the encoding [1, 1, 0] while the encoding [1, 0, 1] appears only once at row 2. Matrix M comp is the compressed version of M . M = 0 0 0 1 0 1 1 1 0 0 0 0 1 1 0 M comp = [ 0 0 0 1 0 1 1 1 0 ] D = 1 0 0 0 1 0 0 0 1 1 0 0 0 0 1 C = [ 1/2 0 0 1/2 0 0 1 0 0 0 0 0 1/2 0 1/2 ] (2) Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed version of M comp (i.e. M = DM comp), while the matrix multiplication between the compression matrix C and the M leads to the compressed matrix M comp (i.e.M comp = CM ). To apply domain compression we rewrite Eq. 1 in matrix form as follows: Hl = f0(X; Θ0)︸ ︷︷ ︸ |S0|×d0 if l = 0 fl [ Rl,1, . . . , Rl,π, . . . , Rl,|Πl| ] ︸ ︷︷ ︸ |Sl|×|Πl||Sl−1| Hl−1 . . . 0 ... . . . ... 0 . . . Hl−1 ︸ ︷︷ ︸ |Πl||Sl−1|×|Πl|dl−1 ; Θl ︸ ︷︷ ︸ |Sl|×dl otherwise (3) where: • Hl ∈ R|Sl|×dl is the matrix that represents the dl-dimensional encodings of the objects in Sl. The rows of Hl are the vector representations hi in Eq. 1, while the rows of Hl−1 are the vector representations hj in Eq. 1; • X ∈ R|S0|×p is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. 
the rows of X are the xvi of Eq. 1); • fl(·; Θl) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise; • Rl,π ∈ R|Sl|×|Sl−1| ∀π ∈ Πl are the matrix representations of the Rl,π-convolution relations of Eq. 1 whose elements are (Rl,π)ij = 1 if (oj , oi) ∈ Rl,π and 0 otherwise. Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3) that takes as input the attribute matrix X and the part-of matrices Rl,π and returns their compressed versions Xcomp and the Rcompl,π respectively. The algorithm starts by invoking (line 1) the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C0 and D0 respectively. The compression matrix C0 is used to compress X (line 2) then we start iterating over the levels l = 0, . . . , L of the H-hierarchical decomposition (line 4) and compress the Rl,π matrices. The compression of the Rl,π matrices is done by right-multiplying them by the decompression matrixDl−1 of the previous level l−1 (line 5). In this way we collapse the parts of relation Rl,π (i.e. the columns of Rl,π) as these were identified in stratum Sl−1 as identical objects (i.e. those objects corresponding to the rows of X or Rl−1,π collapsed during the previous step). The result is a list Rcol comp = [Rl,πDl−1, ∀π = 1, . . . , |Πl|] of column compressed Rl,π−matrices. We proceed collapsing equivalent objects in stratum Sl, i.e. those made of identical sets of parts: we find symmetries in Rcol comp by invoking COMPUTE-CD (line 6) and obtain a new pair Cl, Dl of compression, and decompression matrices respectively. Finally the compression matrix Cl is applied to the column-compressed matrices inRcol comp in order to obtain the Πl compressed matrices DOMAIN-COMPRESSION(X,R) 1 C0, D0 = COMPUTE-CD(X) 2 Xcomp = C0X // Compress the X matrix. 3 Rcomp = {} // Initialize an empty container for compressed matrices. 4 for l = 1 to L 5 Rcol comp = [Rl,πDl−1, ∀π = 1, . . . , |Πl|] // column compression 6 Cl, Dl = COMPUTE-CD(Rcol comp) 7 for π = 1 to |Πl| 8 Rcompl,π = ClR col comp π // row compression 9 return Xcomp, Rcomp Figure 3: DOMAIN-COMPRESSION of stratum Sl (line 8). Algorithm 3 allows us to compute the domain compressed version of Eq. 3 which can be obtained by replacing: X with Xcomp = C0X , Rl,π with R comp l,π = ClRl,πDl−1 and Hl with H comp l . Willing to recover the original encodings Hl we just need to employ the decompression matrix Dl on the compressed encodings H comp l , indeed Hl = DlH comp l . As we can see by substituting Sl with S comp l , the more are the symmetries (i.e. when |S comp l | |Sl|) the greater the domain compression will be. 3 RELATED WORKS When learning with graph inputs two fundamental design aspects that must be taken into account are: the choice of the pattern generator and the choice of the matching operator. The former decomposes the graph input in substructures while the latter allows to compare the substructures. Among the patterns considered from the graph kernel literature we have paths, shortest paths, walks (Kashima et al., 2003), subtrees (Ramon & Gärtner, 2003; Shervashidze et al., 2011) and neighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs G and G′ is computed by counting the number of matches between their common the substructures (i.e. a kernel on the sets of the substructures). The match between two substructures can be defined by using graph isomorphism or some other weaker graph invariant. 
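Stepping back to the domain-compression procedure of § 2.3.1: the COMPUTE-CD step and the matrices of Example 1 can be reproduced with a few lines of NumPy. This is a sketch of the idea only; compute_cd is a hypothetical name, and np.unique happens to return the distinct rows in the same order as the paper's M comp in this example.

```python
import numpy as np


def compute_cd(m: np.ndarray):
    """Build the decompression matrix D (one column per distinct row of M) and its
    pseudo-inverse C: D_ij = 1 iff row i of M falls in equivalence class j, and C is
    D^T with each row divided by its sum, as in Example 1."""
    _, inverse = np.unique(m, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)
    n, k = m.shape[0], inverse.max() + 1
    d = np.zeros((n, k))
    d[np.arange(n), inverse] = 1.0
    c = d.T / d.sum(axis=0, keepdims=True).T
    return c, d


# The matrix of Example 1 (rows 1 and 4 coincide, rows 3 and 5 coincide):
M = np.array([[0, 0, 0],
              [1, 0, 1],
              [1, 1, 0],
              [0, 0, 0],
              [1, 1, 0]], dtype=float)
C, D = compute_cd(M)
M_comp = C @ M                       # compressed matrix: each distinct row appears once
assert np.allclose(D @ M_comp, M)    # lossless: M is recovered by decompression
```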
When the number of substructures to enumerate is infinite or exponential with the size of the graph (perhaps this is the case for random walks and shortest paths respectively) the kernel between the two graphs is computed without generating an explicit feature map. Learning with an implicit feature map is not scalable as it has a space complexity quadratic in the number of training examples (because we need to store in memory the gram matrix). Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel (WLST) (Shervashidze et al., 2011) and the Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) (Costa & De Grave, 2010) deliberately choose a pattern generator that scales polynomially and produces an explicit feature map. However the vector representations produced by WLST and NSPDK are handcrafted and not learned. A recent work by Yanardag & Vishwanathan (2015) proposes to uses pattern generators such as graphlets, shortest paths and WLST subtrees to transform input graphs into documents. The generated substructures are then treated as words and embedded in the Euclidean space with a CBOW or a Skip-gram model. The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self similarity. Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks CNNs for images to graphs. While the receptive field of a CNN is usually a square window (Niepert et al., 2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, (Niepert et al., 2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields. 4 EXPERIMENTAL EVALUATION We answer to the following experimental questions: Q1 How does SAEN compare to the state of the art? Q2 Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime? 4.1 DATASETS In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015). • COLLAB is a dataset where each graph represent the ego-network of a researcher, and the task is to determine the field of study of the researcher between High Energy Physics, Condensed Matter Physics and Astro Physics. • IMDB-BINARY, IMDB-MULTI are datasets derived from IMDB where in each graph the vertices represent actors/actresses and the edges connect people which have performed in the same movie. Collaboration graphs are generated from movies belonging to genres Action and Romance for IMDB-BINARYand Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress in those genres an ego-graph is extracted. The task is to identify the genre from which the ego-graph has been generated. • REDDIT-BINARY, REDDIT-MULTI5K, REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread from Reddit. In those datasets each vertex represent a distinct user and two users are connected by an edge if one of them has responded to a post of the other in that discussion. The task in REDDIT-BINARYis to discriminate between threads originating from a discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answers-based subreddit (IAmA, AskReddit). 
The task in REDDIT-MULTI5Kand REDDIT-MULTI12Kis a multiclass classification problem where each graph is labeled with the subreddit where it has originated (worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTI5Kand AskReddit, AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned, worldnews, TrollXChromosomes for REDDIT-MULTI12K). 4.2 EXPERIMENTS In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network (EGNN), that mimics the graph kernel NSPDK with the distance parameter set to 0. Before applying EGNN we turn unattributed graphs (V,E) into attributed graphs (V,E,X) by annotating their vertices v ∈ V with attributes xv ∈ X . We label vertices v of G with their degree and encode this information into the attributes xv by employing the 1-hot encoding. EGNN decomposes attributed graphs G = (V,E,X) into a 3 level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN): • stratum S0 contains objects ov that are in one-to-one correspondence with the vertices v ∈ V . • stratum S1 contains vroot-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (vroot, Ve, Ee) of radius r = 0, 1, . . . , R and has part-of alphabet Π1 = {ROOT, ELEM}. Objects ov ∈ S0 are “ELEM-part-of” ego graph e if v ∈ Ve \ {vroot}, while the are “ROOT-part-of” ego graph e if v = vroot. • stratum S2 contains the graph G that we want to classify and has part-of alphabet Π2 = {0, 1} which correspond to the radius of the ego graphs e ∈ S1 of which G is made of. E1 We experimented with SAEN applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross entropy loss. The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation. The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016). Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)). E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression together with the data compression ratio. 3 We also estimate the benefit of the relational compression from a computational time point of view and report the measurement of the runtime for 1 run with and without compression together with the speedup factor. For the purpose of this experiment, all tests were run on a computer with two 8-cores Intel Xeon E5-2665 processors and 94 GB RAM. 
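As an illustration of the EGNN preprocessing described in § 4.2 above, the sketch below builds the degree-based one-hot vertex attributes and the ego-graph stratum with networkx. The radii and the tiny example graph are arbitrary choices for illustration, not the configuration used in the experiments.

```python
import networkx as nx
import numpy as np

def degree_one_hot(G, max_degree):
    """Annotate an unattributed graph with one-hot encodings of vertex degrees (S_0 attributes)."""
    X = np.zeros((G.number_of_nodes(), max_degree + 1))
    for i, v in enumerate(G.nodes()):
        X[i, min(G.degree(v), max_degree)] = 1.0
    return X

def ego_graph_stratum(G, radii=(0, 1)):
    """Build the S_1 objects: r-neighborhood (ego) graphs rooted at every vertex,
    with ROOT / ELEM membership types for their S_0 parts."""
    stratum = []
    for r in radii:
        for v in G.nodes():
            ego = nx.ego_graph(G, v, radius=r)
            parts = [("ROOT" if u == v else "ELEM", u) for u in ego.nodes()]
            stratum.append({"radius": r, "root": v, "parts": parts})
    return stratum

G = nx.karate_club_graph()                        # small stand-in for a social network graph
X = degree_one_hot(G, max_degree=max(dict(G.degree()).values()))
S1 = ego_graph_stratum(G, radii=(0, 1))
print(X.shape, len(S1))                           # (34, 18) and 68 ego graphs
```

The top stratum S2 then contains the whole graph, whose parts are these ego graphs with membership type equal to their radius, exactly as in the three-level decomposition described above.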
Uncompressed datasets which exhausted our server's memory during the test are marked as "OOM" (out of memory) in the table, while those that exceeded the time limit of 100 times the time needed for the uncompressed version are marked as "TO" (timeout).

4.3 DISCUSSION
A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problem. The results for the molecule and protein datasets (see Table 5) are also in line with the current state of the art.
A2 The compression algorithm has proven effective in reducing the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power. Moreover, the experiments on REDDIT-MULTI5K and REDDIT-MULTI12K have only been possible thanks to the size reduction performed by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files.
3 The sizes of the uncompressed files are shown for the sole purpose of computing the data compression ratio; indeed, the latest version of our code compresses the files on the fly.

5 CONCLUSIONS
We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN to graph classification on 6 real-world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up the training time by a factor of at least 4.

APPENDIX: SHIFT AGGREGATE EXTRACT NETWORKS
Francesco Orsini (1,2), Daniele Baracchi (2) and Paolo Frasconi (2)
(1) Department of Computer Science, Katholieke Universiteit Leuven, Celestijnenlaan 200A, 3001 Heverlee, Belgium
(2) Department of Information Engineering, Università degli Studi di Firenze, Via di Santa Marta 3, I-50139 Firenze, Italy
[email protected] [email protected] [email protected]

A PARAMETERS USED IN THE EXPERIMENTS WITH EGNN
In Table A1 we report, for each dataset, the radii r of the neighborhood subgraphs used in the EGNN decomposition and the number of units in the hidden layers for each stratum.

Table A1: Parameters for the neural networks used in the experiments.
DATASET          RADII r      HIDDEN UNITS S0   S1        S2
COLLAB           0, 1         15-5              5-2       5-3
IMDB-BINARY      0, 1, 2      2                 5-2       5-3-1
IMDB-MULTI       0, 1, 2      2                 5-2       5-3
REDDIT-BINARY    0, 1         10-5              5-2       5-3-1
REDDIT-MULTI5K   0, 1         10                10        6-5
REDDIT-MULTI12K  0, 1         10                10        20-11
MUTAG            0, 1, 2, 3   10                5-5       5-5-1
PTC              0, 1         15                15        15-1
NCI1             0, 1, 2, 3   15                15        15-10-1
PROTEINS         0, 1, 2, 3   3-2               6-5-4     6-3-1
D&D              0, 1, 2, 3   10                5-2       5-3-1
1. What is the main contribution of the paper in graph classification? 2. How does the proposed method compare to other works in terms of performance, particularly on social networks and bio-informatics datasets? 3. Why does the author question the suitability of the proposed method for social network graphs, despite the claim that it is tailored for such graphs? 4. Are there any concerns regarding the experimental results presented in the paper?
Review
Review
The paper proposes a method mainly for graph classification. The proposal is to decompose graph objects into hierarchies of small graphs, generate vector embeddings, and aggregate them using deep networks. The approach is reasonable and intuitive; however, the experiments do not show the superiority of the approach. The proposed method outperforms Yanardag et al. (2015) and Niepert et al. (2016) on social network graphs but is quite inferior to Niepert et al. (2016) on bio-informatics datasets. The authors did not report the accuracy of Yanardag et al. (2015), which on similar bio-datasets, for example NCI1, is 80%, significantly better than that achieved by the proposed method. The authors' claim that their method is tailored for social network graphs is not supported by good arguments. For which models of graphs is this method more suitable?
ICLR
Title Shift Aggregate Extract Networks Abstract The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art. 1 INTRODUCTION Many different problems in various fields of science require the classification of structured data, i.e. collections of objects bond together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the collection (as vertices) and the relationships between them (as edges). A number of approaches to the graph classification problem has been studied in graph kernel and neural network literature. Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel, 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave, 2010). The similarity between two graphs is then computed by comparing the respective sets of parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation though structure (Goller & Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as natural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003). An advantage of recursive neural networks over graph kernels, is that the vector representations of the input graphs are learnt rather than handcrafted. Learning on social network data can be considerably hard due to their peculiar structure: as opposed to chemical compounds and parse trees, the structure of social network graphs is highly irregular. Indeed in social networks it is common to have nodes in the same graph whose degree differs by orders of magnitude. This poses a significant challenge for the substructure matching approach used by some graph kernels as the variability in connectivity generates a large number of unique patterns leading to diagonally dominant kernel matrices. We propose Shift Aggregate Extract Networks (SAEN), a neural network architecture for learning representations of input graphs. SAEN decomposes input graphs intoH-hierarchies made of multiple strata of objects. Objects in each stratum are connected by “part-of” relations to the objects to the stratum above. In case we wish to classify graphs we can use an H-hierarchical decomposition in which the top stratum contains the graphG that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G and so on, until we reach the bottom stratum which contains the vertices v of G. Unlike R-convolution relations in kernel methods (which decompose objects into the set of their parts), H-hierarchical decompositions are deep as they can represent the parts of the parts of an object. Recursive neural networks associate to the vertices of the input graphs vector representations imposing that they have identical dimensions. 
Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights, learning on social network data with recursive neural networks might be nontrivial. SAEN compensates the limitations of recursive neural networks by adding the following degrees of flexibility: 1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph, 2. SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratum basis instead of globally. Indeed SAEN allows to use vector representations of different sizes for different strata of objects (e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computes the vector representation of each object by applying shift, aggregate and extract operations on the vector representations of its parts. Another contribution of this paper is the introduction of a domain compression algorithm, that we use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular objects made of the same sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchical decomposition we store counts on symmetries adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the work of Sperduti & Starita (1997) in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost. 2 SHIFT-AGGREGATE-EXTRACT NEURAL NETWORKS We propose a neural network architecture that takes as input an undirected attributed graph G = (V,E,X) where V is the vertex set, E ⊆ V × V is the edge set, and X = {xv ∈ Rp}v∈V is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example this happens in some of the social network datasets of § 4.1), we can set xv to some vertex invariant such as node centrality or betweenness. 2.1 H-HIERARCHICAL DECOMPOSITIONS Most graph kernels decompose graphs into parts by using an R-convolution relation (Haussler, 1999). We extend this approach by decomposing graphs into a hierarchy of π-parametrized “part of” relations. Formally, anH-hierarchical decomposition is a pair ({Sl}Ll=0, {Rl,π}Ll=1) where: • {Sl}Ll=0 are disjoint sets of objects Sl called strata, or levels of the hierarchy. The bottom stratum S0 contains non-decomposable objects (e.g. individual vertices), while the other strata Sl, l = 1, . . . , L contain composite objects, oi ∈ Sl, whose parts oj ∈ Sl−1 belong to the preceding stratum, Sl−1. • {Rl,π}Ll=1 is a set of l, π-parametrized Rl,π-convolution relations. A pair (oi, oj) ∈ Sl × Sl−1 belongs toRl,π iff “oj is part of oi with membership type π”. For notational convenience, the parts of oi are denoted asR−1l,π(oi) = {oj |(oj , oi) ∈ Rl,π}. The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs 1 in which π is the radius of the neighborhoods (see Figure 1 on the left). 
Another possible use of the π membership type is to 1The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r. distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on the right). An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to anR-convolution relation for L = 1. 2.2 SHIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONS We propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of all the strata {Sl}Ll=0 in an H-hierarchical decomposition. SAEN unfolds a neural network architecture over anH-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema. According to the SAE schema the vector representation of each object in theH-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in bottom stratum) or defined in terms of the vector representations of its parts (for the other objects). More formally, the SAE schema associates a dl-dimensional representation hi ∈ Rdl to each object oi ∈ Sl of theH-hierarchical decomposition according to the following formula: hi = f0(xvi ; Θ0) if oi ∈ S0 fl ( ∑ π∈Πl ∑ oj∈R−1l,π(oi) (zπ ⊗ hj)︸ ︷︷ ︸ Shift︸ ︷︷ ︸ Aggregate ; Θl ) ︸ ︷︷ ︸ Extract otherwise (1) where fl(·; Θl), l = 0, . . . , L are multilayer neural networks with parameters Θl. With respect to the base case (first branch of Eq. 1) we have that each object oi in the bottom stratum S0 is in one-to-one correspondence with the vertices vi ∈ V of the graph that we are decomposing. Indeed the vector representations hi are computed by evaluating f0(·; Θ0) in correspondence of the vertex attributes xvi ∈ X . The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema: • Shift: each part representation hj ∈ Rdl−1 is remapped into a space R|Πldl−1| made of |Πl| slots, where each slot has dimension dl−1. This transformation shifts part representations hj by using the Kronecker product ⊗ between an indicator vector zπ ∈ R|Πl| and the vector representation hj of part oj ∈ Sl−1. The indicator vector zπ ∈ R|Πl| defined as zi = { 1 if i=π 0 otherwise. and it is used to make sure that vector representations hj of object parts will fall in the same slot if and only if they have the same membership type π. • Aggregate: the shifted representations (zπ ⊗ hj) of the parts oj are then aggregated with a sum. • Extract: the aggregated representation is compressed to a dl-dimensional space by a Θlparametrized nonlinear map fl(·,Θl) : R|Πldl−1| → Rdl implemented with a multilayer neural network. The shift and aggregate steps, that we have seen so far, are identical to those used in kernel design when computing the explicit feature of a kernel k(x, z) derived from a sum ∑ π∈Π kπ(x, z) of base kernels kπ(x, z), π ∈ Π. In principle, it would be indeed possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Πl| for each level l of the Hhierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. 
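To make the SAE step concrete, a minimal NumPy sketch of Eq. 1 for a single object is given below. The toy dimensions, the random one-layer extract network, and the helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: one object with parts of two membership types (|Pi_l| = 2),
# each part represented by a d_{l-1}-dimensional vector h_j.
d_prev, d_l, n_types = 4, 3, 2
parts = [
    (0, rng.normal(size=d_prev)),    # (membership type pi, part representation h_j)
    (0, rng.normal(size=d_prev)),
    (1, rng.normal(size=d_prev)),
]

# Shift: map each part into |Pi_l| slots via the Kronecker product z_pi (x) h_j,
# then Aggregate: sum the shifted representations.
aggregated = np.zeros(n_types * d_prev)
for pi, h_j in parts:
    z_pi = np.eye(n_types)[pi]       # indicator vector for membership type pi
    aggregated += np.kron(z_pi, h_j) # places h_j in slot pi, zeros elsewhere

# Extract: compress the aggregate to d_l dimensions with a (here one-layer) network f_l,
# using the Leaky ReLU activation mentioned in the experiments.
W = rng.normal(size=(d_l, n_types * d_prev))
b = np.zeros(d_l)
pre = W @ aggregated + b
h_i = np.where(pre > 0, pre, 0.01 * pre)

print(h_i.shape)                     # (3,)
```

Because parts with the same membership type fall into the same slot, the slot-wise sums reproduce the per-type aggregation of Eq. 1, and the extract network keeps the output size fixed at d_l regardless of how many parts the object has.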
As a result, SAEN can easily cope with Hhierarchical decompositions consisting of multiple strata. 2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION In this section we propose a technique, called domain compression, which allows to save memory and speedup the SAEN computation. Domain compression exploits symmetries inH-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects the highest the compression ratio. Two objects a, b in a stratum Sl are collapsable a ∼ b if they share the same representation (i.e. ha = hb) for all the possible values of Θl. A compressed stratum S comp l is the quotient set Sl/∼ of stratum Sl w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability. 2 While objects in the bottom stratum S0 are collapsable when their attributes are identical, for all the other strata Sl, l = 1, . . . , L, objects are collapsable if they are made by the same sets of parts for all the membership types π. In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in § 4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version on the right. 2.3.1 DOMAIN COMPRESSION ALGORITHM In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ Rn×p has 2 Vectors of real valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future works. m ≤ n distinct rows it can be decomposed as the product DM comp where M comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix, D, encodes the collapsibility relation among the rows of M so that Dij = 1 iff the ith row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of D> by their sum (where D> is the transpose of D). Example 1 If we look at matrix M in Eq. 2 we notice that row 1 and 4 share the encoding [0, 0, 0], rows 3 and 5 share the encoding [1, 1, 0] while the encoding [1, 0, 1] appears only once at row 2. Matrix M comp is the compressed version of M . M = 0 0 0 1 0 1 1 1 0 0 0 0 1 1 0 M comp = [ 0 0 0 1 0 1 1 1 0 ] D = 1 0 0 0 1 0 0 0 1 1 0 0 0 0 1 C = [ 1/2 0 0 1/2 0 0 1 0 0 0 0 0 1/2 0 1/2 ] (2) Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed version of M comp (i.e. M = DM comp), while the matrix multiplication between the compression matrix C and the M leads to the compressed matrix M comp (i.e.M comp = CM ). To apply domain compression we rewrite Eq. 1 in matrix form as follows: Hl = f0(X; Θ0)︸ ︷︷ ︸ |S0|×d0 if l = 0 fl [ Rl,1, . . . , Rl,π, . . . , Rl,|Πl| ] ︸ ︷︷ ︸ |Sl|×|Πl||Sl−1| Hl−1 . . . 0 ... . . . ... 0 . . . Hl−1 ︸ ︷︷ ︸ |Πl||Sl−1|×|Πl|dl−1 ; Θl ︸ ︷︷ ︸ |Sl|×dl otherwise (3) where: • Hl ∈ R|Sl|×dl is the matrix that represents the dl-dimensional encodings of the objects in Sl. The rows of Hl are the vector representations hi in Eq. 1, while the rows of Hl−1 are the vector representations hj in Eq. 1; • X ∈ R|S0|×p is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. 
the rows of X are the xvi of Eq. 1); • fl(·; Θl) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise; • Rl,π ∈ R|Sl|×|Sl−1| ∀π ∈ Πl are the matrix representations of the Rl,π-convolution relations of Eq. 1 whose elements are (Rl,π)ij = 1 if (oj , oi) ∈ Rl,π and 0 otherwise. Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3) that takes as input the attribute matrix X and the part-of matrices Rl,π and returns their compressed versions Xcomp and the Rcompl,π respectively. The algorithm starts by invoking (line 1) the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C0 and D0 respectively. The compression matrix C0 is used to compress X (line 2) then we start iterating over the levels l = 0, . . . , L of the H-hierarchical decomposition (line 4) and compress the Rl,π matrices. The compression of the Rl,π matrices is done by right-multiplying them by the decompression matrixDl−1 of the previous level l−1 (line 5). In this way we collapse the parts of relation Rl,π (i.e. the columns of Rl,π) as these were identified in stratum Sl−1 as identical objects (i.e. those objects corresponding to the rows of X or Rl−1,π collapsed during the previous step). The result is a list Rcol comp = [Rl,πDl−1, ∀π = 1, . . . , |Πl|] of column compressed Rl,π−matrices. We proceed collapsing equivalent objects in stratum Sl, i.e. those made of identical sets of parts: we find symmetries in Rcol comp by invoking COMPUTE-CD (line 6) and obtain a new pair Cl, Dl of compression, and decompression matrices respectively. Finally the compression matrix Cl is applied to the column-compressed matrices inRcol comp in order to obtain the Πl compressed matrices DOMAIN-COMPRESSION(X,R) 1 C0, D0 = COMPUTE-CD(X) 2 Xcomp = C0X // Compress the X matrix. 3 Rcomp = {} // Initialize an empty container for compressed matrices. 4 for l = 1 to L 5 Rcol comp = [Rl,πDl−1, ∀π = 1, . . . , |Πl|] // column compression 6 Cl, Dl = COMPUTE-CD(Rcol comp) 7 for π = 1 to |Πl| 8 Rcompl,π = ClR col comp π // row compression 9 return Xcomp, Rcomp Figure 3: DOMAIN-COMPRESSION of stratum Sl (line 8). Algorithm 3 allows us to compute the domain compressed version of Eq. 3 which can be obtained by replacing: X with Xcomp = C0X , Rl,π with R comp l,π = ClRl,πDl−1 and Hl with H comp l . Willing to recover the original encodings Hl we just need to employ the decompression matrix Dl on the compressed encodings H comp l , indeed Hl = DlH comp l . As we can see by substituting Sl with S comp l , the more are the symmetries (i.e. when |S comp l | |Sl|) the greater the domain compression will be. 3 RELATED WORKS When learning with graph inputs two fundamental design aspects that must be taken into account are: the choice of the pattern generator and the choice of the matching operator. The former decomposes the graph input in substructures while the latter allows to compare the substructures. Among the patterns considered from the graph kernel literature we have paths, shortest paths, walks (Kashima et al., 2003), subtrees (Ramon & Gärtner, 2003; Shervashidze et al., 2011) and neighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs G and G′ is computed by counting the number of matches between their common the substructures (i.e. a kernel on the sets of the substructures). The match between two substructures can be defined by using graph isomorphism or some other weaker graph invariant. 
When the number of substructures to enumerate is infinite or exponential with the size of the graph (perhaps this is the case for random walks and shortest paths respectively) the kernel between the two graphs is computed without generating an explicit feature map. Learning with an implicit feature map is not scalable as it has a space complexity quadratic in the number of training examples (because we need to store in memory the gram matrix). Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel (WLST) (Shervashidze et al., 2011) and the Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) (Costa & De Grave, 2010) deliberately choose a pattern generator that scales polynomially and produces an explicit feature map. However the vector representations produced by WLST and NSPDK are handcrafted and not learned. A recent work by Yanardag & Vishwanathan (2015) proposes to uses pattern generators such as graphlets, shortest paths and WLST subtrees to transform input graphs into documents. The generated substructures are then treated as words and embedded in the Euclidean space with a CBOW or a Skip-gram model. The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self similarity. Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks CNNs for images to graphs. While the receptive field of a CNN is usually a square window (Niepert et al., 2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, (Niepert et al., 2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields. 4 EXPERIMENTAL EVALUATION We answer to the following experimental questions: Q1 How does SAEN compare to the state of the art? Q2 Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime? 4.1 DATASETS In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015). • COLLAB is a dataset where each graph represent the ego-network of a researcher, and the task is to determine the field of study of the researcher between High Energy Physics, Condensed Matter Physics and Astro Physics. • IMDB-BINARY, IMDB-MULTI are datasets derived from IMDB where in each graph the vertices represent actors/actresses and the edges connect people which have performed in the same movie. Collaboration graphs are generated from movies belonging to genres Action and Romance for IMDB-BINARYand Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress in those genres an ego-graph is extracted. The task is to identify the genre from which the ego-graph has been generated. • REDDIT-BINARY, REDDIT-MULTI5K, REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread from Reddit. In those datasets each vertex represent a distinct user and two users are connected by an edge if one of them has responded to a post of the other in that discussion. The task in REDDIT-BINARYis to discriminate between threads originating from a discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answers-based subreddit (IAmA, AskReddit). 
The task in REDDIT-MULTI5Kand REDDIT-MULTI12Kis a multiclass classification problem where each graph is labeled with the subreddit where it has originated (worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTI5Kand AskReddit, AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned, worldnews, TrollXChromosomes for REDDIT-MULTI12K). 4.2 EXPERIMENTS In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network (EGNN), that mimics the graph kernel NSPDK with the distance parameter set to 0. Before applying EGNN we turn unattributed graphs (V,E) into attributed graphs (V,E,X) by annotating their vertices v ∈ V with attributes xv ∈ X . We label vertices v of G with their degree and encode this information into the attributes xv by employing the 1-hot encoding. EGNN decomposes attributed graphs G = (V,E,X) into a 3 level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN): • stratum S0 contains objects ov that are in one-to-one correspondence with the vertices v ∈ V . • stratum S1 contains vroot-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (vroot, Ve, Ee) of radius r = 0, 1, . . . , R and has part-of alphabet Π1 = {ROOT, ELEM}. Objects ov ∈ S0 are “ELEM-part-of” ego graph e if v ∈ Ve \ {vroot}, while the are “ROOT-part-of” ego graph e if v = vroot. • stratum S2 contains the graph G that we want to classify and has part-of alphabet Π2 = {0, 1} which correspond to the radius of the ego graphs e ∈ S1 of which G is made of. E1 We experimented with SAEN applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross entropy loss. The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation. The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016). Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)). E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression together with the data compression ratio. 3 We also estimate the benefit of the relational compression from a computational time point of view and report the measurement of the runtime for 1 run with and without compression together with the speedup factor. For the purpose of this experiment, all tests were run on a computer with two 8-cores Intel Xeon E5-2665 processors and 94 GB RAM. 
Uncompressed datasets which exhausted our server’s memory during the test are marked as “OOM” (out of memory) in the table, while those who exceeded the time limit of 100 times the time needed for the uncompressed version are marked as “TO” (timeout). 4.3 DISCUSSION A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosenH-hierarchical decomposition is effective on this kind of problems. Also the results for molecule and protein datasets (see Table 5) are in line with the current state of the art. A2 The compression algorithm has proven to be effective in improving the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the 3The size of the uncompressed files are shown for the sole purpose of computing the data compression ratio. Indeed the last version of our code compresses the files on the fly. same expressive power. Moreover, experiments on REDDIT-MULTI5K and REDDIT-MULTI12K have only been possible thanks to the size reduction operated by the algorithm as the script exhausted the memory while executing the training step on the uncompressed files. 5 CONCLUSIONS We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN for graph classification on 6 real world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm which greatly reduces memory usage and allowed us to speedup the training time of a factor of at least 4. APPENDIX: SHIFT AGGREGATE EXTRACT NETWORKS Francesco Orsini 12 , Daniele Baracchi 2 and Paolo Frasconi 2 1 Department of Computer Science 2 Department of Information Engineering Katholieke Universiteit Leuven Università degli Studi di Firenze Celestijnenlaan 200A Via di Santa Marta 3 3001 Heverlee, Belgium I-50139 Firenze, Italy [email protected] [email protected] [email protected] A PARAMETERS USED IN THE EXPERIMENTS WITH EGNN In Table A1 we report for each dataset: the radiuses r of the neighborhood subgraphs used in the EGNN decomposition and the number of units in the hidden layers for each stratum. Figure A1: Parameters for the neural networks used in the experiments. DATASET RADIUSES HIDDEN UNITS r S0 S1 S2 COLLAB 0, 1 15− 5 5− 2 5− 3 IMDB-BINARY 0, 1, 2 2 5− 2 5− 3− 1 IMDB-MULTI 0, 1, 2 2 5− 2 5− 3 REDDIT-BINARY 0, 1 10− 5 5− 2 5− 3− 1 REDDIT-MULTI5K 0, 1 10 10 6− 5 REDDIT-MULTI12K 0, 1 10 10 20− 11 MUTAG 0, 1, 2, 3 10 5− 5 5− 5− 1 PTC 0, 1 15 15 15− 1 NCI1 0, 1, 2, 3 15 15 15− 10− 1 PROTEINS 0, 1, 2, 3 3− 2 6− 5− 4 6− 3− 1 D&D 0, 1, 2, 3 10 5− 2 5− 3− 1
1. Can you provide more information about the proposed method for using neural networks on graph-structured data? 2. How does the approach construct hierarchical sets of "objects" within the graph? 3. Can you explain how the objects' representations are constructed, specifically how the scheme works? 4. What is the motivation behind choosing the "ego-graph" representation? 5. What are the dimensionalities of the representations used at each layer? 6. How is final classification performed? 7. Were other choices of decomposition/object-part structures investigated? 8. Why were one-hot degrees chosen as the initial attributes? 9. How many layers and hidden units were used in the multi-layer neural net? 10. Can you provide examples that illustrate how the proposed approach works?
Review
Review
The paper contributes to recent work investigating how neural networks can be used on graph-structured data. As far as I can tell, the proposed approach is the following:

1. Construct a hierarchical set of "objects" within the graph. Each object consists of multiple "parts" from the set of objects in the level below. There are potentially different ways a part can be part of an object (the different \pi labels), which I would maybe call "membership types". In the experiments, the objects at the bottom level are vertices. At the next level they are radius-0 (just a vertex?) and radius-1 neighborhoods around each vertex, and the membership types here are either "root" or "element" (depending on whether a vertex is the center of the neighborhood or a neighbor). At the top level there is one object consisting of all of these neighborhoods, with membership types of "radius 0 neighborhood" (isn't this still just a vertex?) or "radius 1 neighborhood".
2. Every object has a representation. Each vertex's representation is a one-hot encoding of its degree. To construct an object's representation at the next level, the following scheme is employed:
   a. For each object, sum the representations of all of its parts having the same membership type.
   b. Concatenate the sums obtained from different membership types.
   c. Pass this vector through a multi-layer neural net.

I've provided this summary mainly because the description in the paper itself is somewhat hard to follow, and relevant details are scattered throughout the text, so I'd like to verify that my understanding is correct. Some additional questions I have that weren't clear from the text: how many layers and hidden units were used? What are the dimensionalities of the representations used at each layer? How is final classification performed? What is the motivation for the chosen "ego-graph" representation?

The proposed approach is interesting and novel, the compression technique appears effective, and the results seem compelling. However, the clarity and structure of the writing are quite poor. It took me a while to figure out what was going on---the initial description is provided without any illustrative examples, and it required jumping around the paper to figure out, for example, how the \pi labels are actually used. Important details around the network architecture aren't provided, and very little in the way of motivation is given for many of the choices made. Were other choices of decomposition/object-part structures investigated, given the generality of the shift-aggregate-extract formulation? What motivated the choice of "ego-graphs"? Why one-hot degrees for the initial attributes?

Overall, I think the paper contains a useful contribution on a technical level, but the presentation needs to be significantly cleaned up before I can recommend acceptance.
ICLR
Title Shift Aggregate Extract Networks Abstract The Shift Aggregate Extract Network (SAEN) is an architecture for learning representations on social network data. SAEN decomposes input graphs into hierarchies made of multiple strata of objects. Vector representations of each object are learnt by applying shift, aggregate and extract operations on the vector representations of its parts. We propose an algorithm for domain compression which takes advantage of symmetries in hierarchical decompositions to reduce the memory usage and obtain significant speedups. Our method is empirically evaluated on real world social network datasets, outperforming the current state of the art. 1 INTRODUCTION Many different problems in various fields of science require the classification of structured data, i.e. collections of objects bond together by some kind of relation. A natural way to represent such structures is through graphs, which are able to encode both the individual objects composing the collection (as vertices) and the relationships between them (as edges). A number of approaches to the graph classification problem has been studied in graph kernel and neural network literature. Graph kernels decompose input graphs in substructures such as shortest paths (Borgwardt & Kriegel, 2005), graphlets (Shervashidze et al., 2009) or neighborhood subgraph pairs (Costa & De Grave, 2010). The similarity between two graphs is then computed by comparing the respective sets of parts. Methods based on recursive neural networks unfold a neural network over input graphs and learn vector representations of their nodes employing backpropagation though structure (Goller & Kuchler, 1996). Recursive neural networks have been successfully applied to domains such as natural language (Socher et al., 2011) and biology (Vullo & Frasconi, 2004; Baldi & Pollastri, 2003). An advantage of recursive neural networks over graph kernels, is that the vector representations of the input graphs are learnt rather than handcrafted. Learning on social network data can be considerably hard due to their peculiar structure: as opposed to chemical compounds and parse trees, the structure of social network graphs is highly irregular. Indeed in social networks it is common to have nodes in the same graph whose degree differs by orders of magnitude. This poses a significant challenge for the substructure matching approach used by some graph kernels as the variability in connectivity generates a large number of unique patterns leading to diagonally dominant kernel matrices. We propose Shift Aggregate Extract Networks (SAEN), a neural network architecture for learning representations of input graphs. SAEN decomposes input graphs intoH-hierarchies made of multiple strata of objects. Objects in each stratum are connected by “part-of” relations to the objects to the stratum above. In case we wish to classify graphs we can use an H-hierarchical decomposition in which the top stratum contains the graphG that we want to classify, while the intermediate strata contain subgraphs of G, subgraphs of subgraphs of G and so on, until we reach the bottom stratum which contains the vertices v of G. Unlike R-convolution relations in kernel methods (which decompose objects into the set of their parts), H-hierarchical decompositions are deep as they can represent the parts of the parts of an object. Recursive neural networks associate to the vertices of the input graphs vector representations imposing that they have identical dimensions. 
Moreover, the propagation follows the edge connectivity and weights are shared over the whole input graph. If we consider that vector representations of nodes (whose number of parents can differ by orders of magnitude) must share the same weights, learning on social network data with recursive neural networks might be nontrivial. SAEN compensates the limitations of recursive neural networks by adding the following degrees of flexibility: 1. the SAEN computation schema unfolds a neural network over H-decompositions instead of the input graph, 2. SAEN imposes weight sharing and fixed size of the learnt vector representations on a per stratum basis instead of globally. Indeed SAEN allows to use vector representations of different sizes for different strata of objects (e.g. graphs, subgraphs, subgraphs of subgraphs, edges, vertices etc.) The SAEN schema computes the vector representation of each object by applying shift, aggregate and extract operations on the vector representations of its parts. Another contribution of this paper is the introduction of a domain compression algorithm, that we use in our experiments to reduce memory usage and runtime. Domain compression collapses objects in the same stratum of an H-hierarchical decomposition into a compressed one whenever these objects are indistinguishable for the SAEN computation schema. In particular objects made of the same sets of parts are indistinguishable. In order obtain a lossless compression an H-hierarchical decomposition we store counts on symmetries adopting some mathematical results from lifted linear programming (Mladenov et al., 2012). The domain compression algorithm is also reminiscent of the work of Sperduti & Starita (1997) in which common substructures of recursive neural networks are collapsed in order to reduce the computational cost. 2 SHIFT-AGGREGATE-EXTRACT NEURAL NETWORKS We propose a neural network architecture that takes as input an undirected attributed graph G = (V,E,X) where V is the vertex set, E ⊆ V × V is the edge set, and X = {xv ∈ Rp}v∈V is a set of p-dimensional vertex attributes. When vertices do not have associated attributes (for example this happens in some of the social network datasets of § 4.1), we can set xv to some vertex invariant such as node centrality or betweenness. 2.1 H-HIERARCHICAL DECOMPOSITIONS Most graph kernels decompose graphs into parts by using an R-convolution relation (Haussler, 1999). We extend this approach by decomposing graphs into a hierarchy of π-parametrized “part of” relations. Formally, anH-hierarchical decomposition is a pair ({Sl}Ll=0, {Rl,π}Ll=1) where: • {Sl}Ll=0 are disjoint sets of objects Sl called strata, or levels of the hierarchy. The bottom stratum S0 contains non-decomposable objects (e.g. individual vertices), while the other strata Sl, l = 1, . . . , L contain composite objects, oi ∈ Sl, whose parts oj ∈ Sl−1 belong to the preceding stratum, Sl−1. • {Rl,π}Ll=1 is a set of l, π-parametrized Rl,π-convolution relations. A pair (oi, oj) ∈ Sl × Sl−1 belongs toRl,π iff “oj is part of oi with membership type π”. For notational convenience, the parts of oi are denoted asR−1l,π(oi) = {oj |(oj , oi) ∈ Rl,π}. The membership type π is used to represent the roles of the parts of an object. For example, we could decompose a graph as a multiset of π-neighborhood subgraphs 1 in which π is the radius of the neighborhoods (see Figure 1 on the left). 
Another possible use of the π membership type is to 1The r-neighborhood subgraph (or ego graph) of a vertex v in a graph G is the induced subgraph of G consisting of all vertices whose shortest-path distance from v is at most r. distinguish the root from the other vertices in a rooted neighborhood subgraph (see Figure 1 on the right). An H-hierarchical decomposition is a multilevel generalization of R-convolution relations, and it reduces to anR-convolution relation for L = 1. 2.2 SHIFT AGGREGATE EXTRACT SCHEMA FOR LEARNING REPRESENTATIONS We propose Shift Aggregate Extract Network (SAEN) to learn vector representations for all the objects of all the strata {Sl}Ll=0 in an H-hierarchical decomposition. SAEN unfolds a neural network architecture over anH-hierarchical decomposition by using the Shift Aggregate Extract (SAE) schema. According to the SAE schema the vector representation of each object in theH-hierarchical decomposition is either computed by applying a neural network on the vertex attributes (for the objects in bottom stratum) or defined in terms of the vector representations of its parts (for the other objects). More formally, the SAE schema associates a dl-dimensional representation hi ∈ Rdl to each object oi ∈ Sl of theH-hierarchical decomposition according to the following formula: hi = f0(xvi ; Θ0) if oi ∈ S0 fl ( ∑ π∈Πl ∑ oj∈R−1l,π(oi) (zπ ⊗ hj)︸ ︷︷ ︸ Shift︸ ︷︷ ︸ Aggregate ; Θl ) ︸ ︷︷ ︸ Extract otherwise (1) where fl(·; Θl), l = 0, . . . , L are multilayer neural networks with parameters Θl. With respect to the base case (first branch of Eq. 1) we have that each object oi in the bottom stratum S0 is in one-to-one correspondence with the vertices vi ∈ V of the graph that we are decomposing. Indeed the vector representations hi are computed by evaluating f0(·; Θ0) in correspondence of the vertex attributes xvi ∈ X . The recursion step (second branch of Eq. 1) follows the Shift Aggregate Extract (SAE) schema: • Shift: each part representation hj ∈ Rdl−1 is remapped into a space R|Πldl−1| made of |Πl| slots, where each slot has dimension dl−1. This transformation shifts part representations hj by using the Kronecker product ⊗ between an indicator vector zπ ∈ R|Πl| and the vector representation hj of part oj ∈ Sl−1. The indicator vector zπ ∈ R|Πl| defined as zi = { 1 if i=π 0 otherwise. and it is used to make sure that vector representations hj of object parts will fall in the same slot if and only if they have the same membership type π. • Aggregate: the shifted representations (zπ ⊗ hj) of the parts oj are then aggregated with a sum. • Extract: the aggregated representation is compressed to a dl-dimensional space by a Θlparametrized nonlinear map fl(·,Θl) : R|Πldl−1| → Rdl implemented with a multilayer neural network. The shift and aggregate steps, that we have seen so far, are identical to those used in kernel design when computing the explicit feature of a kernel k(x, z) derived from a sum ∑ π∈Π kπ(x, z) of base kernels kπ(x, z), π ∈ Π. In principle, it would be indeed possible to turn SAEN into a kernel method by removing the extraction step E from the SAE schema. However, such an approach would increase the dimensionality of the feature space by a multiplicative factor |Πl| for each level l of the Hhierarchical decomposition, thus leading to an exponential number of features. When using SAEN, the feature space growth is prevented by exploiting a distributed representation (via a multilayered neural network) during the E step of the SAE schema. 
As a result, SAEN can easily cope with Hhierarchical decompositions consisting of multiple strata. 2.3 EXPLOITING SYMMETRIES FOR DOMAIN COMPRESSION In this section we propose a technique, called domain compression, which allows to save memory and speedup the SAEN computation. Domain compression exploits symmetries inH-hierarchical decompositions by collapsing equivalent objects in each stratum. The greater the number of collapsed objects the highest the compression ratio. Two objects a, b in a stratum Sl are collapsable a ∼ b if they share the same representation (i.e. ha = hb) for all the possible values of Θl. A compressed stratum S comp l is the quotient set Sl/∼ of stratum Sl w.r.t. the collapsibility relation ∼. We assume that the attributes of the elements in the bottom stratum S0 are categorical, so that the same vector representation can be shared by multiple elements with non-zero probability. 2 While objects in the bottom stratum S0 are collapsable when their attributes are identical, for all the other strata Sl, l = 1, . . . , L, objects are collapsable if they are made by the same sets of parts for all the membership types π. In Figure 2 we provide a pictorial representation of the domain compression of an H-hierarchical decomposition (EGNN, described in § 4.2). On the left we show the H-hierarchical decomposition of a graph taken from the IMDB-BINARY dataset (see § 4.1) together with its compressed version on the right. 2.3.1 DOMAIN COMPRESSION ALGORITHM In order to compress H-hierarchical decompositions we adapt the lifted linear programming technique proposed by Mladenov et al. (2012) to the SAEN architecture. If a matrix M ∈ Rn×p has 2 Vectors of real valued attributes could be discretized using clustering techniques. However, we leave discretization in SAEN to future works. m ≤ n distinct rows it can be decomposed as the product DM comp where M comp is a compressed version of M in which the distinct rows of M appear exactly once. The Boolean decompression matrix, D, encodes the collapsibility relation among the rows of M so that Dij = 1 iff the ith row of M falls in the equivalence class j of ∼. A pseudo-inverse C of D can be computed by dividing the rows of D> by their sum (where D> is the transpose of D). Example 1 If we look at matrix M in Eq. 2 we notice that row 1 and 4 share the encoding [0, 0, 0], rows 3 and 5 share the encoding [1, 1, 0] while the encoding [1, 0, 1] appears only once at row 2. Matrix M comp is the compressed version of M . M = 0 0 0 1 0 1 1 1 0 0 0 0 1 1 0 M comp = [ 0 0 0 1 0 1 1 1 0 ] D = 1 0 0 0 1 0 0 0 1 1 0 0 0 0 1 C = [ 1/2 0 0 1/2 0 0 1 0 0 0 0 0 1/2 0 1/2 ] (2) Matrix M can be expressed as the matrix product between the decompression matrix D and the compressed version of M comp (i.e. M = DM comp), while the matrix multiplication between the compression matrix C and the M leads to the compressed matrix M comp (i.e.M comp = CM ). To apply domain compression we rewrite Eq. 1 in matrix form as follows: Hl = f0(X; Θ0)︸ ︷︷ ︸ |S0|×d0 if l = 0 fl [ Rl,1, . . . , Rl,π, . . . , Rl,|Πl| ] ︸ ︷︷ ︸ |Sl|×|Πl||Sl−1| Hl−1 . . . 0 ... . . . ... 0 . . . Hl−1 ︸ ︷︷ ︸ |Πl||Sl−1|×|Πl|dl−1 ; Θl ︸ ︷︷ ︸ |Sl|×dl otherwise (3) where: • Hl ∈ R|Sl|×dl is the matrix that represents the dl-dimensional encodings of the objects in Sl. The rows of Hl are the vector representations hi in Eq. 1, while the rows of Hl−1 are the vector representations hj in Eq. 1; • X ∈ R|S0|×p is the matrix that represents the p-dimensional encodings of the vertex attributes in V (i.e. 
the rows of X are the xvi of Eq. 1); • fl(·; Θl) is unchanged w.r.t. Eq. 1 and is applied to its input matrices row-wise; • Rl,π ∈ R|Sl|×|Sl−1| ∀π ∈ Πl are the matrix representations of the Rl,π-convolution relations of Eq. 1 whose elements are (Rl,π)ij = 1 if (oj , oi) ∈ Rl,π and 0 otherwise. Domain compression on Eq. 3 is performed by the DOMAIN-COMPRESSION procedure (see Algorithm 3) that takes as input the attribute matrix X and the part-of matrices Rl,π and returns their compressed versions Xcomp and the Rcompl,π respectively. The algorithm starts by invoking (line 1) the procedure COMPUTE-CD on X to obtain the compression and decompression matrices C0 and D0 respectively. The compression matrix C0 is used to compress X (line 2) then we start iterating over the levels l = 0, . . . , L of the H-hierarchical decomposition (line 4) and compress the Rl,π matrices. The compression of the Rl,π matrices is done by right-multiplying them by the decompression matrixDl−1 of the previous level l−1 (line 5). In this way we collapse the parts of relation Rl,π (i.e. the columns of Rl,π) as these were identified in stratum Sl−1 as identical objects (i.e. those objects corresponding to the rows of X or Rl−1,π collapsed during the previous step). The result is a list Rcol comp = [Rl,πDl−1, ∀π = 1, . . . , |Πl|] of column compressed Rl,π−matrices. We proceed collapsing equivalent objects in stratum Sl, i.e. those made of identical sets of parts: we find symmetries in Rcol comp by invoking COMPUTE-CD (line 6) and obtain a new pair Cl, Dl of compression, and decompression matrices respectively. Finally the compression matrix Cl is applied to the column-compressed matrices inRcol comp in order to obtain the Πl compressed matrices DOMAIN-COMPRESSION(X,R) 1 C0, D0 = COMPUTE-CD(X) 2 Xcomp = C0X // Compress the X matrix. 3 Rcomp = {} // Initialize an empty container for compressed matrices. 4 for l = 1 to L 5 Rcol comp = [Rl,πDl−1, ∀π = 1, . . . , |Πl|] // column compression 6 Cl, Dl = COMPUTE-CD(Rcol comp) 7 for π = 1 to |Πl| 8 Rcompl,π = ClR col comp π // row compression 9 return Xcomp, Rcomp Figure 3: DOMAIN-COMPRESSION of stratum Sl (line 8). Algorithm 3 allows us to compute the domain compressed version of Eq. 3 which can be obtained by replacing: X with Xcomp = C0X , Rl,π with R comp l,π = ClRl,πDl−1 and Hl with H comp l . Willing to recover the original encodings Hl we just need to employ the decompression matrix Dl on the compressed encodings H comp l , indeed Hl = DlH comp l . As we can see by substituting Sl with S comp l , the more are the symmetries (i.e. when |S comp l | |Sl|) the greater the domain compression will be. 3 RELATED WORKS When learning with graph inputs two fundamental design aspects that must be taken into account are: the choice of the pattern generator and the choice of the matching operator. The former decomposes the graph input in substructures while the latter allows to compare the substructures. Among the patterns considered from the graph kernel literature we have paths, shortest paths, walks (Kashima et al., 2003), subtrees (Ramon & Gärtner, 2003; Shervashidze et al., 2011) and neighborhood subgraphs (Costa & De Grave, 2010). The similarity between graphs G and G′ is computed by counting the number of matches between their common the substructures (i.e. a kernel on the sets of the substructures). The match between two substructures can be defined by using graph isomorphism or some other weaker graph invariant. 
When the number of substructures to enumerate is infinite or exponential with the size of the graph (perhaps this is the case for random walks and shortest paths respectively) the kernel between the two graphs is computed without generating an explicit feature map. Learning with an implicit feature map is not scalable as it has a space complexity quadratic in the number of training examples (because we need to store in memory the gram matrix). Other graph kernels such as the Weisfeiler-Lehman Subtree Kernel (WLST) (Shervashidze et al., 2011) and the Neighborhood Subgraph Pairwise Distance Kernel (NSPDK) (Costa & De Grave, 2010) deliberately choose a pattern generator that scales polynomially and produces an explicit feature map. However the vector representations produced by WLST and NSPDK are handcrafted and not learned. A recent work by Yanardag & Vishwanathan (2015) proposes to uses pattern generators such as graphlets, shortest paths and WLST subtrees to transform input graphs into documents. The generated substructures are then treated as words and embedded in the Euclidean space with a CBOW or a Skip-gram model. The deep upgrade of existing graph kernels is performed by reweighing the counts of the substructures by the square root of their word-vector self similarity. Another recent work by Niepert et al. (2016) upgrades the convolutional neural networks CNNs for images to graphs. While the receptive field of a CNN is usually a square window (Niepert et al., 2016) employ neighborhood subgraphs as receptive fields. As nodes in graphs do not have a specific temporal or spatial order, (Niepert et al., 2016) employ vertex invariants to impose an order on the nodes of the subgraphs/receptive fields. 4 EXPERIMENTAL EVALUATION We answer to the following experimental questions: Q1 How does SAEN compare to the state of the art? Q2 Can SAEN exploit symmetries in social networks to reduce the memory usage and the runtime? 4.1 DATASETS In order to answer the experimental questions we tested our method on six publicly available datasets first proposed by Yanardag & Vishwanathan (2015). • COLLAB is a dataset where each graph represent the ego-network of a researcher, and the task is to determine the field of study of the researcher between High Energy Physics, Condensed Matter Physics and Astro Physics. • IMDB-BINARY, IMDB-MULTI are datasets derived from IMDB where in each graph the vertices represent actors/actresses and the edges connect people which have performed in the same movie. Collaboration graphs are generated from movies belonging to genres Action and Romance for IMDB-BINARYand Comedy, Romance and Sci-Fi for IMDB-MULTI, and for each actor/actress in those genres an ego-graph is extracted. The task is to identify the genre from which the ego-graph has been generated. • REDDIT-BINARY, REDDIT-MULTI5K, REDDIT-MULTI12K are datasets where each graph is derived from a discussion thread from Reddit. In those datasets each vertex represent a distinct user and two users are connected by an edge if one of them has responded to a post of the other in that discussion. The task in REDDIT-BINARYis to discriminate between threads originating from a discussion-based subreddit (TrollXChromosomes, atheism) or from a question/answers-based subreddit (IAmA, AskReddit). 
The task in REDDIT-MULTI5Kand REDDIT-MULTI12Kis a multiclass classification problem where each graph is labeled with the subreddit where it has originated (worldnews, videos, AdviceAnimals, aww, mildlyinteresting for REDDIT-MULTI5Kand AskReddit, AdviceAnimals, atheism, aww, IAmA, mildlyinteresting, Showerthoughts, videos, todayilearned, worldnews, TrollXChromosomes for REDDIT-MULTI12K). 4.2 EXPERIMENTS In our experiments we chose an H-hierarchical decomposition called Ego Graph Neural Network (EGNN), that mimics the graph kernel NSPDK with the distance parameter set to 0. Before applying EGNN we turn unattributed graphs (V,E) into attributed graphs (V,E,X) by annotating their vertices v ∈ V with attributes xv ∈ X . We label vertices v of G with their degree and encode this information into the attributes xv by employing the 1-hot encoding. EGNN decomposes attributed graphs G = (V,E,X) into a 3 level H-hierarchical decomposition with the following strata (see Figure 1 for a pictorial representation of EGNN): • stratum S0 contains objects ov that are in one-to-one correspondence with the vertices v ∈ V . • stratum S1 contains vroot-rooted r-neighborhood subgraphs (i.e. ego graphs) e = (vroot, Ve, Ee) of radius r = 0, 1, . . . , R and has part-of alphabet Π1 = {ROOT, ELEM}. Objects ov ∈ S0 are “ELEM-part-of” ego graph e if v ∈ Ve \ {vroot}, while the are “ROOT-part-of” ego graph e if v = vroot. • stratum S2 contains the graph G that we want to classify and has part-of alphabet Π2 = {0, 1} which correspond to the radius of the ego graphs e ∈ S1 of which G is made of. E1 We experimented with SAEN applying the EGNN H-decomposition on all the datasets. For each dataset, we manually chose the parameters of SAEN, i.e. the number of hidden layers for each stratum, the size of each layer and the maximum radius R. We used the Leaky ReLU (Maas et al.) activation function on all the units. We report the chosen parameters in Table A1 of the appendix. In all our experiments we trained the neural networks by using the Adam algorithm to minimize a cross entropy loss. The classification accuracy of SAEN was measured with 10-times 10-fold cross-validation. We manually chose the number of layers and units for each level of the part-of decomposition; the number of epochs was chosen manually for each dataset and we kept the same value for all the 100 runs of the 10-times 10-fold cross-validation. The mean accuracies and their standard deviations obtained by our method are reported in Table 4, where we compare these results with those obtained by Yanardag & Vishwanathan (2015) and by Niepert et al. (2016). Although our method was conceived for social network data, it can also handle other types of graphs. For the sake of completeness in Table 5 we report the mean accuracies obtained with SAEN on the molecule and protein datasets studied in previous works (e.g. Niepert et al. (2016)). E2 In Table 1 we show the file sizes of the preprocessed datasets before and after the compression together with the data compression ratio. 3 We also estimate the benefit of the relational compression from a computational time point of view and report the measurement of the runtime for 1 run with and without compression together with the speedup factor. For the purpose of this experiment, all tests were run on a computer with two 8-cores Intel Xeon E5-2665 processors and 94 GB RAM. 
Uncompressed datasets that exhausted our server's memory during the test are marked as "OOM" (out of memory) in the table, while those that exceeded the time limit of 100 times the time needed for the uncompressed version are marked as "TO" (timeout).

4.3 DISCUSSION

A1 As shown in Table 4, EGNN performs consistently better than the other two methods on all the social network datasets. This confirms that the chosen H-hierarchical decomposition is effective on this kind of problem. The results for the molecule and protein datasets (see Table 5) are also in line with the current state of the art.

A2 The compression algorithm has proven effective in reducing the computational cost of our method. Most of the datasets improved their runtimes by a factor of at least 4 while maintaining the same expressive power.3 Moreover, the experiments on REDDIT-MULTI5K and REDDIT-MULTI12K were only possible thanks to the size reduction operated by the algorithm, as the script exhausted the memory while executing the training step on the uncompressed files.

3 The sizes of the uncompressed files are shown for the sole purpose of computing the data compression ratio; the latest version of our code compresses the files on the fly.

5 CONCLUSIONS

We proposed SAEN, a novel architecture for learning vector representations of H-decompositions of input graphs. We applied SAEN to graph classification on 6 real-world social network datasets, outperforming the current state of the art on 4 of them and obtaining state-of-the-art classification accuracy on the others. Another important contribution of this paper is the domain compression algorithm, which greatly reduces memory usage and allowed us to speed up training by a factor of at least 4.

APPENDIX: SHIFT AGGREGATE EXTRACT NETWORKS

Francesco Orsini 1,2, Daniele Baracchi 2 and Paolo Frasconi 2
1 Department of Computer Science, Katholieke Universiteit Leuven, Celestijnenlaan 200A, 3001 Heverlee, Belgium
2 Department of Information Engineering, Università degli Studi di Firenze, Via di Santa Marta 3, I-50139 Firenze, Italy

A PARAMETERS USED IN THE EXPERIMENTS WITH EGNN

In Table A1 we report, for each dataset, the radii r of the neighborhood subgraphs used in the EGNN decomposition and the number of units in the hidden layers for each stratum.

Table A1: Parameters for the neural networks used in the experiments.

DATASET          RADII r      HIDDEN UNITS S0   HIDDEN UNITS S1   HIDDEN UNITS S2
COLLAB           0, 1         15-5              5-2               5-3
IMDB-BINARY      0, 1, 2      2                 5-2               5-3-1
IMDB-MULTI       0, 1, 2      2                 5-2               5-3
REDDIT-BINARY    0, 1         10-5              5-2               5-3-1
REDDIT-MULTI5K   0, 1         10                10                6-5
REDDIT-MULTI12K  0, 1         10                10                20-11
MUTAG            0, 1, 2, 3   10                5-5               5-5-1
PTC              0, 1         15                15                15-1
NCI1             0, 1, 2, 3   15                15                15-10-1
PROTEINS         0, 1, 2, 3   3-2               6-5-4             6-3-1
D&D              0, 1, 2, 3   10                5-2               5-3-1
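To make the configuration in Table A1 concrete, the sketch below is a hypothetical reconstruction (not the authors' code) of how the per-stratum networks for the COLLAB row could be instantiated. The input dimensions assume that each stratum consumes the concatenation, over its part-of relations, of the summed representations from the stratum below, as in Eq. 1; Leaky ReLU activations are used as stated in Section 4.2.

```python
import torch.nn as nn

# Hidden-unit configuration in the style of Table A1 (COLLAB row):
# stratum S0: 15-5, stratum S1: 5-2, stratum S2: 5-3.
STRATUM_HIDDEN_UNITS = {0: [15, 5], 1: [5, 2], 2: [5, 3]}

def build_stratum_mlp(in_dim, hidden_units):
    """Stack of Linear + LeakyReLU layers for one stratum."""
    layers, prev = [], in_dim
    for width in hidden_units:
        layers += [nn.Linear(prev, width), nn.LeakyReLU()]
        prev = width
    return nn.Sequential(*layers)

# Assumed input sizes: S0 receives 10-dimensional vertex attributes
# (e.g. one-hot encoded degrees); S1 receives |Pi_1| = 2 relations
# (ROOT, ELEM) times the 5-dim S0 output; S2 receives |Pi_2| = 2 radii
# times the 2-dim S1 output.
f0 = build_stratum_mlp(10, STRATUM_HIDDEN_UNITS[0])
f1 = build_stratum_mlp(2 * 5, STRATUM_HIDDEN_UNITS[1])
f2 = build_stratum_mlp(2 * 2, STRATUM_HIDDEN_UNITS[2])
print(f0, f1, f2, sep="\n")
```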
1. What are the reviewer's concerns regarding the clarity and explanation of the paper's details?
2. How does the reviewer interpret the 'shift' and 'aggregate' operations in the SAEN structure, and why do they think averaging would be more appropriate than summing?
3. Why does the reviewer find the compression technique unclear, and what does it imply about the representations of objects at the same level?
4. What is the reviewer's concern regarding the explanation and reference to the 'ego graph patterns' and 'Ego Graph Neural Network' used in the experiments?
5. Overall, how does the reviewer assess the quality of the paper and its potential for publication?
Review
Review

Some of the key details in this paper are very poorly explained or not even explained at all. The model sounds interesting and there may be something good here, but it should not be published in its current form.

Specific comments:

The description of the R_l,pi convolutions in Section 2.1 was unclear. Specifically, I wasn't confident that I understood what the labels pi represented.

The description of the SAEN structure in Section 2.2 was worded poorly. My understanding, based on Equation 1, is that the 'shift' operation is simply a summation of the representations of the member objects, and that the 'aggregate' operation simply concatenates the representations from multiple relations. In the 'shift' step, it seems more appropriate to average over the object's members' representations h_j, rather than sum over them.

The compression technique presented in Section 2.3 requires that multiple objects at a level have the same representation. Why would this ever occur, given that the representations are real valued and high-dimensional? The text is unintelligible: "two objects are equivalent if they are made by same sets of parts for all the pi-parameterizations of the R_l,pi decomposition relation."

The 'ego graph patterns' in Figure 1 and 'Ego Graph Neural Network' used in the experiments are never explained in the text, and no references are given. Because of this, I cannot comment on the quality of the experiments.
ICLR
Title Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning Abstract We identify an implicit under-parameterization phenomenon in value-based deep RL methods that use bootstrapping: when value functions, approximated using deep neural networks, are trained with gradient descent using iterated regression onto target values generated by previous instances of the value network, more gradient updates decrease the expressivity of the current value network. We characterize this loss of expressivity via a drop in the rank of the learned value network features, and show that this typically corresponds to a performance drop. We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings. We formally analyze this phenomenon and show that it results from a pathological interaction between bootstrapping and gradient-based optimization. We further show that mitigating implicit under-parameterization by controlling rank collapse can improve performance. 1 INTRODUCTION Many pervasive deep reinforcement learning (RL) algorithms estimate value functions using bootstrapping, that is, by sequentially fitting value functions to target value estimates generated from the value function learned in the previous iteration. Despite high-profile achievements (Silver et al., 2017), these algorithms are highly unreliable due to poorly understood optimization issues. Although a number of hypotheses have been proposed to explain these issues (Achiam et al., 2019; Bengio et al., 2020; Fu et al., 2019; Igl et al., 2020; Liu et al., 2018; Kumar et al., 2020a), a complete understanding remains elusive. We identify an “implicit under-parameterization” phenomenon that emerges when value networks are trained using gradient descent combined with bootstrapping. This phenomenon manifests as an excessive aliasing of features learned by the value network across states, which is exacerbated with more gradient updates. While the supervised deep learning literature suggests that some feature aliasing is desirable for generalization (e.g., Gunasekar et al., 2017; Arora et al., 2019), implicit under-parameterization exhibits more pronounced aliasing than in supervised learning. This over-aliasing causes an otherwise expressive value network to implicitly behave as an under-parameterized network, often resulting in poor performance. Implicit under-parameterization becomes aggravated when the rate of data re-use is increased, restricting the sample efficiency of deep RL methods. In online RL, increasing the number of gradient steps in between data collection steps for data-efficient RL (Fu et al., 2019; Fedus et al., 2020b) causes the problem to emerge more frequently. In the extreme case when no additional data is collected, referred to as offline RL (Lange et al., 2012; Agarwal et al., 2020; Levine et al., 2020), implicit under-parameterization manifests consistently, limiting the viability of offline methods. We demonstrate the existence of implicit under-parameterization in common value-based deep RL methods, including Q-learning (Mnih et al., 2015; Hessel et al., 2018) and actor-critic (Haarnoja et al., 2018), as well as neural fitted-Q iteration (Riedmiller, 2005; Ernst et al., 2005). To isolate the issue, we study the effective rank of the features in the penultimate layer of the value network (Section 3). We observe that after an initial learning period, the rank of the learned features drops steeply. 
As the rank decreases, the ability of the features to fit subsequent target values and the optimal value function generally deteriorates and results in a sharp decrease in performance (Section 3.1). ∗Equal Contribution. Correspondence to Aviral Kumar < [email protected] > and Rishabh Agarwal < [email protected] >. To better understand the emergence of implicit under-parameterization, we formally study the dynamics of Q-learning under two distinct models of neural net behavior (Section 4): kernel regression (Jacot et al., 2018; Mobahi et al., 2020) and deep linear networks (Arora et al., 2018). We corroborate the existence of this phenomenon in both models, and show that implicit underparameterization stems from a pathological interaction between bootstrapping and the implicit regularization of gradient descent. Since value networks are trained to regress towards targets generated by a previous version of the same model, this leads to a sequence of value networks of potentially decreasing expressivity, which can result in degenerate behavior and a drop in performance. The main contribution of this work is the identification of implicit under-parameterization in deep RL methods that use bootstrapping. Empirically, we demonstrate a collapse in the rank of the learned features during training, and show it typically corresponds to a drop in performance in the Atari (Bellemare et al., 2013) and continuous control Gym (Brockman et al., 2016) benchmarks in both the offline and data-efficient online RL settings. We verify the emergence of this phenomenon theoretically and characterize settings where implicit under-parameterization can emerge. We then show that mitigating this phenomenon via a simple penalty on the singular values of the learned features improves performance of value-based RL methods in the offline setting on Atari. 2 PRELIMINARIES The goal in RL is to maximize long-term discounted reward in a Markov decision process (MDP), defined as a tuple (S,A, R, P, γ) (Puterman, 1994), with state space S, action space A, a reward function R(s,a), transition dynamics P (s′|s,a) and a discount factor γ ∈ [0, 1). The Q-function Qπ(s,a) for a policy π(a|s), is the expected long-term discounted reward obtained by executing action a at state s and following π(a|s) thereafter, Qπ(s,a) := E [ ∑∞ t=0 γ tR(st,at)]. Qπ(s,a) is the fixed point of the Bellman operator T π , ∀s,a: T πQ(s,a) := R(s,a) + γEs′∼P (·|s,a),a′∼π(·|s′) [Q(s′,a′)], which can be written in vector form as: Qπ = R + γPπQπ . The optimal Q-function, Q∗(s,a), is the fixed point of the Bellman optimality operator T : T Q(s,a) := R(s,a) + γEs′∼P (·|s,a) [maxa′ Q(s′,a′)]. Practical Q-learning methods (e.g., Mnih et al., 2015; Hessel et al., 2018; Haarnoja et al., 2018) convert the Bellman equation into an bootstrapping-based objective for training a Q-network, Qθ, via gradient descent. This objective, known as mean-squared temporal difference (TD) error, is given by: L(θ) = ∑ s,a ( R(s,a) + γQ̄θ(s ′,a′)−Q(s,a) )2 , where Q̄θ is a delayed copy of the Q-function, typically referred to as the target network. These methods train Q-networks via gradient descent and slowly update the target network via Polyak averaging on its parameters. We refer the output of the penultimate layer of the deep Q-network as the learned feature matrix Φ, such that Q(s,a) = wTΦ(s,a), where w ∈ Rd and Φ ∈ R|S||A|×d. Algorithm 1 Fitted Q-Iteration (FQI) 1: Initialize Q-network Qθ , buffer µ. 2: for fitting iteration k in {1, . . . 
, N} do 3: Compute Qθ(s,a) and target values yk(s,a) = r + γmaxa′ Qk−1(s ′,a′) on {(s,a)} ∼ µ for training 4: Minimize TD error for Qθ via t = 1, · · · , T gradient descent updates, minθ (Qθ(s,a)− yk)2 5: end for For simplicity of analysis, we abstract deep Q-learning methods into a generic fitted Q-iteration (FQI) framework (Ernst et al., 2005). We refer to FQI with neural nets as neural FQI (Riedmiller, 2005). In the k-th fitting iteration, FQI trains the Q-function, Qk, to match the target values, yk = R+γPπQk−1 generated using previous Q-function, Qk−1 (Algorithm 1). Practical methods can be instantiated as variants of FQI, with different target update styles, different optimizers, etc. 3 IMPLICIT UNDER-PARAMETERIZATION IN DEEP Q-LEARNING In this section, we empirically demonstrate the existence of implicit under-parameterization in deep RL methods that use bootstrapping. We characterize implicit under-parameterization in terms of the effective rank (Yang et al., 2019) of the features learned by a Q-network. The effective rank of the feature matrix Φ, for a threshold δ (we choose δ = 0.01), denoted as srankδ(Φ), is given by srankδ(Φ) = min { k : ∑k i=1 σi(Φ)∑d i=1 σi(Φ) ≥ 1− δ } , where {σi(Φ)} are the singular values of Φ in decreasing order, i.e., σ1 ≥ · · · ≥ σd ≥ 0. Intuitively, srankδ(Φ) represents the number of “effective” unique components of the feature matrix Φ that form the basis for linearly approximating the Qvalues. When the network maps different states to orthogonal feature vectors, then srankδ(Φ) has high values close to d. When the network “aliases” state-action pairs by mapping them to a smaller subspace, Φ has only a few active singular directions, and srankδ(Φ) takes on a small value. Definition 1. Implicit under-parameterization refers to a reduction in the effective rank of the features, srankδ(Φ), that occurs implicitly as a by-product of learning deep neural network Q-functions. While rank decrease also occurs in supervised learning, it is usually beneficial for obtaining generalizable solutions (Gunasekar et al., 2017; Arora et al., 2019). However, we will show that in deep Q-learning, an interaction between bootstrapping and gradient descent can lead to more aggressive rank reduction (or rank collapse), which can hurt performance. Experimental setup. To study implicit under-parameterization empirically, we compute srankδ(Φ) on a minibatch of state-action pairs sampled i.i.d. from the training data (i.e., the dataset in the offline setting, and the replay buffer in the online setting). We investigate offline and online RL settings on benchmarks including Atari games (Bellemare et al., 2013) and Gym environments (Brockman et al., 2016). We also utilize gridworlds described by Fu et al. (2019) to compare the learned Q-function against the oracle solution computed using tabular value iteration. We evaluate DQN (Mnih et al., 2015) on gridworld and Atari and SAC (Haarnoja et al., 2018) on Gym domains. Data-efficient offline RL. In offline RL, our goal is to learn effective policies by performing Qlearning on a fixed dataset of transitions. We investigate the presence of rank collapse when deep Q-learning is used with broad state coverage offline datasets from Agarwal et al. (2020). In the top row of Figure 2, we show that after an initial learning period, srankδ(Φ) decreases in all domains (Atari, Gym and the gridworld). 
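For concreteness, the effective-rank quantity plotted in these figures can be computed directly from a minibatch of penultimate-layer features; the following NumPy sketch (illustrative, not the authors' code) implements srankδ with the paper's threshold δ = 0.01.

```python
import numpy as np

def effective_rank(features, delta=0.01):
    """srank_delta(Phi): number of leading singular values needed to
    capture a (1 - delta) fraction of the total singular-value mass."""
    # features: (batch_size, d) matrix of penultimate-layer outputs Phi.
    singular_values = np.linalg.svd(features, compute_uv=False)  # descending order
    cumulative_mass = np.cumsum(singular_values) / np.sum(singular_values)
    idx = int(np.searchsorted(cumulative_mass, 1.0 - delta))
    return min(idx + 1, len(singular_values))

# Example: a nearly rank-1 feature matrix has a collapsed effective rank.
rng = np.random.default_rng(0)
collapsed = rng.normal(size=(256, 1)) @ rng.normal(size=(1, 512))
collapsed += 1e-4 * rng.normal(size=(256, 512))
print(effective_rank(collapsed))                     # ~1 (rank-collapsed features)
print(effective_rank(rng.normal(size=(256, 512))))   # close to 256 (well-spread spectrum)
```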
The final value of srankδ(Φ) is often quite small – e.g., in Atari, only 20-100 singular components are active for 512-dimensional features, implying significant underutilization of network capacity. Since under-parameterization is implicitly induced by the learning process, even high-capacity value networks behave as low-capacity networks as more training is performed with a bootstrapped objective (e.g., mean squared TD error). On the gridworld environment, regressing toQ∗ using supervised regression results in a much higher srankδ(Φ) (black dashed line in Figure 2(left)) than when using neural FQI. On Atari, even when a 4x larger offline dataset with much broader coverage is used (blue line in Figure 2), rank collapse still persists, indicating that implicit under-parameterization is not due to limited offline dataset size. Figure 2 (2nd row) illustrates that policy performance generally deteriorates as srank(Φ) drops, and eventually collapses simultaneously with the rank collapse. While we do not claim that implicit under-parameterization is the only issue in deep Q-learning, the results in Figure 2 show that the emergence of this under-parameterization is strongly associated with poor performance. To prevent confounding from the distribution mismatch between the learned policy and the offline dataset, which often affects the performance of Q-learning methods, we also study CQL (Kumar et al., 2020b), an offline RL algorithm designed to handle distribution mismatch. We find a similar degradation in effective rank and performance for CQL (Figure A.3), implying that underparameterization does not stem from distribution mismatch and arises even when the resulting policy is within the behavior distribution (though the policy may not be exactly pick actions observed in the dataset). We provide more evidence in Atari and Gym domains in Appendix A.1. Data-efficient online RL. Deep Q-learning methods typically use very few gradient updates (n) per environment step (e.g., DQN takes 1 update every 4 steps on Atari, n = 0.25). Improving the sample efficiency of these methods requires increasing n to utilize the replay data more effectively. However, we find that using larger values of n results in higher levels of rank collapse as well as performance degradation. In the top row of Figure 3, we show that larger values of n lead to a more aggressive drop in srankδ(Φ) (red vs. blue/orange lines), and that rank continues to decrease with more training. Furthermore, the bottom row illustrates that larger values of n result in worse performance, corroborating Fu et al. (2019); Fedus et al. (2020b). We find similar results with the Rainbow algorithm (Hessel et al., 2018) (Appendix A.2). As in the offline setting, directly regressing to Q∗ via supervised learning does not cause rank collapse (black line in Figure 3). 3.1 UNDERSTANDING IMPLICIT UNDER-PARAMETERIZATION AND ITS IMPLICATIONS How does implicit under-parameterization degrade performance? Having established the presence of rank collapse in data-efficient RL, we now discuss how it can adversely affect performance. As the effective rank of the network features Φ decreases, so does the network’s ability to fit the subsequent target values, and eventually results in inability to fit Q∗. In the gridworld domain, we measure this loss of expressivity by measuring the error in fitting oracle-computed Q∗ values via a linear transformation of Φ. 
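A minimal sketch of this measurement (illustrative variable names; the oracle Q∗ would come from tabular value iteration on the gridworld) is shown below: fit Q∗ with a linear map on top of the learned features and report the residual.

```python
import numpy as np

def qstar_fit_error(features, q_star):
    """Least-squares error of fitting oracle Q* values with a linear
    map on top of the learned features Phi (one row per state-action pair)."""
    w, *_ = np.linalg.lstsq(features, q_star, rcond=None)  # solve min_w ||Phi w - Q*||^2
    residual = features @ w - q_star
    return np.sqrt(np.mean(residual ** 2))

# Toy illustration: once features collapse to (nearly) rank 1, a target that
# needs more than one direction can no longer be represented.
rng = np.random.default_rng(0)
full_rank_phi = rng.normal(size=(500, 64))
collapsed_phi = np.outer(rng.normal(size=500), rng.normal(size=64))  # rank-1 features
q_star = full_rank_phi @ rng.normal(size=64)   # oracle values expressible from good features
print(qstar_fit_error(full_rank_phi, q_star))  # ~0: expressive features fit Q*
print(qstar_fit_error(collapsed_phi, q_star))  # large: collapsed features cannot
```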
When rank collapse occurs, the error in fitting Q∗ steadily increases during training, and the consequent network is not able to predict Q∗ at all by the end of training (Figure 4a) – this entails a drop in performance. In Atari domains, we do not have access to Q∗, and so we instead measure TD error, that is, the error in fitting the target value estimates, R + γPπQk. In SEAQUEST, as rank decreases, the TD error increases (Figure 4b) and the value function is unable to fit the target values, culminating in a performance plateau (Figure 3). This observation is consistent across other environments; we present further supporting evidence in Appendix A.4. Does bootstrapping cause implicit under-parameterization? We perform a number of controlled experiments in the gridworld and Atari environments to isolate the connection between rank collapse and bootstrapping. We first remove confounding issues of poor network initialization (Fedus et al., 2020a) and non-stationarity (Igl et al., 2020) by showing that rank collapse occurs even when the Q-network is re-initialized from scratch at the start of each fitting iteration (Figure 4c). To show that the problem is not isolated to the control setting, we show evidence of rank collapse in the policy evaluation setting as well. We trained a value network using fitted Q-evaluation for a fixed policy π (i.e., using the Bellman operator T π instead of T ), and found that rank drop still occurs (FQE in Figure 4d). Finally, we show that by removing bootstrapped updates and instead regressing directly to Monte-Carlo (MC) estimates of the value, the effective rank does not collapse (MC Returns in Figure 4d). These results, along with similar findings on other Atari environments (Appendix A.3), our analysis indicates that bootstrapping is at the core of implicit under-parameterization. 4 THEORETICAL ANALYSIS OF IMPLICIT UNDER-PARAMETERIZATION In this section, we formally analyze implicit under-parameterization and prove that training neural networks with bootstrapping reduces the effective rank of the Q-network, corroborating the empirical observations in the previous section. We focus on policy evaluation (Figure 4d and Figure A.9), where we aim to learn a Q-function that satisfies Q = R+γPπQ for a fixed π, for ease of analysis. We also presume a fixed dataset of transitions, D, to learn the Q-function. 4.1 ANALYSIS VIA KERNEL REGRESSION We first study bootstrapping with neural networks through a mathematical abstraction that treats the Q-network as a kernel machine, following the neural tangent kernel (NTK) formalism (Jacot et al., 2018). Building on prior analysis of self-distillation (Mobahi et al., 2020), we assume that each iteration of bootstrapping, the Q-function optimizes the squared TD error to target labels yk with a kernel regularizer. This regularizer captures the inductive bias from gradient-based optimization of TD error and resembles the regularization imposed by gradient descent under NTK (Mobahi et al., 2020). The error is computed on (si,ai) ∈ D whereas the regularization imposed by a universal kernel u with a coefficient of c ≥ 0 is applied to the Q-values at all state-action pairs as shown in Equation 1. We consider a setting c > 0 for all rounds of bootstrapping, which corresponds to the solution obtained by performing gradient descent on TD error for a small number of iterations with early stopping in each round (Suggala et al., 2018) and thus, resembles how the updates in Algorithm 1 are typically implemented in practice. 
Qk+1 ← arg min Q∈Q ∑ si,ai∈D (Q(si,ai)− yk(si,ai))2 + c ∑ (s,a) ∑ (s′,a′) u((s,a), (s′,a′))Q(s,a)Q(s′,a′). (1) The solution to Equation 1 can be expressed as Qk+1(s,a) = gT(s,a)(cI + G) −1yk, where G is the Gram matrix for a special positive-definite kernel (Duffy, 2015) and g(s,a) denotes the row of G corresponding to the input (s,a) (Mobahi et al., 2020, Proposition 1). A detailed proof is in Appendix C. When combined with the fitted Q-iteration recursion, setting labels yk = R + γPπQk−1, we recover a recurrence that relates subsequent value function iterates Qk+1 = G(cI + G) −1yk = G(cI + G) −1︸ ︷︷ ︸ A [R + γPπQk] = A (∑k i=1 γ k−i (PπA) k−i ) R := AMkR. (2) Equation 2 follows from unrolling the recurrence and setting the algorithm-agnostic initial Q-value, Q0, to be 0. We now show that the sparsity of singular values of the matrix Mk generally increases over fitting iterations, implying that the effective rank of Mk diminishes with more iterations. For this result, we assume that the matrix S is normal, i.e., the norm of the (complex) eigenvalues of S is equal to its singular values. We will discuss how this assumption can be relaxed in Appendix A.7. Theorem 4.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)∞l=1 starting from k1 = 0, such that, for any two singular-values σi(S) and σj(S) of S with σi(S) < σj(S), ∀ l ∈ N and l′ ≥ l, σi(Mkl′ ) σj(Mkl′ ) < σi(Mkl) σj(Mkl) ≤ σi(S) σj(S) . (3) Hence, srankδ(Mkl′ ) ≤ srankδ(Mkl). Moreover, if S is positive semi-definite, then (kl) ∞ l=1 = N, i.e., srank continuously decreases in each fitting iteration. We provide a proof of the theorem above as well as present a stronger variant that shows a gradual decrease in the effective rank for fitting iterations outside this infinite sequence in Appendix C. As k increases along the sequence of iterations given by k = (kl)∞l=1, the effective rank of the matrix Mk drops, leading to low expressivity of this matrix. Since Mk linearly maps rewards to the Qfunction (Equation 2), drop in expressivity results of Mk in the inability to model the actual Qπ . Summary of our analysis. Our analysis of bootstrapping and gradient descent from the view of regularized kernel regression suggests that rank drop happens with more training (i.e., with more rounds of bootstrapping). In contrast to self-distillation (Mobahi et al., 2020), rank drop may not happen in every iteration (and rank may increase between two consecutive iterations occasionally), but srankδ exhibits a generally decreasing trend. 4.2 ANALYSIS WITH DEEP LINEAR NETWORKS UNDER GRADIENT DESCENT While Section 4.1 demonstrates rank collapse will occur in a kernel-regression model of Q-learning, it does not illustrate when the rank collapse occurs. To better specify a point in training when rank collapse emerges, we present a complementary derivation for the case when the Q-function is represented as a deep linear neural network (Arora et al., 2019), which is a widely-studied setting for analyzing implicit regularization of gradient descent in supervised learning (Gunasekar et al., 2017; 2018; Arora et al., 2018; 2019). Our analysis will show that rank collapse can emerge as the generated target values begin to approach the previous value estimate, in particular, when in the vicinity of the optimal Q-function. Proof strategy. 
Our proof consists of two steps: (1) we show that the effective rank of the feature matrix decreases within one fitting iteration (for a given target value) due to the low-rank affinity of gradient descent, and (2) we show that this effective rank drop is "compounded" as we train using a bootstrapped objective. Proposition 4.1 explains (1), and Proposition 4.2, Theorem 4.2 and Appendix D.2 discuss (2). Additional notation and assumptions. We represent the Q-function as a deep linear network with at least 3 layers, such that $Q(s,a) = W_N W_\phi [s;a]$, where $N \geq 3$, $W_N \in \mathbb{R}^{1 \times d_{N-1}}$ and $W_\phi = W_{N-1} W_{N-2} \cdots W_1$ with $W_i \in \mathbb{R}^{d_i \times d_{i-1}}$ for $i = 1, \ldots, N-1$. $W_\phi$ maps an input $[s;a]$ to the corresponding penultimate-layer features $\Phi(s,a)$. Let $W_j(k,t)$ denote the weight matrix $W_j$ at the $t$-th step of gradient descent during the $k$-th fitting iteration (Algorithm 1). We define $W_{k,t} = W_N(k,t) W_\phi(k,t)$ and $L_{N,k+1}(W_{k,t})$ as the TD error objective in the $k$-th fitting iteration. We study $\text{srank}_\delta(W_\phi(k,t))$ since the rank of the features $\Phi = W_\phi(k,t)[S,A]$ is equal to the rank of $W_\phi(k,t)$ provided the state-action inputs have high rank. We assume that the evolution of the weights is governed by a continuous-time differential equation (Arora et al., 2018) within each fitting iteration k. To simplify analysis, we also assume that all except the last-layer weights follow a "balancedness" property (Equation D.4), which suggests that the weight matrices in the consecutive layers in the deep linear network share the same singular values (but with different permutations). However, note that we do not assume balancedness for the last layer, which would trivially lead to rank-1 features, making our analysis strictly more general than conventionally studied deep linear networks. In this model, we can characterize the evolution of the singular values of the feature matrix $W_\phi(k,t)$, using techniques analogous to Arora et al. (2019): Proposition 4.1. The singular values of the feature matrix $W_\phi(k,t)$ evolve according to: $$\dot{\sigma}_r(k,t) = -N \cdot \left(\sigma_r^2(k,t)\right)^{1-\frac{1}{N-1}} \cdot \left\langle W_N(k,t)^T \frac{dL_{N,k+1}(W_{k,t})}{dW},\; u_r(k,t)\, v_r(k,t)^T \right\rangle, \quad (4)$$ for $r = 1, \cdots, \min_{i=1}^{N-1} d_i$, where $u_r(k,t)$ and $v_r(k,t)$ denote the left and right singular vectors of the feature matrix $W_\phi(k,t)$, respectively. [Figure: evolution of the singular values of $W_\phi$ on SEAQUEST; singular values $\sigma_{\max}, \sigma_2, \sigma_3, \sigma_{10}, \sigma_{100}$ (log scale) plotted against gradient updates.] Solving the differential equation (4) indicates that larger singular values will evolve at an exponentially faster rate than smaller singular values (as we also formally show in Appendix D.1) and the difference in their magnitudes increases disproportionately with increasing t. This behavior also occurs empirically, as illustrated in the figure above (also see Figure D.1), where larger singular values are orders of magnitude larger than smaller singular values. Hence, the effective rank, $\text{srank}_\delta(W_\phi(k,t))$, will decrease with more gradient steps within a fitting iteration k. Abstract optimization problem for the low-rank solution. Building on Proposition 4.1, we note that the final solution obtained in a bootstrapping round (i.e., fitting iteration) can be equivalently expressed as the solution that minimizes a weighted sum of the TD error and a data-dependent implicit regularizer $h_D(W_\phi, W_N)$ that encourages disproportionate singular values of $W_\phi$, and hence, a low effective rank of $W_\phi$.
While the actual form for h is unknown, to facilitate our analysis of bootstrapping, we make a simplification and express this solution as the minimum of Equation 5. min Wφ,WN∈M ||WNWφ[s;a]− yk(s,a)||2 + λksrankδ(Wφ). (5) Note that the entire optimization path may not correspond to the objective in Equation 5, but the Equation 5 represents the final solution of a given fitting iteration. M denotes the set of constraints that WN obtained via gradient optimization of TD error must satisfy, however we do not need to explicitly quantifyM in our analysis. λk is a constant that denotes the strength of rank regularization. Since srankδ is always regularized, our analysis assumes that λk > 0 (see Appendix D.1). Rank drop within a fitting iteration “compounds” due to bootstrapping. In the RL setting, the target values are given by yk(s,a) = r(s,a) + γPπQk−1(s,a). First note that when r(s,a) = 0 and Pπ = I, i.e., when the bootstrapping update resembles self-regression, we first note that just “copying over weights” from iteration k− 1 to iteration k is a feasible point for solving Equation 5, which attains zero TD error with no increase in srankδ . A better solution to Equation 5 can thus be obtained by incurring non-zero TD error at the benefit of a decreased srank, indicating that in this setting, srankδ(Wφ) drops in each fitting iteration, leading to a compounding rank drop effect. We next extend this analysis to the full bootstrapping setting. Unlike the self-training setting, yk(s,a) is not directly expressible as a function of the previous Wφ(k, T ) due to additional reward and dynamics transformations. Assuming closure of the function class (Assumption D.1) under the Bellman update (Munos & Szepesvári, 2008; Chen & Jiang, 2019), we reason about the compounding effect of rank drop across iterations in Proposition 4.2 (proof in Appendix D.2). Specifically, srankδ can increase in each fitting iteration due to R and Pπ transformations, but will decrease due to low rank preference of gradient descent. This change in rank then compounds as shown below. Proposition 4.2. Assume that the Q-function is initialized to Wφ(0) and WN (0). Let the Q-function class be closed under the backup, i.e., ∃WPN ,WPφ , s.t. (R + γPπQk−1) T = WPN (k)W P φ (k)[S;A]T , and assume that the change in srank due to dynamics and reward transformations is bounded: srankδ(WPφ (k)) ≤ srankδ(Wφ(k − 1)) + ck. Then, srankδ(Wφ(k)) ≤ srankδ(Wφ(0)) + k∑ j=1 cj − k∑ j=1 ||Qj − yj || λj . Proposition 4.2 provides a bound on the value of srank after k rounds of bootstrapping. srank decreases in each iteration due to non-zero TD errors, but potentially increases due to reward and bootstrapping transformations. To instantiate a concrete case where rank clearly collapses, we investigate ck as the value function gets closer to the Bellman fixed point, which is a favourable initialization for the Q-function in Theorem 4.2. In this case, the learning dynamics begins to resemble the self-training regime, as the target values approach the previous value iterate yk ≈ Qk−1, and thus, as we show next, the potential increase in srank (ck in Proposition 4.2) converges to 0. Theorem 4.2. Suppose target values yk = R+γPπQk−1 are close to the previous value estimate Qk−1, i.e. ∀ s,a, yk(s,a) = Qk−1(s,a)+ε(s,a), with |ε(s,a)| |Qk−1(s,a)|. Then, there is a constant 0 depending upon WN and Wφ, such that for all ‖ε‖ < ε0, ck = 0. Thus, srank decreases in iteration k: srankδ(Wφ(k)) ≤ srankδ(Wφ(k − 1))− ||Qk − yk||/λk. 
We provide a complete form, including the expression for 0 and a proof in Appendix D.3. To empirically show the consequence of Theorem 4.2 that a decrease in srankδ(Wφ) values can lead to an increase in the distance to the fixed point in a neighborhood around the fixed point, we performed a controlled experiment on a deep linear net shown in Figure 5 that measures the relationship between of srankδ(Φ) and the error to the projected TD fixed point |Q −Q∗|. Note that a drop in srankδ(Φ) corresponds to a increased value of |Q−Q∗| indicating that rank drop when Q get close to a fixed point can affect convergence to it. 5 MITIGATING UNDER-PARAMETRIZATION IMPROVES DEEP Q-LEARNING We now show that mitigating implicit under-parameterization by preventing rank collapse can improve performance. We place special emphasis on the offline RL setting in this section, since it is particularly vulnerable to the adverse effects of rank collapse. We devise a penalty (or a regularizer) Lp(Φ) that encourages higher effective rank of the learned features, srankδ(Φ), to prevent rank collapse. The effective rank function srankδ(Φ) is non-differentiable, so we choose a simple surrogate that can be optimized over deep networks. Since effective rank is maximized when the magnitude of the singular values is roughly balanced, one way to increase effective rank is to minimize the largest singular value of Φ, σmax(Φ), while simultaneously maximizing the smallest singular value, σmin(Φ). We construct a simple penalty Lp(Φ) derived from this intuition, given by: Lp(Φ) = σ2max(Φ)− σ2min(Φ). (6) Lp(Φ) can be computed by invoking the singular value decomposition subroutines in standard automatic differentiation frameworks (Abadi et al., 2016; Paszke et al., 2019). We estimate the singular values over the feature matrix computed over a minibatch, and add the resulting value of Lp as a penalty to the TD error objective, with a tradeoff factor α = 0.001. Does Lp(Φ) address rank collapse? We first verify whether controlling the minimum and maximum singular values using Lp(Φ) actually prevents rank collapse. When using this penalty on the gridworld problem (Figure 6a), the effective rank does not collapse, instead gradually decreasing at the onset and then plateauing, akin to the evolution of effective rank in supervised learning. In Figure 6b, we plot the evolution of effective rank on two Atari games in the offline setting (all games in Appendix A.5), and observe that using Lp also generally leads to increasing rank values. Does mitigating rank collapse improve performance? We now evaluate the performance of the penalty using DQN (Mnih et al., 2015) and CQL (Kumar et al., 2020b) on Atari dataset from Agarwal et al. (2020) (5% replay data), used in Section 3. Figure 7 summarizes the relative improvement from using the penalty for 16 Atari games. Adding the penalty to DQN improves performance on all 16/16 games with a median improvement of 74.5%; adding it to CQL, a state-of-the-art offline algorithm, improves performance on 11/16 games with median improvement of 14.1%. Prior work has discussed that standard Q-learning methods designed for the online setting, such as DQN, are generally ineffective with small offline datasets (Kumar et al., 2020b; Agarwal et al., 2020). 
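For reference, the penalty in Equation 6 and its combination with the TD objective can be sketched in a few lines of PyTorch; this is an illustrative reconstruction (variable names and the surrounding training loop are assumptions, not the authors' released code), using the paper's tradeoff factor α = 0.001.

```python
import torch

def srank_penalty(features):
    """L_p(Phi) = sigma_max(Phi)^2 - sigma_min(Phi)^2, estimated on a minibatch
    of penultimate-layer features (shape: batch_size x feature_dim)."""
    singular_values = torch.linalg.svdvals(features)  # returned in descending order
    return singular_values[0] ** 2 - singular_values[-1] ** 2

def regularized_td_loss(q_pred, td_target, features, alpha=1e-3):
    """Mean-squared TD error plus the effective-rank penalty (alpha = 0.001 in the paper)."""
    # td_target is treated as fixed, as it comes from the target network.
    td_error = ((q_pred - td_target.detach()) ** 2).mean()
    return td_error + alpha * srank_penalty(features)
```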
Our results show that mitigating rank collapse makes even such simple methods substantially more effective in this setting, suggesting that rank collapse and the resulting implicit under-parameterization may be a crucial piece of the puzzle in explaining the challenges of offline RL.

Figure 7: DQN and CQL with Lp(Φ) penalty vs. their standard counterparts in the 5% offline setting on Atari from Section 3. Lp improves DQN on 16/16 and CQL on 11/16 games. [Bar charts of per-game % improvement (log scale) for DQN and CQL with the penalty omitted here.]

We also evaluated the regularizer Lp(Φ) in the data-efficient online RL setting, with results in Appendix A.6. This variant achieved a median improvement of 20.6% in performance with Rainbow (Hessel et al., 2018), but performed poorly with DQN, where it reduced median performance by 11.5%. Thus, while our proposed penalty is effective in many cases in offline and online settings, it does not solve the problem fully, i.e., it does not address the root cause of implicit under-parameterization and only addresses a symptom, and a more sophisticated solution may better prevent the issues with implicit under-parameterization. Nevertheless, our results suggest that mitigation of implicit under-parameterization can improve performance of data-efficient RL.

6 RELATED WORK

Prior work has extensively studied the learning dynamics of Q-learning with tabular and linear function approximation, to study error propagation (Munos, 2003; Farahmand et al., 2010) and to prevent divergence (De Farias, 2002; Maei et al., 2009; Sutton et al., 2009; Dai et al., 2018), as opposed to deep Q-learning analyzed in this work. Q-learning has been shown to have favorable optimization properties with certain classes of features (Ghosh & Bellemare, 2020), but our work shows that the features learned by a neural net when minimizing TD error do not enjoy such guarantees, and instead suffer from rank collapse. Recent theoretical analyses of deep Q-learning have shown convergence under restrictive assumptions (Yang et al., 2020; Cai et al., 2019; Zhang et al., 2020; Xu & Gu, 2019), but Theorem 4.2 shows that implicit under-parameterization appears when the estimates of the value function approach the optimum, potentially preventing convergence. Xu et al. (2005; 2007) present variants of LSTD (Boyan, 1999), which model the Q-function as a kernel machine but do not take into account the regularization from gradient descent, as done in Equation 1, which is essential for implicit under-parameterization. Igl et al. (2020); Fedus et al. (2020a) argue that non-stationarity arising from distribution shift hinders generalization and recommend periodic network re-initialization. Under-parameterization is not caused by this distribution shift, and we find that network re-initialization does little to prevent rank collapse (Figure 4c). Luo et al. (2020) propose a regularization similar to ours, but in a different setting, finding that more expressive features increase the performance of on-policy RL methods.
Finally, Yang et al. (2019) study the effective rank of the Q∗-values when expressed as a |S| × |A| matrix in online RL and find that low ranks for this Q∗-matrix are preferable. We analyze a fundamentally different object: the learned features (and illustrate that a rank-collapse of features can hurt), not the Q∗-matrix, whose rank is upper-bounded by the number of actions (e.g., 24 for Atari). 7 DISCUSSION We identified an implicit under-parameterization phenomenon in deep RL algorithms that use bootstrapping, where gradient-based optimization of a bootstrapped objective can lead to a reduction in the expressive power of the value network. This effect manifests as a collapse of the rank of the features learned by the value network, causing aliasing across states and often leading to poor performance. Our analysis reveals that this phenomenon is caused by the implicit regularization due to gradient descent on bootstrapped objectives. We observed that mitigating this problem by means of a simple regularization scheme improves performance of deep Q-learning methods. While our proposed regularization provides some improvement, devising better mitigation strategies for implicit under-parameterization remains an exciting direction for future work. Our method explicitly attempts to prevent rank collapse, but relies on the emergence of useful features solely through the bootstrapped signal. An alternative path may be to develop new auxiliary losses (e.g., Jaderberg et al., 2016) that learn useful features while passively preventing underparameterization. More broadly, understanding the effects of neural nets and associated factors such as initialization, choice of optimizer, etc. on the learning dynamics of deep RL algorithms, using tools from deep learning theory, is likely to be key towards developing robust and data-efficient deep RL algorithms. ACKNOWLEDGEMENTS We thank Lihong Li, Aaron Courville, Aurick Zhou, Abhishek Gupta, George Tucker, Ofir Nachum, Wesley Chung, Emmanuel Bengio, Zafarali Ahmed, and Jacob Buckman for feedback on an earlier version of this paper. We thank Hossein Mobahi for insightful discussions about self-distillation and Hanie Sedghi for insightful discussions about implicit regularization and generalization in deep networks. We additionally thank Michael Janner, Aaron Courville, Dale Schuurmans and Marc Bellemare for helpful discussions. AK was partly funded by the DARPA Assured Autonomy program, and DG was supported by a NSF graduate fellowship and compute support from Amazon. Appendices A ADDITIONAL EVIDENCE FOR IMPLICIT UNDER-PARAMETERIZATION In this section, we present additional evidence that demonstrates the existence of the implicit underparameterization phenomenon from Section 3. In all cases, we plot the values of srankδ(Φ) computed on a batch size of 2048 i.i.d. sampled transitions from the dataset. DQN (4x data) A.1 OFFLINE RL A.2 DATA EFFICIENT ONLINE RL In the data-efficient online RL setting, we verify the presence of implicit under-parameterization on both DQN and Rainbow (Hessel et al., 2018) algorithms when larger number of gradient updates are made per environment step. In these settings we find that more gradient updates per environment step lead to a larger decrease in effective rank, whereas effective rank can increase when the amount of data re-use is reduced by taking fewer gradient steps. A.3 DOES BOOTSTRAPPING CAUSE IMPLICIT UNDER-PARAMETERIZATION? 
In this section, we provide additional evidence to support our claim from Section 3 that suggests that bootstrapping-based updates are a key component behind the existence of implicit underparameterization. To do so, we empirically demonstrate the following points empirically: MC returns • For the final point in this section, we verify if the non-stationarity of the policy in the Qlearning (control) setting, i.e., when the Bellman optimality operator is used is not a reason behind the emergence of implicit under-parameterization. The non-stationary policy in a control setting causes the targets to change and, as a consequence, leads to non-zero errors. However, rank drop is primarily caused by bootstrapping rather than non-stationarity of the control objective. To illustrate this, we ran an experiment in the control setting on Gridworld, regressing to the target computed using the true value function Qπ for the current policy π (computed using tabular Q-evaluation) instead of using the bootstrap TD estimate. The results, shown in figure A.11a, show that the srankδ doesn’t decrease significantly when regressing to true control values and infact increases with more iterations as compared to Figure 6a where rank drops with bootstrapping. This experiment, alongside with experiments discussed above, ablating bootstrapping in the stationary policy evaluation setting shows that rank-deficiency is caused due to bootstrapping. A.4 HOW DOES IMPLICIT REGULARIZATION INHIBIT DATA-EFFICIENT RL? Implicit under-parameterization leads to a trade-off between minimizing the TD error vs. encouraging low rank features as shown in Figure 4b. This trade-off often results in decrease in effective rank, at the expense of increase in TD error, resulting in lower performance. Here we present additional evidence to support this. Figure A.11b shows a gridworld problem with one-hot features, which naturally leads to reduced state-aliasing. In this setting, we find that the amount of rank drop with respect to the supervised projection of oracle computed Q∗ values is quite small and the regression error to Q∗ actually decreases unlike the case in Figure 4a, where it remains same or even increases. The method is able to learn policies that attain good performance as well. Hence, this justifies that when there’s very little rank drop, for example, 5 rank units in the example on the right, FQI methods are generally able to learn Φ that is able to fit Q∗. This provides evidence showing that typical Q-networks learn Φ that can fit the optimal Q-function when rank collapse does not occur. In Atari, we do not have access to Q∗, and so we instead measure the error in fitting the target value estimates, R + γPπQk. As rank decreases, the TD error increases (Figure A.12) and the value function is unable to fit the target values, culminating in a performance plateau (Figure A.6). A.5 TRENDS IN VALUES OF EFFECTIVE RANK WITH PENALTY. In this section, we present the trend in the values of the effective rank when the penalty Lp(Φ) is added. In each plot below, we present the value of srankδ(Φ) with and without penalty respectively. A.5.1 OFFLINE RL: DQN A.5.2 OFFLINE RL: CQL WITH Lp(Φ) PENALTY A.6 DATA-EFFICIENT ONLINE RL: RAINBOW A.6.1 RAINBOW WITH Lp(Φ) PENALTY: RANK PLOTS A.6.2 RAINBOW WITH Lp(Φ) PENALTY: PERFORMANCE In this section, we present additional results for supporting the hypothesis that preventing rank-collapse leads to better performance. 
In the first set of experiments, we apply the proposed Lp penalty to Rainbow in the data-efficient online RL setting (n = 4). In the second set of experiments, we present evidence for prevention of rank collapse by comparing rank values for different runs. As we will show in the next section, the state-of-the-art Rainbow (Hessel et al., 2018) algorithm also suffers form rank collapse in the data-efficient online RL setting when more updates are performed per gradient step. In this section, we applied our penalty Lp to Rainbow with n = 4, and obtained a median 20.66% improvement on top of the base method. This result is summarized below. A.7 RELAXING THE NORMALITY ASSUMPTION IN THEOREM 4.1 We can relax the normality assumption on S in Theorem 4.1. An analogous statement holds for non-normal matrices S for a slightly different notion of effective rank, denoted as srankδ,λ(Mk), that utilizes eigenvalue norms instead of singular values. Formally, let λ1(Mk), · · · , λ2(Mk), · · · be the (complex) eigenvalues of Mk arranged in decreasing order of their norms, i.e., , |λ1(Mk)| ≥ |λ2(Mk)| ≥ · · · , then, srankδ,λ(Mk) = min { k : ∑k i=1 |λi(Mk)|∑d i=1 |λi(Mk)| ≥ 1− δ } . A statement essentially analogous to Theorem 4.1 suggests that in this general case, srankδ,λ(Mk) decreases for all (complex) diagonalizable matrices S, which is the set of almost all matrices of size dim(S). Now, if S is approximately normal, i.e., when |σi(S)− |λi(S)|| is small, then the result in Theorem 4.1 also holds approximately as we discuss at the end of Appendix C. We now provide empirical evidence showing that the trend in the values of effective rank computed using singular values, srankδ(Φ) is almost identical to the trend in the effective rank computed using normalized eigenvalues, srankδ,λ(Φ). Since eigenvalues are only defined for a square matrix Φ, in practice, we use a batch of d = dim(φ(s,a)) state-action pairs for computing the eigenvalue rank and compare to the corresponding singular value rank in Figures A.20 and A.21. Connection to Theorem 4.1. We computed the effective rank of Φ instead of S, since S is a theoretical abstraction that cannot be computed in practice as it depends on the Green’s kernel (Duffy, 2015) obtained by assuming that the neural network behaves as a kernel regressor. Instead, we compare the different notions of ranks of Φ since Φ is the practical counterpart for the matrix, S, when using neural networks (as also indicated by the analysis in Section 4.2). In fact, on the gridworld (Figure A.21), we experiment with a feature Φ with dimension equal to the number of state-action pairs, i.e., dim(φ(s,a)) = |S||A|, with the same number of parameters as a kernel parameterization of the Q-function: Q(s,a) = ∑ s′,a′ w(s ′,a′)k(s,a, s′,a′). This can also be considered as performing gradient descent on a “wide” linear network , and we measure the feature rank while observing similar rank trends. Since we do not require the assumption that S is normal in Theorem 4.1 to obtain a decreasing trend in srankδ,λ(Φ), and we find that in practical scenarios (Figures A.20 and A.21), srankδ(Φ) ≈ srankδ,λ(Φ) with an extremely similar qualitative trend we believe that Theorem 4.1 still explains the rank-collapse practically observed in deep Q-learning and is not vacuous. A.8 NORMALIZED PLOTS FOR FIGURE 3/ FIGURE A.6 In this section, we provide a set of normalized srank and performance trends for Atari games (the corresponding unnormalized plots are found in Figure A.6). 
In these plots, each unit on the x-axis is equivalent to one gradient update, and so since n = 8 prescribes 8× many updates as compared to n = 1, it it runs for 8× as long as n = 1. These plots are in Figure A.22. Note that the trend that effective rank decreases with larger n values also persists when rescaling the x-axis to account for the number of gradient steps, in all but one game. This is expected since it tells us that performing bootstrapping based updates in the data-efficient setting (larger n values) still leads to more aggressive rank drop as updates are being performed on a relatively more static dataset for larger values of n. B HYPERPARAMETERS & EXPERIMENT DETAILS B.1 ATARI EXPERIMENTS We follow the experiment protocol from Agarwal et al. (2020) for all our experiments including hyperparameters and agent architectures provided in Dopamine and report them for completeness and ease of reproducibility in Table B.1. We only use hyperparameter selection over the regularization experiment αp based on results from 5 Atari games (Asterix, Seaquest, Pong, Breakout and Seaquest). We will also open source our code to further aid in reproducing our results. Evaluation Protocol. Following Agarwal et al. (2020), the Atari environments used in our experiments are stochastic due to sticky actions, i.e., there is 25% chance at every time step that the environment will execute the agent’s previous action again, instead of the agent’s new action. All agents (online or offline) are compared using the best evaluation score (averaged over 5 runs) achieved during training where the evaluation is done online every training iteration using a -greedy policy with = 0.001. We report offline training results with same hyperparameters over 5 random seeds of the DQN replay data collection, game simulator and network initialization. Offline Dataset. As suggested by Agarwal et al. (2020), we randomly subsample the DQN Replay dataset containing 50 millions transitions to create smaller offline datasets with the same data distribution as the original dataset. We use the 5% DQN replay dataset for most of our experiments. We also report results using the 20% dataset setting (4x larger) to show that our claims are also valid even when we have higher coverage over the state space. Optimizer related hyperparameters. For existing off-policy agents, step size and optimizer were taken as published. We used the DQN (Adam) algorithm for all our experiments, given its superior performance over the DQN (Nature) which uses RMSProp, as reported by Agarwal et al. (2020). Atari 2600 games used. For all our experiments in Section 3, we used the same set of 5 games as utilized by Agarwal et al. (2020); Bellemare et al. (2017) to present analytical results. For our empirical evaluation in Appendix A.5, we use the set of games employed by Fedus et al. (2020b) which are deemed suitable for offline RL by Gulcehre et al. (2020). Similar in spirit to Gulcehre et al. (2020), we use the set of 5 games used for analysis for hyperparameter tuning for offline RL methods. 5 games subset: ASTERIX, QBERT, PONG, SEAQUEST, BREAKOUT 16 game subset: In addition to 5 games above, the following 11 games: DOUBLE DUNK, JAMES BOND, MS. PACMAN, SPACE INVADERS, ZAXXON, WIZARD OF WOR, YARS’ REVENGE, ENDURO, ROAD RUNNER, BEAMRIDER, DEMON ATTACK B.2 GRIDWORLD EXPERIMENTS We use the gridworld suite from Fu et al. (2019) to obtain gridworlds for our experiments. 
All of our gridworld results are computed using the 16 × 16 GRID16SMOOTHOBS environment, which consists of a 256-cell grid, with walls arising randomly with a probability of 0.2. Each state allows 5 different actions (subject to hitting the boundary of the grid): move left, move right, move up, move down and no op. The goal in this environment is to minimize the cumulative discounted distance to a fixed goal location where the discount factor is given by γ = 0.95. The features for this Q-function are given by randomly chosen vectors which are smoothened spatially in a local neighborhood of a grid cell (x, y). We use a deep Q-network with two hidden layers of size (64, 64), and train it using soft Q-learning with entropy coefficient of 0.1, following the code provided by authors of Fu et al. (2019). We use a first-in-first out replay buffer of size 10000 to store past transitions. C PROOFS FOR SECTION 4.1 In this section, we provide the technical proofs from Section 4.1. We first derive a solution to optimization problem Equation 1 and show that it indeed comes out to have the form described in Equation 2. We first introduce some notation, including definition of the kernel G which was used for this proof. This proof closely follows the proof from Mobahi et al. (2020). Definitions. For any universal kernel u, the Green’s function (Duffy, 2015) of the linear kernel operator L given by: [LQ] (s,a) := ∑ (s′,a′) u((s,a), (s ′,a′))Q(s′,a′) is given by the function g((s,a), (s′,a′)) that satisfies:∑ (s,a) u((s,a), (s′,a′)) g((s′,a′), (s̄, ā)) = δ((s,a)− (s̄, ā)), (C.1) where δ is the Dirac-delta function. Thus, Green’s function can be understood as a kernel that “inverts” the universal kernel u to the identity (Dirac-delta) matrix. We can then define the matrix G as the matrix of vectors g(s,a) evaluated on the training dataset, D, however note that the functional g(s,a) can be evaluated for other state-action tuples, not present in D. G((si,ai), (sj ,aj)) := g((si,ai), (sj ,aj)) and g(s,a)[i] = g((s,a), (si,ai)) ∀(si,ai) ∈ D. (C.2) Lemma C.0.1. The solution to Equation 1 is given by Equation 2. Proof. This proof closely follows the proof of Proposition 1 from (Mobahi et al., 2020). We revisit key aspects the key parts of this proof here. We restate the optimization problem below, and solve for the optimum Qk to this equation by applying the functional derivative principle. min Q∈Q J(Q) := ∑ si,ai∈D (Q(si,ai)− yk(si,ai))2 + c ∑ (s,a) ∑ (s′,a′) u((s,a), (s′,a′))Q(s,a)Q(s′,a′). The functional derivative principle would say that the optimal Qk to this problem would satisfy, for any other function f and for a small enough ε→ 0, ∀f ∈ Q : ∂J(Qk + εf) ∂ε ∣∣∣ ε=0 = 0. (C.3) By setting the gradient of the above expression to 0, we obtain the following stationarity conditions on Qk (also denoting (si,ai) := xi) for brevity:∑ xi∈D δ(x− xi) (Qk(xi)− yk(xi)) + c ∑ x u(x,x′)Qk(x ′) = 0. (C.4) Now, we invoke the definition of the Green’s function discussed above and utilize the fact that the Dirac-delta function can be expressed in terms of the Green’s function, we obtain a simplified version of the above relation:∑ x u(x,x′) ∑ xi∈D (Qk(xi)− yk(xi))g(x′,xi) = −c ∑ x u(x,x′)Qk(x ′). (C.5) Since the kernel u(x,x′) is universal and positive definite, the optimal solution Qk(x) is given by: Qk(s,a) = − 1 c ∑ (si,ai)∈D (Qk(si,ai)− yk(si,ai)) · g((s,a), (si,ai)). 
(C.6) Finally we can replace the expression for residual error, Qk(si,ai) − yk(si,ai) using the green’s kernel on the training data by solving for it in closed form, which gives us the solution in Equation 2. Qk(s,a) = − 1 c gT(s,a)(Qk − yk) = g T (s,a)(cI + G) −1yk. (C.7) Next, we now state and prove a slightly stronger version of Theorem 4.1 that immediately implies the original theorem. Theorem C.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)∞l=1 starting from k1 = 0, such that, for any two singular-values σi(S) and σj(S) of S with σi(S) ≤ σj(S), ∀ l ∈ N and l′ ≥ l, σi(Mkl′ ) σj(Mkl′ ) < σi(Mkl) σj(Mkl) ≤ σi(S) σj(S) . (C.8) Therefore, the effective rank of Mk satisfies: srankδ(Mkl′ ) ≤ srankδ(Mkl). Furthermore, ∀ l ∈ N and t ≥ kl, σi(Mt) σj(Mt) < σi(Mkl) σj(Mkl) +O (( σi(S) σj(S) )kl) . (C.9) Therefore, the effective rank of Mt, srankδ(Mt), outside the chosen subsequence is also controlled above by the effective rank on the subsequence (srankδ(Mkl)) ∞ l=1. To prove this theorem, we first show that for any two fitting iterations, t < t′, if St and St ′ are positive semi-definite, the ratio of singular values and the effective rank decreases from t to t′. As an immediate consequence, this shows that when S is positive semi-definite, the effective rank decreases at every iteration, i.e., by setting kl = l (Corollary C.1.1). To extend the proof to arbitrary normal matrices, we show that for any S, a sequence of fitting iterations (kl)∞l=1 can be chosen such that S kl is (approximately) positive semi-definite. For this subsequence of fitting iterations, the ratio of singular values and effective rank also decreases. Finally, to control the ratio and effective rank on fitting iterations t outside this subsequence, we construct an upper bound on the ratio f(t): σi(Mt)σj(Mt) < f(t), and relate this bound to the ratio of singular values on the chosen subsequence. Lemma C.1.1 (srankδ(Mk) decreases when Sk is PSD.). Let S be a shorthand for S = γPπA and assume S is a normal matrix. Choose any t, t′ ∈ N such that t < t′. If St and St′ are positive semi-definite, then for any two singular-values σi(S) and σj(S) of S, such that 0 < σi(S) < σj(S): σi(Mt′) σj(Mt′) < σi(Mt) σj(Mt) ≤ σi(S) σj(S) . (C.10) Hence, the effective rank of Mk decreases from t to t′: srankδ(Mt′) ≤ srankδ(Mt). Proof. First note that Mk is given by: Mk := k∑ i=1 γk−i(PπA)k−i = k∑ i=1 Sk−i. (C.11) From hereon, we omit the leading γ term since it is a constant scaling factor that does not affect ratio or effective rank. Almost every matrix S admits a complex orthogonal eigendecomposition. Thus, we can write S := Uλ(S)UH . And any power of S, i.e., , Si can be expressed as: Si = Uλ(S)iUH , and hence, we can express Mk as: Mk := U ( k−1∑ i=0 λ(S)i ) UH = U · diag ( 1− λ(S)k 1− λ(S) ) · UH . (C.12) Since S is normal, its eigenvalues and singular values are further related as σk(S) = |λk(S)|. And this also means that Mk is normal, indicating that σi(Mk) = |λi(Mk)|. Thus, the singular values of Mk can be expressed as σi(Mk) := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.13) When Sk is positive semi-definite, λi(S)k = σi(S)k, enabling the following simplification: σi(Mk) = |1− σi(S)k| |1− λi(S)| . (C.14) To show that the ratio of singular values decreases from t to t′, we need to show that f(σ) = |1−σ t′ | |1−σt| is an increasing function of σ when t′ > t. It can be seen that this is the case, which implies the desired result. 
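As a quick numerical sanity check of this monotonicity claim over σ ∈ (0, 1) (the relevant regime here, since for a normal S = γPπA the singular values equal the eigenvalue norms, which are below 1), the following sketch evaluates the ratio on a grid; it is not part of the proof, and the grid and (t, t′) pairs are our choices.

```python
import numpy as np

def f(sigma, t, t_prime):
    """Ratio |1 - sigma**t'| / |1 - sigma**t| appearing in the argument above."""
    return np.abs(1.0 - sigma ** t_prime) / np.abs(1.0 - sigma ** t)

sigmas = np.linspace(0.01, 0.99, 99)
for t, t_prime in [(1, 2), (2, 5), (5, 50)]:
    values = f(sigmas, t, t_prime)
    assert np.all(np.diff(values) > 0), (t, t_prime)   # f increases with sigma
print("f(sigma) is numerically increasing in sigma for all tested (t, t') pairs")
```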
To further show that srankδ(Mt) ≥ srankδ(Mt′), we can simply show that ∀i ∈ [1, · · · , n], hk(i) := ∑i j=1 σj(Mk)∑n j=1 σj(Mk) increases with k, and this would imply that the srankδ(Mk) cannot increase from k = t to k = t′. We can decompose hk(i) as: hk(i) = i∑ j=1 σj(Mk)∑ l σl(Mk) = 1 1 + ∑n j=i+1 σj(Mk)∑i j=1 σj(Mk) . (C.15) Since σj(Mk)/σl(Mi) decreases over time k for all j, l if σj(S) ≤ σl(S), the ratio in the denominator of hk(i) decreases with increasing k implying that hk(i) increases from t to t′. Corollary C.1.1 (srankδ(Mk) decreases for PSD S matrices.). Let S be a shorthand for S = γPπA. Assuming that S is positive semi-definite, for any k, t ∈ N, such that t > k and that for any two singular-values σi(S) and σj(S) of S, such that σi(S) < σj(S), σi(Mt) σj(Mt) < σi(Mk) σj(Mk) ≤ σi(S) σj(S) . (C.16) Hence, the effective rank of Mk decreases with more fitting iterations: srankδ(Mt) ≤ srankδ(Mk). In order to now extend the result to arbitrary normal matrices, we must construct a subsequence of fitting iterations (kl)∞l=1 where S kl is (approximately) positive semi-definite. To do so, we first prove a technical lemma that shows that rational numbers, i.e., numbers that can be expressed as r = pq , for integers p, q ∈ Z are “dense” in the space of real numbers. Lemma C.1.2 (Rational numbers are dense in the real space.). For any real number α, there exist infinitely many rational numbers pq such that α can be approximated by p q upto 1 q2 accuracy.∣∣∣∣α− pq ∣∣∣∣ ≤ 1q2 . (C.17) Proof. We first use Dirichlet’s approximation theorem (see Hlawka et al. (1991) for a proof of this result using a pigeonhole argument and extensions) to obtain that for any real numbers α andN ≥ 1, there exist integers p and q such that 1 ≤ q ≤ N and, |qα− p| ≤ 1 |N |+ 1 < 1 N . (C.18) Now, since q ≥ 1 > 0, we can divide both sides by q, to obtain:∣∣∣∣α− pq ∣∣∣∣ ≤ 1Nq ≤ 1q2 . (C.19) To obtain infinitely many choices for pq , we observe that Dirichlet’s lemma is valid only for all values of N that satisfy N ≤ 1|qα−p| . Thus if we choose an N ′ such that N ′ ≥ Nmax where Nmax is defined as: Nmax = max { 1 |q′α− p′| ∣∣∣ p′, q′ ∈ Z, 1 ≤ q′ ≤ q} . (C.20) Equation C.20 essentially finds a new value of N , such that the current choices of p and q, which were valid for the first value ofN do not satisfy the approximation error bound. Applying Dirichlet’s lemma to this new value of N ′ hence gives us a new set of p′ and q′ which satisfy the 1q′2 approximation error bound. Repeating this process gives us countably many choices of (p, q) pairs that satisfy the approximation error bound. As a result, rational numbers are dense in the space of real numbers, since for any arbitrarily chosen approximation accuracy given by 1q2 , we can obtain atleast one rational number, pq which is closer to α than 1 q2 . This proof is based on Johnson (2016). Now we utilize Lemmas C.1.1 and C.1.2 to prove Proposition 4.1. Proof of Proposition 4.1 and Theorem C.1 Recall from the proof of Lemma C.1.1 that the singular values of Mk are given by: σi(Mk) := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.21) Bound on Singular Value Ratio: The ratio between σi(Mk) and σj(Mk) can be expressed as σi(Mk) σj(Mk) = ∣∣∣∣ 1− λi(S)k1− λj(S)k ∣∣∣∣ ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ . (C.22) For a normal matrix S, σi(S) = |λi(S)|, so this ratio can be bounded above as σi(Mk) σj(Mk) ≤ 1 + σi(S) k |1− σj(S)k| ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ . (C.23) Defining f(k) to be the right hand side of the equation, we can verify that f is a monotonically decreasing function in k when σi < σj . 
This shows that this ratio of singular values in bounded above and in general, must decrease towards some limit limk→∞ f(k). Construction of Subsequence: We now show that there exists a subsequence (kl)∞l=1 for which Skl is approximately positive semi-definite. For ease of notation, let’s represent the i-th eigenvalue as λi(S) = |λi(S)| · eiθi , where θi > 0 is the polar angle of the complex value λi(s) and |λi(S)| is its magnitude (norm). Now, using Lemma C.1.2, we can approximate any polar angle, θi using a rational approximation, i.e., , we apply lemma C.1.2 on θi2π ∃ pi, qi ∈ N, s.t. ∣∣∣∣ θi2π − piqi ∣∣∣∣ ≤ 1q2i . (C.24) Since the choice of qi is within our control we can estimate θi for all eigenvalues λi(S) to infinitesimal accuracy. Hence, we can approximate θi ≈ 2π piqi . We will now use this approximation to construct an infinite sequence (kl)∞l=1, shown below: kl = l · LCM(q1, · · · , qn) ∀ j ∈ N, (C.25) where LCM is the least-common-multiple of natural numbers q1, · · · qn. In the absence of any approximation error in θi, we note that for any i and for any l ∈ N as defined above, λi(S)kl = |λi(S)|kl · exp ( 2iπ · piqi · kl ) = |λi(S)|kl , since the polar angle for any kl is going to be a multiple of 2π, and hence it would fall on the real line. As a result, Skl will be positive semi-definite, since all eigenvalues are positive and real. Now by using the proof for Lemma C.1.1, we obtain the ratio of i and j singular values are increasing over the sequence of iterations (kj)∞j=1. Since the approximation error in θi can be controlled to be infinitesimally small to prevent any increase in the value of srankδ due to it (this can be done given the discrete form of srankδ), we note that the above argument applies even with the approximation, proving the required result on the subsequence. Controlling All Fitting Iterations using Subsequence: We now relate the ratio of singular values within this chosen subsequence to the ratio of singular values elsewhere. Choose t, l ∈ N such that t > kl. Earlier in this proof, we showed that the ratio between singular values is bounded above by a monotonically decreasing function f(t), so σi(Mt) σj(Mt) ≤ f(t) < f(kl). (C.26) Now, we show that that f(kl) is in fact very close to the ratio of singular values: f(kl) = |1− σi(S)kl | |1− σj(S)kl | ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ ≤ σi(Mt)σj(Mt) + 2σi(S) kl |1− σj(S)kl | ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣. (C.27) The second term goes to zero as kl increases; algebraic manipulation shows that this gap be bounded by f(kl) ≤ σi(Mkl) σj(Mkl) + ( σi(S) σj(S) )kl 2σj(S) |1− σj(S)| ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣︸ ︷︷ ︸ constant . (C.28) Putting these inequalities together proves the final statement, σi(Mt) σj(Mt) ≤ σi(Mkl) σj(Mkl) +O (( σi(S) σj(S) )kl) . (C.29) Extension to approximately-normal S. We can extend the result in Theorem C.1 (and hence also Theorem 4.1) to approximately-normal S. Note that the main requirement for normality of S (i.e., σi(S) = |λi(s)|) is because it is straightforward to relate the eigenvalue of S to M as shown below. |λi(Mk)| := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.30) Now, since the matrix S is approximately normal, we can express it using its Schur’s triangular form as, S = U · (Λ + N) ·UH , where Λ is a diagonal matrix and N is an “offset” matrix. The departure from normality of S is defined as: ∆(S) := infN ||N||2, where the infimum is computed over all matrices N that can appear in the Schur triangular form for S. For a normal S only a single value of N = 0 satisfies the Schur’s triangular form. 
For an approximately normal matrix S, ||N||2 ≤ ∆(S) ≤ ε, for a small ε. Furthermore note that from Equation 6 in Ruhe (1975), we obtain that |σi(S)− |λi(S)|| ≤ ∆(S) ≤ ε, (C.31) implying that singular values and norm-eigenvalues are close to each other for S. Next, let us evaluate the departure from normality of Mk. First note that, Sj = U · (Λ +N)j ·UH , and so, Mk = U · (∑k j=1(Λ + N) j ) ·UH and if ||N||2 ≤ ε, for a small epsilon (i.e., considering only terms that are linear in N for (Λ + N)j), we note that: |σi(Mk)− |λi(Mk)|| ≤ k∑ j=1 j · |λ1(S)|j−1∆(S) ≤ 1 (1− |λ1(S)|)2 · ε. (C.32) Thus, the matrix Mk is also approximately normal provided that the max eigenvalue norm of S is less than 1. This is true, since S = γPπA (see Theorem 4.1, where both Pπ and A have eigenvalues less than 1, and γ < 1. Given that we have shown that Mk is approximately normal, we can show that srankδ(Mk) only differs from srankδ,λ(Mk), i.e., , the effective rank of eigenvalues, in a bounded amount. If the value of ε is then small enough, we still retain the conclusion that srankδ(Mk) generally decreases with more training by following the proof of Theorem C.1. D PROOFS FOR SECTION 4.2 In this section, we provide technical proofs from Section 4.2. We start by deriving properties of optimization trajectories of the weight matrices of the deep linear network similar to Arora et al. (2018) but customized to our set of assumptions, then prove Proposition 4.1, and finally discuss how to extend these results to the fitted Q-iteration setting and some extensions not discussed in the main paper. Similar to Section 4.1, we assume access to a dataset of transitions, D = {(si,ai, r(si,ai), s′i} in this section, and assume that the same data is used to re-train the function. Notation and Definitions. The Q-function is represented using a deep linear network with at least 3 layers, such that Q(s,a) = WNWN−1 · · ·W1[s;a], where N ≥ 3,WN ∈ R1×dN−1 , (D.1) and Wi ∈ Rdi×di−1 for i = 1, . . . , N − 1. We index the weight matrices by a tuple (k, t): Wj(k, t) denotes the weight matrix Wj at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). Let the end-to-end weight matrix WNWN−1 · · ·W1 be denoted shorthand as WN :1, and let the features of the penultimate layer of the network, be denoted as Wφ(k, t) := WN−1(k, t) · · ·W1(k, t). Wφ(k, t) is the matrix that maps an input [s;a] to corresponding features Φ(s,a). In our analysis, it is sufficient to consider the effective rank of Wφ(k, t) since the features Φ are given by: Φ(k, t) = Wφ(k, t)[S;A], which indicates that: rank(Φ(k, t)) = rank(Wφ(k, t)[S;A]) ≤ min (rank(Wφ(k, t)), rank([S;A])) . Assuming the state-action space has full rank, we are only concerned about rank(Wφ(k, t)) which justifies our choice for analyzing srankδ(Wφ(k, t)). Let Lk+1(WN :1(k, t)) denote the mean squared Bellman error optimization objective in the k-th fitting iteration. Lk+1(WN :1(k, t)) = |D|∑ i=1 (WN (k, t)Wφ(k, t)[si;ai]− yk(si,ai))2 , where yk = R + γPπQk. When gradient descent is used to update the weight matrix, the updates to Wi(k, t) are given by: Wj(k, t+ 1)←Wj(k, t)− η ∂Lk+1(WN :1(k, t)) ∂Wj(k, t) . If the learning rate η is small, we can approximate this discrete time process with a continuous-time differential equation, which we will use for our analysis. We use Ẇ (k, t) to denote the derivative of W (k, t) with respect to t, for a given k. 
Ẇj(k, t) = −η ∂Lk+1(WN :1(k, t)) ∂Wj(k, t) (D.2) In order to quantify the evolution of singular values of the weight matrix, Wφ(k, t), we start by quantifying the evolution of the weight matrix Wφ(k, t) using a more interpretable differential equation. In order to do so, we make an assumption similar to but not identical as Arora et al. (
1. What is the main contribution of the paper regarding rank collapse in reinforcement learning? 2. What are the concerns regarding the experimental findings and theoretical claims of the paper? 3. How does the paper deduce that "rank collapses in data-efficient RL"? 4. Can the authors explain the discrepancy between the TD error and returns in Figure 3b (Seaquest)? 5. Why is Theorem 4.1 not a direct application of Theorem 5 of Mobahi et al., 2020? 6. Is minimizing TD^2 an appropriate way to find the fixed point of the Bellman iteration when using function approximation? 7. What is the significance of the assumption that zeta is small enough in the proof of the argument for the boostrapped updates in Theorem 4.2? 8. Does the paper provide sufficient evidence to support its claim that rank collapse is the cause of degradation of performance?
Review This paper discusses a phenomenon wherein the feature vectors of the learned value function in reinforcement learning (RL) lose their diversity as training progresses. The paper analyzes the rank of the final hidden layer in the model parameterizing the value function and shows experimentally that for offline-RL and online-RL setups on Atari and Gym benchmarks, this rank collapse occurs with a drop in the average return. The paper further develops two models for understanding this phenomenon, (i) where the value function is modeled using the neural tangent kernel, and (ii) where the value function is modeled using a deep linear network. The paper argues that bootstrapping results in reduction of the rank of the feature matrix as training progresses for these models. A regularization term that equalizes the singular values of the feature matrix is used to mitigate this rank collapse and experimental results on Atari benchmarks are shown with this regularizer. The main claim of this paper is to identify the phenomenon of rank collapse of the feature matrix. I have concerns about the experimental findings of this paper and correctness of its theoretical claims, which are discussed below. I am willing to increase my score if the authors can convincingly argue otherwise. Broadly, I agree this is an interesting direction but current manuscript does not convince the reader that rank collapse is indeed the cause of degradation of performance. Comments. Figure 1 does not completely validate the claims on page 3. In Asterix, increasing the amount of data does not lead to rank collapse but the returns degrade significantly during training, why? In Seaquest, the returns (blue) have degraded essentially to zero even when the rank (blue) is at its maximum. This suggests that there are other factors which are causing the drop in performance instead of/in addition to the rank. The trends in Appendix A1 are similarly inconsistent, as is Figure 2 (Ant-v2). The implication “if low rank, then low returns” is reasonable to expect due to reduced capacity of the value function approximation. But how do the authors deduce from these experiments that “rank collapses in data-efficient RL” (first sentence of Section 3.1). I have a similar concern about Fig. 3b (Seaquest). The rank for n=4 gradient steps/transition clearly collapses, yet the TD error remains small, and yet the returns are quite bad. If rank collapse entails that the TD error is not minimized well-enough, and that is the cause of the drop in returns, then how can one explain this figure? I suspect the discrepancy is because the TD error is used in Fig. 3b. Can you perhaps compute a pseudo-optimal policy using a good RL method (say Rainbow) for Seaquest and use its value function as the surrogate for Q*? The narrative will benefit from being more precise. There is an egregiously large number of sentences where the word “implicit” (the paper uses this word 37 times in the first 8 pages) is used in a vague manner (see for instance Definition 1). Further, “implicit under-parametrization” a bad monicker, should the lottery ticket hypothesis be also called implicit under-parametrization? Why is Theorem 4.1 here not a direct application of Theorem 5 of Mobahi et al., 2020? Further, the big intellectual gap in the argument is that while we are trying to find the fixed point of the Bellman equation in RL, there is no such fixed point in kernel regression. 
So while self-distillation during iterative TD^2-minimization may cause a loss of diversity of the feature space, it does not seem to be the only reason; after all, some examples in Fig. 3 do not show rank collapse. Perhaps the underlying problem is really that minimizing TD^2 is not an appropriate way to find the fixed point of the Bellman iteration when using function approximation. Indeed, if the TD error is small (Fig. 3b, n=4), there is nothing the network can do to improve the returns. The TD error is small in this case in spite of the feature matrix having low rank; it indeed depends on the complexity of the value function. The development in Sec. 4.2 using the work of Arora et al., 2019 around eq. (5) argues that when Q_k(s,a) = Q_{k+1}(s,a) for all pairs (s,a) you get rank collapse; this is a very special situation where the value function at each (s,a) is essentially proportional to the rewards at that state-action pair. I tried to follow the proof of the argument for the bootstrapped updates in Theorem 4.2, but to my understanding it hides this same issue, e.g., in eq. (D.15) it is assumed that zeta is small enough, which is not true. By this argument, simply rescaling all the rewards to have small magnitude should result in rank collapse.
ICLR
Title Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning Abstract We identify an implicit under-parameterization phenomenon in value-based deep RL methods that use bootstrapping: when value functions, approximated using deep neural networks, are trained with gradient descent using iterated regression onto target values generated by previous instances of the value network, more gradient updates decrease the expressivity of the current value network. We characterize this loss of expressivity via a drop in the rank of the learned value network features, and show that this typically corresponds to a performance drop. We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings. We formally analyze this phenomenon and show that it results from a pathological interaction between bootstrapping and gradient-based optimization. We further show that mitigating implicit under-parameterization by controlling rank collapse can improve performance. 1 INTRODUCTION Many pervasive deep reinforcement learning (RL) algorithms estimate value functions using bootstrapping, that is, by sequentially fitting value functions to target value estimates generated from the value function learned in the previous iteration. Despite high-profile achievements (Silver et al., 2017), these algorithms are highly unreliable due to poorly understood optimization issues. Although a number of hypotheses have been proposed to explain these issues (Achiam et al., 2019; Bengio et al., 2020; Fu et al., 2019; Igl et al., 2020; Liu et al., 2018; Kumar et al., 2020a), a complete understanding remains elusive. We identify an “implicit under-parameterization” phenomenon that emerges when value networks are trained using gradient descent combined with bootstrapping. This phenomenon manifests as an excessive aliasing of features learned by the value network across states, which is exacerbated with more gradient updates. While the supervised deep learning literature suggests that some feature aliasing is desirable for generalization (e.g., Gunasekar et al., 2017; Arora et al., 2019), implicit under-parameterization exhibits more pronounced aliasing than in supervised learning. This over-aliasing causes an otherwise expressive value network to implicitly behave as an under-parameterized network, often resulting in poor performance. Implicit under-parameterization becomes aggravated when the rate of data re-use is increased, restricting the sample efficiency of deep RL methods. In online RL, increasing the number of gradient steps in between data collection steps for data-efficient RL (Fu et al., 2019; Fedus et al., 2020b) causes the problem to emerge more frequently. In the extreme case when no additional data is collected, referred to as offline RL (Lange et al., 2012; Agarwal et al., 2020; Levine et al., 2020), implicit under-parameterization manifests consistently, limiting the viability of offline methods. We demonstrate the existence of implicit under-parameterization in common value-based deep RL methods, including Q-learning (Mnih et al., 2015; Hessel et al., 2018) and actor-critic (Haarnoja et al., 2018), as well as neural fitted-Q iteration (Riedmiller, 2005; Ernst et al., 2005). To isolate the issue, we study the effective rank of the features in the penultimate layer of the value network (Section 3). We observe that after an initial learning period, the rank of the learned features drops steeply. 
As the rank decreases, the ability of the features to fit subsequent target values and the optimal value function generally deteriorates and results in a sharp decrease in performance (Section 3.1). ∗Equal Contribution. Correspondence to Aviral Kumar < [email protected] > and Rishabh Agarwal < [email protected] >. To better understand the emergence of implicit under-parameterization, we formally study the dynamics of Q-learning under two distinct models of neural net behavior (Section 4): kernel regression (Jacot et al., 2018; Mobahi et al., 2020) and deep linear networks (Arora et al., 2018). We corroborate the existence of this phenomenon in both models, and show that implicit underparameterization stems from a pathological interaction between bootstrapping and the implicit regularization of gradient descent. Since value networks are trained to regress towards targets generated by a previous version of the same model, this leads to a sequence of value networks of potentially decreasing expressivity, which can result in degenerate behavior and a drop in performance. The main contribution of this work is the identification of implicit under-parameterization in deep RL methods that use bootstrapping. Empirically, we demonstrate a collapse in the rank of the learned features during training, and show it typically corresponds to a drop in performance in the Atari (Bellemare et al., 2013) and continuous control Gym (Brockman et al., 2016) benchmarks in both the offline and data-efficient online RL settings. We verify the emergence of this phenomenon theoretically and characterize settings where implicit under-parameterization can emerge. We then show that mitigating this phenomenon via a simple penalty on the singular values of the learned features improves performance of value-based RL methods in the offline setting on Atari. 2 PRELIMINARIES The goal in RL is to maximize long-term discounted reward in a Markov decision process (MDP), defined as a tuple (S,A, R, P, γ) (Puterman, 1994), with state space S, action space A, a reward function R(s,a), transition dynamics P (s′|s,a) and a discount factor γ ∈ [0, 1). The Q-function Qπ(s,a) for a policy π(a|s), is the expected long-term discounted reward obtained by executing action a at state s and following π(a|s) thereafter, Qπ(s,a) := E [ ∑∞ t=0 γ tR(st,at)]. Qπ(s,a) is the fixed point of the Bellman operator T π , ∀s,a: T πQ(s,a) := R(s,a) + γEs′∼P (·|s,a),a′∼π(·|s′) [Q(s′,a′)], which can be written in vector form as: Qπ = R + γPπQπ . The optimal Q-function, Q∗(s,a), is the fixed point of the Bellman optimality operator T : T Q(s,a) := R(s,a) + γEs′∼P (·|s,a) [maxa′ Q(s′,a′)]. Practical Q-learning methods (e.g., Mnih et al., 2015; Hessel et al., 2018; Haarnoja et al., 2018) convert the Bellman equation into an bootstrapping-based objective for training a Q-network, Qθ, via gradient descent. This objective, known as mean-squared temporal difference (TD) error, is given by: L(θ) = ∑ s,a ( R(s,a) + γQ̄θ(s ′,a′)−Q(s,a) )2 , where Q̄θ is a delayed copy of the Q-function, typically referred to as the target network. These methods train Q-networks via gradient descent and slowly update the target network via Polyak averaging on its parameters. We refer the output of the penultimate layer of the deep Q-network as the learned feature matrix Φ, such that Q(s,a) = wTΦ(s,a), where w ∈ Rd and Φ ∈ R|S||A|×d. Algorithm 1 Fitted Q-Iteration (FQI) 1: Initialize Q-network Qθ , buffer µ. 2: for fitting iteration k in {1, . . . 
, N} do 3: Compute Qθ(s,a) and target values yk(s,a) = r + γmaxa′ Qk−1(s ′,a′) on {(s,a)} ∼ µ for training 4: Minimize TD error for Qθ via t = 1, · · · , T gradient descent updates, minθ (Qθ(s,a)− yk)2 5: end for For simplicity of analysis, we abstract deep Q-learning methods into a generic fitted Q-iteration (FQI) framework (Ernst et al., 2005). We refer to FQI with neural nets as neural FQI (Riedmiller, 2005). In the k-th fitting iteration, FQI trains the Q-function, Qk, to match the target values, yk = R+γPπQk−1 generated using previous Q-function, Qk−1 (Algorithm 1). Practical methods can be instantiated as variants of FQI, with different target update styles, different optimizers, etc. 3 IMPLICIT UNDER-PARAMETERIZATION IN DEEP Q-LEARNING In this section, we empirically demonstrate the existence of implicit under-parameterization in deep RL methods that use bootstrapping. We characterize implicit under-parameterization in terms of the effective rank (Yang et al., 2019) of the features learned by a Q-network. The effective rank of the feature matrix Φ, for a threshold δ (we choose δ = 0.01), denoted as srankδ(Φ), is given by srankδ(Φ) = min { k : ∑k i=1 σi(Φ)∑d i=1 σi(Φ) ≥ 1− δ } , where {σi(Φ)} are the singular values of Φ in decreasing order, i.e., σ1 ≥ · · · ≥ σd ≥ 0. Intuitively, srankδ(Φ) represents the number of “effective” unique components of the feature matrix Φ that form the basis for linearly approximating the Qvalues. When the network maps different states to orthogonal feature vectors, then srankδ(Φ) has high values close to d. When the network “aliases” state-action pairs by mapping them to a smaller subspace, Φ has only a few active singular directions, and srankδ(Φ) takes on a small value. Definition 1. Implicit under-parameterization refers to a reduction in the effective rank of the features, srankδ(Φ), that occurs implicitly as a by-product of learning deep neural network Q-functions. While rank decrease also occurs in supervised learning, it is usually beneficial for obtaining generalizable solutions (Gunasekar et al., 2017; Arora et al., 2019). However, we will show that in deep Q-learning, an interaction between bootstrapping and gradient descent can lead to more aggressive rank reduction (or rank collapse), which can hurt performance. Experimental setup. To study implicit under-parameterization empirically, we compute srankδ(Φ) on a minibatch of state-action pairs sampled i.i.d. from the training data (i.e., the dataset in the offline setting, and the replay buffer in the online setting). We investigate offline and online RL settings on benchmarks including Atari games (Bellemare et al., 2013) and Gym environments (Brockman et al., 2016). We also utilize gridworlds described by Fu et al. (2019) to compare the learned Q-function against the oracle solution computed using tabular value iteration. We evaluate DQN (Mnih et al., 2015) on gridworld and Atari and SAC (Haarnoja et al., 2018) on Gym domains. Data-efficient offline RL. In offline RL, our goal is to learn effective policies by performing Qlearning on a fixed dataset of transitions. We investigate the presence of rank collapse when deep Q-learning is used with broad state coverage offline datasets from Agarwal et al. (2020). In the top row of Figure 2, we show that after an initial learning period, srankδ(Φ) decreases in all domains (Atari, Gym and the gridworld). 
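For concreteness, srankδ(Φ) as defined above can be computed directly from the singular values of a minibatch feature matrix. The following is a minimal NumPy sketch; the function name and the random example are ours, while δ = 0.01 and the minibatch-of-features setup follow the experimental protocol described here.

```python
import numpy as np

def srank(phi: np.ndarray, delta: float = 0.01) -> int:
    """Effective rank of a feature matrix phi of shape (batch, d).

    Returns the smallest k such that the top-k singular values account
    for at least a (1 - delta) fraction of the sum of all singular values.
    """
    sigma = np.linalg.svd(phi, compute_uv=False)      # singular values, descending
    cumulative = np.cumsum(sigma) / np.sum(sigma)     # normalized partial sums
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)

# Example: random 512-dimensional features for a minibatch of state-action pairs.
phi = np.random.randn(2048, 512)
print(srank(phi))   # close to 512 for random features; far smaller under rank collapse
```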
The final value of srankδ(Φ) is often quite small – e.g., in Atari, only 20-100 singular components are active for 512-dimensional features, implying significant underutilization of network capacity. Since under-parameterization is implicitly induced by the learning process, even high-capacity value networks behave as low-capacity networks as more training is performed with a bootstrapped objective (e.g., mean squared TD error). On the gridworld environment, regressing toQ∗ using supervised regression results in a much higher srankδ(Φ) (black dashed line in Figure 2(left)) than when using neural FQI. On Atari, even when a 4x larger offline dataset with much broader coverage is used (blue line in Figure 2), rank collapse still persists, indicating that implicit under-parameterization is not due to limited offline dataset size. Figure 2 (2nd row) illustrates that policy performance generally deteriorates as srank(Φ) drops, and eventually collapses simultaneously with the rank collapse. While we do not claim that implicit under-parameterization is the only issue in deep Q-learning, the results in Figure 2 show that the emergence of this under-parameterization is strongly associated with poor performance. To prevent confounding from the distribution mismatch between the learned policy and the offline dataset, which often affects the performance of Q-learning methods, we also study CQL (Kumar et al., 2020b), an offline RL algorithm designed to handle distribution mismatch. We find a similar degradation in effective rank and performance for CQL (Figure A.3), implying that underparameterization does not stem from distribution mismatch and arises even when the resulting policy is within the behavior distribution (though the policy may not be exactly pick actions observed in the dataset). We provide more evidence in Atari and Gym domains in Appendix A.1. Data-efficient online RL. Deep Q-learning methods typically use very few gradient updates (n) per environment step (e.g., DQN takes 1 update every 4 steps on Atari, n = 0.25). Improving the sample efficiency of these methods requires increasing n to utilize the replay data more effectively. However, we find that using larger values of n results in higher levels of rank collapse as well as performance degradation. In the top row of Figure 3, we show that larger values of n lead to a more aggressive drop in srankδ(Φ) (red vs. blue/orange lines), and that rank continues to decrease with more training. Furthermore, the bottom row illustrates that larger values of n result in worse performance, corroborating Fu et al. (2019); Fedus et al. (2020b). We find similar results with the Rainbow algorithm (Hessel et al., 2018) (Appendix A.2). As in the offline setting, directly regressing to Q∗ via supervised learning does not cause rank collapse (black line in Figure 3). 3.1 UNDERSTANDING IMPLICIT UNDER-PARAMETERIZATION AND ITS IMPLICATIONS How does implicit under-parameterization degrade performance? Having established the presence of rank collapse in data-efficient RL, we now discuss how it can adversely affect performance. As the effective rank of the network features Φ decreases, so does the network’s ability to fit the subsequent target values, and eventually results in inability to fit Q∗. In the gridworld domain, we measure this loss of expressivity by measuring the error in fitting oracle-computed Q∗ values via a linear transformation of Φ. 
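A rough sketch of this linear-probe measurement, assuming access to oracle Q* values as in the gridworld (the function and variable names are ours, and the paper's exact implementation may differ): fit Q* from the learned features Φ by ordinary least squares and report the residual error.

```python
import numpy as np

def qstar_fit_error(phi: np.ndarray, q_star: np.ndarray) -> float:
    """Error of the best linear fit of oracle Q* values from features phi.

    phi:    (num_state_action_pairs, d) penultimate-layer features.
    q_star: (num_state_action_pairs,) oracle Q* values, e.g. from tabular value iteration.
    """
    w, *_ = np.linalg.lstsq(phi, q_star, rcond=None)  # least-squares linear probe
    return float(np.mean((phi @ w - q_star) ** 2))
```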
When rank collapse occurs, the error in fitting Q∗ steadily increases during training, and the consequent network is not able to predict Q∗ at all by the end of training (Figure 4a) – this entails a drop in performance. In Atari domains, we do not have access to Q∗, and so we instead measure TD error, that is, the error in fitting the target value estimates, R + γPπQk. In SEAQUEST, as rank decreases, the TD error increases (Figure 4b) and the value function is unable to fit the target values, culminating in a performance plateau (Figure 3). This observation is consistent across other environments; we present further supporting evidence in Appendix A.4. Does bootstrapping cause implicit under-parameterization? We perform a number of controlled experiments in the gridworld and Atari environments to isolate the connection between rank collapse and bootstrapping. We first remove confounding issues of poor network initialization (Fedus et al., 2020a) and non-stationarity (Igl et al., 2020) by showing that rank collapse occurs even when the Q-network is re-initialized from scratch at the start of each fitting iteration (Figure 4c). To show that the problem is not isolated to the control setting, we show evidence of rank collapse in the policy evaluation setting as well. We trained a value network using fitted Q-evaluation for a fixed policy π (i.e., using the Bellman operator T π instead of T ), and found that rank drop still occurs (FQE in Figure 4d). Finally, we show that by removing bootstrapped updates and instead regressing directly to Monte-Carlo (MC) estimates of the value, the effective rank does not collapse (MC Returns in Figure 4d). These results, along with similar findings on other Atari environments (Appendix A.3), our analysis indicates that bootstrapping is at the core of implicit under-parameterization. 4 THEORETICAL ANALYSIS OF IMPLICIT UNDER-PARAMETERIZATION In this section, we formally analyze implicit under-parameterization and prove that training neural networks with bootstrapping reduces the effective rank of the Q-network, corroborating the empirical observations in the previous section. We focus on policy evaluation (Figure 4d and Figure A.9), where we aim to learn a Q-function that satisfies Q = R+γPπQ for a fixed π, for ease of analysis. We also presume a fixed dataset of transitions, D, to learn the Q-function. 4.1 ANALYSIS VIA KERNEL REGRESSION We first study bootstrapping with neural networks through a mathematical abstraction that treats the Q-network as a kernel machine, following the neural tangent kernel (NTK) formalism (Jacot et al., 2018). Building on prior analysis of self-distillation (Mobahi et al., 2020), we assume that each iteration of bootstrapping, the Q-function optimizes the squared TD error to target labels yk with a kernel regularizer. This regularizer captures the inductive bias from gradient-based optimization of TD error and resembles the regularization imposed by gradient descent under NTK (Mobahi et al., 2020). The error is computed on (si,ai) ∈ D whereas the regularization imposed by a universal kernel u with a coefficient of c ≥ 0 is applied to the Q-values at all state-action pairs as shown in Equation 1. We consider a setting c > 0 for all rounds of bootstrapping, which corresponds to the solution obtained by performing gradient descent on TD error for a small number of iterations with early stopping in each round (Suggala et al., 2018) and thus, resembles how the updates in Algorithm 1 are typically implemented in practice. 
Qk+1 ← arg min Q∈Q ∑ si,ai∈D (Q(si,ai)− yk(si,ai))2 + c ∑ (s,a) ∑ (s′,a′) u((s,a), (s′,a′))Q(s,a)Q(s′,a′). (1) The solution to Equation 1 can be expressed as Qk+1(s,a) = gT(s,a)(cI + G) −1yk, where G is the Gram matrix for a special positive-definite kernel (Duffy, 2015) and g(s,a) denotes the row of G corresponding to the input (s,a) (Mobahi et al., 2020, Proposition 1). A detailed proof is in Appendix C. When combined with the fitted Q-iteration recursion, setting labels yk = R + γPπQk−1, we recover a recurrence that relates subsequent value function iterates Qk+1 = G(cI + G) −1yk = G(cI + G) −1︸ ︷︷ ︸ A [R + γPπQk] = A (∑k i=1 γ k−i (PπA) k−i ) R := AMkR. (2) Equation 2 follows from unrolling the recurrence and setting the algorithm-agnostic initial Q-value, Q0, to be 0. We now show that the sparsity of singular values of the matrix Mk generally increases over fitting iterations, implying that the effective rank of Mk diminishes with more iterations. For this result, we assume that the matrix S is normal, i.e., the norm of the (complex) eigenvalues of S is equal to its singular values. We will discuss how this assumption can be relaxed in Appendix A.7. Theorem 4.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)∞l=1 starting from k1 = 0, such that, for any two singular-values σi(S) and σj(S) of S with σi(S) < σj(S), ∀ l ∈ N and l′ ≥ l, σi(Mkl′ ) σj(Mkl′ ) < σi(Mkl) σj(Mkl) ≤ σi(S) σj(S) . (3) Hence, srankδ(Mkl′ ) ≤ srankδ(Mkl). Moreover, if S is positive semi-definite, then (kl) ∞ l=1 = N, i.e., srank continuously decreases in each fitting iteration. We provide a proof of the theorem above as well as present a stronger variant that shows a gradual decrease in the effective rank for fitting iterations outside this infinite sequence in Appendix C. As k increases along the sequence of iterations given by k = (kl)∞l=1, the effective rank of the matrix Mk drops, leading to low expressivity of this matrix. Since Mk linearly maps rewards to the Qfunction (Equation 2), drop in expressivity results of Mk in the inability to model the actual Qπ . Summary of our analysis. Our analysis of bootstrapping and gradient descent from the view of regularized kernel regression suggests that rank drop happens with more training (i.e., with more rounds of bootstrapping). In contrast to self-distillation (Mobahi et al., 2020), rank drop may not happen in every iteration (and rank may increase between two consecutive iterations occasionally), but srankδ exhibits a generally decreasing trend. 4.2 ANALYSIS WITH DEEP LINEAR NETWORKS UNDER GRADIENT DESCENT While Section 4.1 demonstrates rank collapse will occur in a kernel-regression model of Q-learning, it does not illustrate when the rank collapse occurs. To better specify a point in training when rank collapse emerges, we present a complementary derivation for the case when the Q-function is represented as a deep linear neural network (Arora et al., 2019), which is a widely-studied setting for analyzing implicit regularization of gradient descent in supervised learning (Gunasekar et al., 2017; 2018; Arora et al., 2018; 2019). Our analysis will show that rank collapse can emerge as the generated target values begin to approach the previous value estimate, in particular, when in the vicinity of the optimal Q-function. Proof strategy. 
Our proof consists of two steps: (1) we show that the effective rank of the feature matrix decreases within one fitting iteration (for a given target value) due to the low-rank affinity of gradient descent, and (2) we show that this effective rank drop is “compounded” as we train using a bootstrapped objective. Proposition 4.1 explains (1), and Proposition 4.2, Theorem 4.2 and Appendix D.2 discuss (2). Additional notation and assumptions. We represent the Q-function as a deep linear network with at least three layers, such that Q(s,a) = WN Wφ [s;a], where N ≥ 3, WN ∈ R^{1×d_{N−1}} and Wφ = W_{N−1} W_{N−2} · · · W_1 with Wi ∈ R^{d_i×d_{i−1}} for i = 1, . . . , N − 1. Wφ maps an input [s;a] to the corresponding penultimate-layer features Φ(s,a). Let Wj(k, t) denote the weight matrix Wj at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). We define W_{k,t} = WN(k, t) Wφ(k, t) and write L_{N,k+1}(W_{k,t}) for the TD error objective in the k-th fitting iteration. We study srankδ(Wφ(k, t)), since the rank of the features Φ = Wφ(k, t)[S;A] is equal to the rank of Wφ(k, t) provided the state-action inputs have high rank. We assume that the evolution of the weights is governed by a continuous-time differential equation (Arora et al., 2018) within each fitting iteration k. To simplify the analysis, we also assume that all except the last-layer weights follow a “balancedness” property (Equation D.4), which implies that the weight matrices in consecutive layers of the deep linear network share the same singular values (but with different permutations). However, note that we do not assume balancedness for the last layer, which would trivially lead to rank-1 features, making our analysis strictly more general than conventionally studied deep linear networks. In this model, we can characterize the evolution of the singular values of the feature matrix Wφ(k, t), using techniques analogous to Arora et al. (2019): Proposition 4.1. The singular values of the feature matrix Wφ(k, t) evolve according to: σ̇r(k, t) = −N · (σr²(k, t))^{1 − 1/(N−1)} · ⟨ WN(k, t)^T dL_{N,k+1}(W_{k,t})/dW , ur(k, t) vr(k, t)^T ⟩, (4) for r = 1, · · · , min_{i=1}^{N−1} d_i, where ur(k, t) and vr(k, t) denote the left and right singular vectors of the feature matrix Wφ(k, t), respectively. [Figure: evolution of the singular values of Wφ on SEAQUEST (log scale), showing σmax, σ2, σ3, σ10 and σ100 over 500 gradient updates.] Solving the differential equation (4) indicates that larger singular values will evolve at an exponentially faster rate than smaller singular values (as we also formally show in Appendix D.1), and the difference in their magnitudes increases disproportionately with increasing t. This behavior also occurs empirically, as illustrated in the figure above (also see Figure D.1), where larger singular values are orders of magnitude larger than smaller singular values. Hence, the effective rank, srankδ(Wφ(k, t)), will decrease with more gradient steps within a fitting iteration k. Abstract optimization problem for the low-rank solution. Building on Proposition 4.1, we note that the final solution obtained in a bootstrapping round (i.e., fitting iteration) can be equivalently expressed as the solution that minimizes a weighted sum of the TD error and a data-dependent implicit regularizer hD(Wφ, WN) that encourages disproportionate singular values of Wφ, and hence, a low effective rank of Wφ.
While the actual form for h is unknown, to facilitate our analysis of bootstrapping, we make a simplification and express this solution as the minimum of Equation 5. min Wφ,WN∈M ||WNWφ[s;a]− yk(s,a)||2 + λksrankδ(Wφ). (5) Note that the entire optimization path may not correspond to the objective in Equation 5, but the Equation 5 represents the final solution of a given fitting iteration. M denotes the set of constraints that WN obtained via gradient optimization of TD error must satisfy, however we do not need to explicitly quantifyM in our analysis. λk is a constant that denotes the strength of rank regularization. Since srankδ is always regularized, our analysis assumes that λk > 0 (see Appendix D.1). Rank drop within a fitting iteration “compounds” due to bootstrapping. In the RL setting, the target values are given by yk(s,a) = r(s,a) + γPπQk−1(s,a). First note that when r(s,a) = 0 and Pπ = I, i.e., when the bootstrapping update resembles self-regression, we first note that just “copying over weights” from iteration k− 1 to iteration k is a feasible point for solving Equation 5, which attains zero TD error with no increase in srankδ . A better solution to Equation 5 can thus be obtained by incurring non-zero TD error at the benefit of a decreased srank, indicating that in this setting, srankδ(Wφ) drops in each fitting iteration, leading to a compounding rank drop effect. We next extend this analysis to the full bootstrapping setting. Unlike the self-training setting, yk(s,a) is not directly expressible as a function of the previous Wφ(k, T ) due to additional reward and dynamics transformations. Assuming closure of the function class (Assumption D.1) under the Bellman update (Munos & Szepesvári, 2008; Chen & Jiang, 2019), we reason about the compounding effect of rank drop across iterations in Proposition 4.2 (proof in Appendix D.2). Specifically, srankδ can increase in each fitting iteration due to R and Pπ transformations, but will decrease due to low rank preference of gradient descent. This change in rank then compounds as shown below. Proposition 4.2. Assume that the Q-function is initialized to Wφ(0) and WN (0). Let the Q-function class be closed under the backup, i.e., ∃WPN ,WPφ , s.t. (R + γPπQk−1) T = WPN (k)W P φ (k)[S;A]T , and assume that the change in srank due to dynamics and reward transformations is bounded: srankδ(WPφ (k)) ≤ srankδ(Wφ(k − 1)) + ck. Then, srankδ(Wφ(k)) ≤ srankδ(Wφ(0)) + k∑ j=1 cj − k∑ j=1 ||Qj − yj || λj . Proposition 4.2 provides a bound on the value of srank after k rounds of bootstrapping. srank decreases in each iteration due to non-zero TD errors, but potentially increases due to reward and bootstrapping transformations. To instantiate a concrete case where rank clearly collapses, we investigate ck as the value function gets closer to the Bellman fixed point, which is a favourable initialization for the Q-function in Theorem 4.2. In this case, the learning dynamics begins to resemble the self-training regime, as the target values approach the previous value iterate yk ≈ Qk−1, and thus, as we show next, the potential increase in srank (ck in Proposition 4.2) converges to 0. Theorem 4.2. Suppose target values yk = R+γPπQk−1 are close to the previous value estimate Qk−1, i.e. ∀ s,a, yk(s,a) = Qk−1(s,a)+ε(s,a), with |ε(s,a)| |Qk−1(s,a)|. Then, there is a constant 0 depending upon WN and Wφ, such that for all ‖ε‖ < ε0, ck = 0. Thus, srank decreases in iteration k: srankδ(Wφ(k)) ≤ srankδ(Wφ(k − 1))− ||Qk − yk||/λk. 
We provide a complete form, including the expression for 0 and a proof in Appendix D.3. To empirically show the consequence of Theorem 4.2 that a decrease in srankδ(Wφ) values can lead to an increase in the distance to the fixed point in a neighborhood around the fixed point, we performed a controlled experiment on a deep linear net shown in Figure 5 that measures the relationship between of srankδ(Φ) and the error to the projected TD fixed point |Q −Q∗|. Note that a drop in srankδ(Φ) corresponds to a increased value of |Q−Q∗| indicating that rank drop when Q get close to a fixed point can affect convergence to it. 5 MITIGATING UNDER-PARAMETRIZATION IMPROVES DEEP Q-LEARNING We now show that mitigating implicit under-parameterization by preventing rank collapse can improve performance. We place special emphasis on the offline RL setting in this section, since it is particularly vulnerable to the adverse effects of rank collapse. We devise a penalty (or a regularizer) Lp(Φ) that encourages higher effective rank of the learned features, srankδ(Φ), to prevent rank collapse. The effective rank function srankδ(Φ) is non-differentiable, so we choose a simple surrogate that can be optimized over deep networks. Since effective rank is maximized when the magnitude of the singular values is roughly balanced, one way to increase effective rank is to minimize the largest singular value of Φ, σmax(Φ), while simultaneously maximizing the smallest singular value, σmin(Φ). We construct a simple penalty Lp(Φ) derived from this intuition, given by: Lp(Φ) = σ2max(Φ)− σ2min(Φ). (6) Lp(Φ) can be computed by invoking the singular value decomposition subroutines in standard automatic differentiation frameworks (Abadi et al., 2016; Paszke et al., 2019). We estimate the singular values over the feature matrix computed over a minibatch, and add the resulting value of Lp as a penalty to the TD error objective, with a tradeoff factor α = 0.001. Does Lp(Φ) address rank collapse? We first verify whether controlling the minimum and maximum singular values using Lp(Φ) actually prevents rank collapse. When using this penalty on the gridworld problem (Figure 6a), the effective rank does not collapse, instead gradually decreasing at the onset and then plateauing, akin to the evolution of effective rank in supervised learning. In Figure 6b, we plot the evolution of effective rank on two Atari games in the offline setting (all games in Appendix A.5), and observe that using Lp also generally leads to increasing rank values. Does mitigating rank collapse improve performance? We now evaluate the performance of the penalty using DQN (Mnih et al., 2015) and CQL (Kumar et al., 2020b) on Atari dataset from Agarwal et al. (2020) (5% replay data), used in Section 3. Figure 7 summarizes the relative improvement from using the penalty for 16 Atari games. Adding the penalty to DQN improves performance on all 16/16 games with a median improvement of 74.5%; adding it to CQL, a state-of-the-art offline algorithm, improves performance on 11/16 games with median improvement of 14.1%. Prior work has discussed that standard Q-learning methods designed for the online setting, such as DQN, are generally ineffective with small offline datasets (Kumar et al., 2020b; Agarwal et al., 2020). 
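For concreteness, a minimal sketch of how the penalty in Equation 6 can be combined with the mean squared TD error, assuming a PyTorch-style training loop in which the penultimate-layer features of the minibatch are available; the variable and function names are ours, while the tradeoff factor α = 0.001 follows the setting reported above.

```python
import torch

def td_loss_with_rank_penalty(q_pred, td_target, features, alpha=1e-3):
    """Mean squared TD error plus the penalty of Equation 6.

    q_pred:    (batch,) Q-values predicted by the current network.
    td_target: (batch,) bootstrapped targets r + gamma * max_a' Q_target(s', a').
    features:  (batch, d) penultimate-layer features Phi for the same minibatch.
    """
    td_error = torch.mean((q_pred - td_target.detach()) ** 2)
    sigma = torch.linalg.svdvals(features)            # singular values, descending
    penalty = sigma[0] ** 2 - sigma[-1] ** 2          # sigma_max^2 - sigma_min^2
    return td_error + alpha * penalty
```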
Our results show that mitigating rank collapse makes even such simple methods substantially more effective in this setting, suggesting that rank collapse and the resulting implicit under-parameterization may be a crucial piece of the puzzle in explaining the challenges of offline RL. Figure 7: DQN and CQL with Lp(Φ) penalty vs. their standard counterparts in the 5% offline setting on Atari from Section 3 (per-game % improvement, log scale). Lp improves DQN on 16/16 and CQL on 11/16 games. We also evaluated the regularizer Lp(Φ) in the data-efficient online RL setting, with results in Appendix A.6. This variant achieved a median performance improvement of 20.6% with Rainbow (Hessel et al., 2018), but performed poorly with DQN, where it reduced median performance by 11.5%. Thus, while our proposed penalty is effective in many cases in offline and online settings, it does not solve the problem fully, i.e., it does not address the root cause of implicit under-parameterization and only addresses a symptom, and a more sophisticated solution may better prevent the issues with implicit under-parameterization. Nevertheless, our results suggest that mitigation of implicit under-parameterization can improve performance of data-efficient RL. 6 RELATED WORK Prior work has extensively studied the learning dynamics of Q-learning with tabular and linear function approximation, to study error propagation (Munos, 2003; Farahmand et al., 2010) and to prevent divergence (De Farias, 2002; Maei et al., 2009; Sutton et al., 2009; Dai et al., 2018), as opposed to the deep Q-learning analyzed in this work. Q-learning has been shown to have favorable optimization properties with certain classes of features (Ghosh & Bellemare, 2020), but our work shows that the features learned by a neural net when minimizing TD error do not enjoy such guarantees, and instead suffer from rank collapse. Recent theoretical analyses of deep Q-learning have shown convergence under restrictive assumptions (Yang et al., 2020; Cai et al., 2019; Zhang et al., 2020; Xu & Gu, 2019), but Theorem 4.2 shows that implicit under-parameterization appears when the estimates of the value function approach the optimum, potentially preventing convergence. Xu et al. (2005; 2007) present variants of LSTD (Boyan, 1999), which model the Q-function as a kernel machine but do not take into account the regularization from gradient descent, as done in Equation 1, which is essential for implicit under-parameterization. Igl et al. (2020); Fedus et al. (2020a) argue that non-stationarity arising from distribution shift hinders generalization and recommend periodic network re-initialization. Under-parameterization is not caused by this distribution shift, and we find that network re-initialization does little to prevent rank collapse (Figure 4c). Luo et al. (2020) propose a regularization similar to ours, but in a different setting, finding that more expressive features increase the performance of on-policy RL methods.
Finally, Yang et al. (2019) study the effective rank of the Q∗-values when expressed as a |S| × |A| matrix in online RL and find that low ranks for this Q∗-matrix are preferable. We analyze a fundamentally different object: the learned features (and illustrate that a rank-collapse of features can hurt), not the Q∗-matrix, whose rank is upper-bounded by the number of actions (e.g., 24 for Atari). 7 DISCUSSION We identified an implicit under-parameterization phenomenon in deep RL algorithms that use bootstrapping, where gradient-based optimization of a bootstrapped objective can lead to a reduction in the expressive power of the value network. This effect manifests as a collapse of the rank of the features learned by the value network, causing aliasing across states and often leading to poor performance. Our analysis reveals that this phenomenon is caused by the implicit regularization due to gradient descent on bootstrapped objectives. We observed that mitigating this problem by means of a simple regularization scheme improves performance of deep Q-learning methods. While our proposed regularization provides some improvement, devising better mitigation strategies for implicit under-parameterization remains an exciting direction for future work. Our method explicitly attempts to prevent rank collapse, but relies on the emergence of useful features solely through the bootstrapped signal. An alternative path may be to develop new auxiliary losses (e.g., Jaderberg et al., 2016) that learn useful features while passively preventing underparameterization. More broadly, understanding the effects of neural nets and associated factors such as initialization, choice of optimizer, etc. on the learning dynamics of deep RL algorithms, using tools from deep learning theory, is likely to be key towards developing robust and data-efficient deep RL algorithms. ACKNOWLEDGEMENTS We thank Lihong Li, Aaron Courville, Aurick Zhou, Abhishek Gupta, George Tucker, Ofir Nachum, Wesley Chung, Emmanuel Bengio, Zafarali Ahmed, and Jacob Buckman for feedback on an earlier version of this paper. We thank Hossein Mobahi for insightful discussions about self-distillation and Hanie Sedghi for insightful discussions about implicit regularization and generalization in deep networks. We additionally thank Michael Janner, Aaron Courville, Dale Schuurmans and Marc Bellemare for helpful discussions. AK was partly funded by the DARPA Assured Autonomy program, and DG was supported by a NSF graduate fellowship and compute support from Amazon. Appendices A ADDITIONAL EVIDENCE FOR IMPLICIT UNDER-PARAMETERIZATION In this section, we present additional evidence that demonstrates the existence of the implicit underparameterization phenomenon from Section 3. In all cases, we plot the values of srankδ(Φ) computed on a batch size of 2048 i.i.d. sampled transitions from the dataset. DQN (4x data) A.1 OFFLINE RL A.2 DATA EFFICIENT ONLINE RL In the data-efficient online RL setting, we verify the presence of implicit under-parameterization on both DQN and Rainbow (Hessel et al., 2018) algorithms when larger number of gradient updates are made per environment step. In these settings we find that more gradient updates per environment step lead to a larger decrease in effective rank, whereas effective rank can increase when the amount of data re-use is reduced by taking fewer gradient steps. A.3 DOES BOOTSTRAPPING CAUSE IMPLICIT UNDER-PARAMETERIZATION? 
In this section, we provide additional evidence to support our claim from Section 3 that bootstrapping-based updates are a key component behind the existence of implicit under-parameterization. To do so, we demonstrate the following points empirically: • For the final point in this section, we verify that the non-stationarity of the policy in the Q-learning (control) setting, i.e., when the Bellman optimality operator is used, is not the cause behind the emergence of implicit under-parameterization. The non-stationary policy in a control setting causes the targets to change and, as a consequence, leads to non-zero errors. However, rank drop is primarily caused by bootstrapping rather than by the non-stationarity of the control objective. To illustrate this, we ran an experiment in the control setting on the gridworld, regressing to targets computed using the true value function Qπ for the current policy π (computed using tabular Q-evaluation) instead of using the bootstrapped TD estimate. The results, shown in Figure A.11a, show that srankδ does not decrease significantly when regressing to true control values, and in fact increases with more iterations, in contrast to Figure 6a, where the rank drops with bootstrapping. This experiment, together with the experiments discussed above that ablate bootstrapping in the stationary policy evaluation setting, shows that rank deficiency is caused by bootstrapping. A.4 HOW DOES IMPLICIT REGULARIZATION INHIBIT DATA-EFFICIENT RL? Implicit under-parameterization leads to a trade-off between minimizing the TD error and encouraging low-rank features, as shown in Figure 4b. This trade-off often results in a decrease in effective rank at the expense of an increase in TD error, resulting in lower performance. Here we present additional evidence to support this. Figure A.11b shows a gridworld problem with one-hot features, which naturally leads to reduced state aliasing. In this setting, we find that the amount of rank drop with respect to the supervised projection of oracle-computed Q∗ values is quite small, and the regression error to Q∗ actually decreases, unlike the case in Figure 4a, where it remains the same or even increases. The method is able to learn policies that attain good performance as well. Hence, when there is very little rank drop, for example, about 5 rank units in this example, FQI methods are generally able to learn a Φ that can fit Q∗. This provides evidence that typical Q-networks learn Φ that can fit the optimal Q-function when rank collapse does not occur. In Atari, we do not have access to Q∗, and so we instead measure the error in fitting the target value estimates, R + γPπQk. As rank decreases, the TD error increases (Figure A.12) and the value function is unable to fit the target values, culminating in a performance plateau (Figure A.6). A.5 TRENDS IN VALUES OF EFFECTIVE RANK WITH PENALTY In this section, we present the trend in the values of the effective rank when the penalty Lp(Φ) is added. In each plot below, we show the value of srankδ(Φ) with and without the penalty. A.5.1 OFFLINE RL: DQN A.5.2 OFFLINE RL: CQL WITH Lp(Φ) PENALTY A.6 DATA-EFFICIENT ONLINE RL: RAINBOW A.6.1 RAINBOW WITH Lp(Φ) PENALTY: RANK PLOTS A.6.2 RAINBOW WITH Lp(Φ) PENALTY: PERFORMANCE In this section, we present additional results supporting the hypothesis that preventing rank collapse leads to better performance.
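As a reminder of what adding the Lp(Φ) penalty involves in practice, a minimal PyTorch-style sketch is given below. This is an illustrative sketch only, assuming the Equation 6 form σ²max(Φ) − σ²min(Φ) and the trade-off factor α = 0.001 stated in Section 5; the function names are ours and do not correspond to the authors' released training code.

```python
import torch

def lp_penalty(phi):
    """Surrogate rank penalty Lp(Phi) = sigma_max(Phi)^2 - sigma_min(Phi)^2."""
    # torch.linalg.svdvals returns singular values in descending order and is differentiable,
    # so the penalty gradient can flow back into the feature extractor.
    sigma = torch.linalg.svdvals(phi)
    return sigma[0] ** 2 - sigma[-1] ** 2

def regularized_td_loss(q_pred, td_target, phi, alpha=0.001):
    """Mean-squared TD error plus the rank-collapse penalty with trade-off factor alpha."""
    td_error = torch.mean((q_pred - td_target.detach()) ** 2)
    return td_error + alpha * lp_penalty(phi)
```

Because the singular values are computed on the minibatch feature matrix, the penalty pushes the largest and smallest singular values of Φ closer together, which is the intuition behind Equation 6.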
In the first set of experiments, we apply the proposed Lp penalty to Rainbow in the data-efficient online RL setting (n = 4). In the second set of experiments, we present evidence for the prevention of rank collapse by comparing rank values across different runs. As we will show in the next section, the state-of-the-art Rainbow (Hessel et al., 2018) algorithm also suffers from rank collapse in the data-efficient online RL setting when more gradient updates are performed per environment step. In this section, we applied our penalty Lp to Rainbow with n = 4, and obtained a median 20.66% improvement on top of the base method. This result is summarized below. A.7 RELAXING THE NORMALITY ASSUMPTION IN THEOREM 4.1 We can relax the normality assumption on S in Theorem 4.1. An analogous statement holds for non-normal matrices S under a slightly different notion of effective rank, denoted srankδ,λ(Mk), that utilizes eigenvalue norms instead of singular values. Formally, let λ1(Mk), λ2(Mk), · · · be the (complex) eigenvalues of Mk arranged in decreasing order of their norms, i.e., |λ1(Mk)| ≥ |λ2(Mk)| ≥ · · ·; then srankδ,λ(Mk) = min{ k : (∑_{i=1}^{k} |λi(Mk)|) / (∑_{i=1}^{d} |λi(Mk)|) ≥ 1 − δ }. A statement essentially analogous to Theorem 4.1 shows that in this general case, srankδ,λ(Mk) decreases for all (complex) diagonalizable matrices S, which is the set of almost all matrices of size dim(S). Now, if S is approximately normal, i.e., when |σi(S) − |λi(S)|| is small, then the result in Theorem 4.1 also holds approximately, as we discuss at the end of Appendix C. We now provide empirical evidence showing that the trend in the values of effective rank computed using singular values, srankδ(Φ), is almost identical to the trend in the effective rank computed using normalized eigenvalues, srankδ,λ(Φ). Since eigenvalues are only defined for a square matrix Φ, in practice we use a batch of d = dim(φ(s,a)) state-action pairs for computing the eigenvalue rank and compare it to the corresponding singular value rank in Figures A.20 and A.21. Connection to Theorem 4.1. We computed the effective rank of Φ instead of S, since S is a theoretical abstraction that cannot be computed in practice, as it depends on the Green’s kernel (Duffy, 2015) obtained by assuming that the neural network behaves as a kernel regressor. Instead, we compare the different notions of rank of Φ, since Φ is the practical counterpart of the matrix S when using neural networks (as also indicated by the analysis in Section 4.2). In fact, on the gridworld (Figure A.21), we experiment with a feature matrix Φ whose dimension equals the number of state-action pairs, i.e., dim(φ(s,a)) = |S||A|, with the same number of parameters as a kernel parameterization of the Q-function: Q(s,a) = ∑_{s′,a′} w(s′,a′) k(s,a, s′,a′). This can also be considered as performing gradient descent on a “wide” linear network, and we measure the feature rank while observing similar rank trends. Since the normality assumption on S is not required to obtain a decreasing trend in srankδ,λ(Φ), and since we find that in practical scenarios (Figures A.20 and A.21) srankδ(Φ) ≈ srankδ,λ(Φ) with an extremely similar qualitative trend, we believe that Theorem 4.1 still explains the rank collapse practically observed in deep Q-learning and is not vacuous.
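To make the Appendix A.7 comparison concrete, the two notions of effective rank can be computed side by side as in the following NumPy sketch. It is illustrative only, and assumes a square feature matrix built from d = dim(φ(s,a)) state-action pairs, as described above.

```python
import numpy as np

def effective_rank(values, delta=0.01):
    """Smallest k whose top-k values cover a (1 - delta) fraction of the total mass."""
    values = np.sort(values)[::-1]
    cumulative = np.cumsum(values) / np.sum(values)
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)

def srank_singular(phi_square, delta=0.01):
    # srank_delta: effective rank computed from singular values.
    return effective_rank(np.linalg.svd(phi_square, compute_uv=False), delta)

def srank_eigen(phi_square, delta=0.01):
    # srank_{delta,lambda}: effective rank computed from eigenvalue norms (phi must be square).
    return effective_rank(np.abs(np.linalg.eigvals(phi_square)), delta)
```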
A.8 NORMALIZED PLOTS FOR FIGURE 3 / FIGURE A.6 In this section, we provide a set of normalized srank and performance trends for Atari games (the corresponding unnormalized plots are found in Figure A.6). In these plots, each unit on the x-axis is equivalent to one gradient update, and so, since n = 8 prescribes 8× as many updates as n = 1, it runs for 8× as long as n = 1. These plots are in Figure A.22. Note that the trend that effective rank decreases with larger n values also persists when rescaling the x-axis to account for the number of gradient steps, in all but one game. This is expected, since it tells us that performing bootstrapping-based updates in the data-efficient setting (larger n values) still leads to a more aggressive rank drop, as updates are being performed on a relatively more static dataset for larger values of n. B HYPERPARAMETERS & EXPERIMENT DETAILS B.1 ATARI EXPERIMENTS We follow the experiment protocol from Agarwal et al. (2020) for all our experiments, including hyperparameters and agent architectures provided in Dopamine, and report them for completeness and ease of reproducibility in Table B.1. We only perform hyperparameter selection over the regularization coefficient αp, based on results from 5 Atari games (Asterix, Seaquest, Pong, Breakout, and Qbert). We will also open source our code to further aid in reproducing our results. Evaluation Protocol. Following Agarwal et al. (2020), the Atari environments used in our experiments are stochastic due to sticky actions, i.e., there is a 25% chance at every time step that the environment will execute the agent’s previous action again, instead of the agent’s new action. All agents (online or offline) are compared using the best evaluation score (averaged over 5 runs) achieved during training, where evaluation is done online every training iteration using an ε-greedy policy with ε = 0.001. We report offline training results with the same hyperparameters over 5 random seeds of the DQN replay data collection, game simulator, and network initialization. Offline Dataset. As suggested by Agarwal et al. (2020), we randomly subsample the DQN Replay dataset containing 50 million transitions to create smaller offline datasets with the same data distribution as the original dataset. We use the 5% DQN replay dataset for most of our experiments. We also report results using the 20% dataset setting (4x larger) to show that our claims remain valid even when we have higher coverage over the state space. Optimizer-related hyperparameters. For existing off-policy agents, step size and optimizer were taken as published. We used the DQN (Adam) algorithm for all our experiments, given its superior performance over DQN (Nature), which uses RMSProp, as reported by Agarwal et al. (2020). Atari 2600 games used. For all our experiments in Section 3, we used the same set of 5 games as utilized by Agarwal et al. (2020); Bellemare et al. (2017) to present analytical results. For our empirical evaluation in Appendix A.5, we use the set of games employed by Fedus et al. (2020b), which are deemed suitable for offline RL by Gulcehre et al. (2020). Similar in spirit to Gulcehre et al. (2020), we use the set of 5 games used for analysis for hyperparameter tuning for offline RL methods. 5 games subset: ASTERIX, QBERT, PONG, SEAQUEST, BREAKOUT. 16 game subset: In addition to the 5 games above, the following 11 games: DOUBLE DUNK, JAMES BOND, MS. PACMAN, SPACE INVADERS, ZAXXON, WIZARD OF WOR, YARS’ REVENGE, ENDURO, ROAD RUNNER, BEAMRIDER, DEMON ATTACK. B.2 GRIDWORLD EXPERIMENTS We use the gridworld suite from Fu et al. (2019) to obtain gridworlds for our experiments.
All of our gridworld results are computed using the 16 × 16 GRID16SMOOTHOBS environment, which consists of a 256-cell grid, with walls arising randomly with a probability of 0.2. Each state allows 5 different actions (subject to hitting the boundary of the grid): move left, move right, move up, move down and no op. The goal in this environment is to minimize the cumulative discounted distance to a fixed goal location where the discount factor is given by γ = 0.95. The features for this Q-function are given by randomly chosen vectors which are smoothened spatially in a local neighborhood of a grid cell (x, y). We use a deep Q-network with two hidden layers of size (64, 64), and train it using soft Q-learning with entropy coefficient of 0.1, following the code provided by authors of Fu et al. (2019). We use a first-in-first out replay buffer of size 10000 to store past transitions. C PROOFS FOR SECTION 4.1 In this section, we provide the technical proofs from Section 4.1. We first derive a solution to optimization problem Equation 1 and show that it indeed comes out to have the form described in Equation 2. We first introduce some notation, including definition of the kernel G which was used for this proof. This proof closely follows the proof from Mobahi et al. (2020). Definitions. For any universal kernel u, the Green’s function (Duffy, 2015) of the linear kernel operator L given by: [LQ] (s,a) := ∑ (s′,a′) u((s,a), (s ′,a′))Q(s′,a′) is given by the function g((s,a), (s′,a′)) that satisfies:∑ (s,a) u((s,a), (s′,a′)) g((s′,a′), (s̄, ā)) = δ((s,a)− (s̄, ā)), (C.1) where δ is the Dirac-delta function. Thus, Green’s function can be understood as a kernel that “inverts” the universal kernel u to the identity (Dirac-delta) matrix. We can then define the matrix G as the matrix of vectors g(s,a) evaluated on the training dataset, D, however note that the functional g(s,a) can be evaluated for other state-action tuples, not present in D. G((si,ai), (sj ,aj)) := g((si,ai), (sj ,aj)) and g(s,a)[i] = g((s,a), (si,ai)) ∀(si,ai) ∈ D. (C.2) Lemma C.0.1. The solution to Equation 1 is given by Equation 2. Proof. This proof closely follows the proof of Proposition 1 from (Mobahi et al., 2020). We revisit key aspects the key parts of this proof here. We restate the optimization problem below, and solve for the optimum Qk to this equation by applying the functional derivative principle. min Q∈Q J(Q) := ∑ si,ai∈D (Q(si,ai)− yk(si,ai))2 + c ∑ (s,a) ∑ (s′,a′) u((s,a), (s′,a′))Q(s,a)Q(s′,a′). The functional derivative principle would say that the optimal Qk to this problem would satisfy, for any other function f and for a small enough ε→ 0, ∀f ∈ Q : ∂J(Qk + εf) ∂ε ∣∣∣ ε=0 = 0. (C.3) By setting the gradient of the above expression to 0, we obtain the following stationarity conditions on Qk (also denoting (si,ai) := xi) for brevity:∑ xi∈D δ(x− xi) (Qk(xi)− yk(xi)) + c ∑ x u(x,x′)Qk(x ′) = 0. (C.4) Now, we invoke the definition of the Green’s function discussed above and utilize the fact that the Dirac-delta function can be expressed in terms of the Green’s function, we obtain a simplified version of the above relation:∑ x u(x,x′) ∑ xi∈D (Qk(xi)− yk(xi))g(x′,xi) = −c ∑ x u(x,x′)Qk(x ′). (C.5) Since the kernel u(x,x′) is universal and positive definite, the optimal solution Qk(x) is given by: Qk(s,a) = − 1 c ∑ (si,ai)∈D (Qk(si,ai)− yk(si,ai)) · g((s,a), (si,ai)). 
(C.6) Finally we can replace the expression for residual error, Qk(si,ai) − yk(si,ai) using the green’s kernel on the training data by solving for it in closed form, which gives us the solution in Equation 2. Qk(s,a) = − 1 c gT(s,a)(Qk − yk) = g T (s,a)(cI + G) −1yk. (C.7) Next, we now state and prove a slightly stronger version of Theorem 4.1 that immediately implies the original theorem. Theorem C.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)∞l=1 starting from k1 = 0, such that, for any two singular-values σi(S) and σj(S) of S with σi(S) ≤ σj(S), ∀ l ∈ N and l′ ≥ l, σi(Mkl′ ) σj(Mkl′ ) < σi(Mkl) σj(Mkl) ≤ σi(S) σj(S) . (C.8) Therefore, the effective rank of Mk satisfies: srankδ(Mkl′ ) ≤ srankδ(Mkl). Furthermore, ∀ l ∈ N and t ≥ kl, σi(Mt) σj(Mt) < σi(Mkl) σj(Mkl) +O (( σi(S) σj(S) )kl) . (C.9) Therefore, the effective rank of Mt, srankδ(Mt), outside the chosen subsequence is also controlled above by the effective rank on the subsequence (srankδ(Mkl)) ∞ l=1. To prove this theorem, we first show that for any two fitting iterations, t < t′, if St and St ′ are positive semi-definite, the ratio of singular values and the effective rank decreases from t to t′. As an immediate consequence, this shows that when S is positive semi-definite, the effective rank decreases at every iteration, i.e., by setting kl = l (Corollary C.1.1). To extend the proof to arbitrary normal matrices, we show that for any S, a sequence of fitting iterations (kl)∞l=1 can be chosen such that S kl is (approximately) positive semi-definite. For this subsequence of fitting iterations, the ratio of singular values and effective rank also decreases. Finally, to control the ratio and effective rank on fitting iterations t outside this subsequence, we construct an upper bound on the ratio f(t): σi(Mt)σj(Mt) < f(t), and relate this bound to the ratio of singular values on the chosen subsequence. Lemma C.1.1 (srankδ(Mk) decreases when Sk is PSD.). Let S be a shorthand for S = γPπA and assume S is a normal matrix. Choose any t, t′ ∈ N such that t < t′. If St and St′ are positive semi-definite, then for any two singular-values σi(S) and σj(S) of S, such that 0 < σi(S) < σj(S): σi(Mt′) σj(Mt′) < σi(Mt) σj(Mt) ≤ σi(S) σj(S) . (C.10) Hence, the effective rank of Mk decreases from t to t′: srankδ(Mt′) ≤ srankδ(Mt). Proof. First note that Mk is given by: Mk := k∑ i=1 γk−i(PπA)k−i = k∑ i=1 Sk−i. (C.11) From hereon, we omit the leading γ term since it is a constant scaling factor that does not affect ratio or effective rank. Almost every matrix S admits a complex orthogonal eigendecomposition. Thus, we can write S := Uλ(S)UH . And any power of S, i.e., , Si can be expressed as: Si = Uλ(S)iUH , and hence, we can express Mk as: Mk := U ( k−1∑ i=0 λ(S)i ) UH = U · diag ( 1− λ(S)k 1− λ(S) ) · UH . (C.12) Since S is normal, its eigenvalues and singular values are further related as σk(S) = |λk(S)|. And this also means that Mk is normal, indicating that σi(Mk) = |λi(Mk)|. Thus, the singular values of Mk can be expressed as σi(Mk) := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.13) When Sk is positive semi-definite, λi(S)k = σi(S)k, enabling the following simplification: σi(Mk) = |1− σi(S)k| |1− λi(S)| . (C.14) To show that the ratio of singular values decreases from t to t′, we need to show that f(σ) = |1−σ t′ | |1−σt| is an increasing function of σ when t′ > t. It can be seen that this is the case, which implies the desired result. 
To further show that srankδ(Mt) ≥ srankδ(Mt′), we can simply show that ∀i ∈ [1, · · · , n], hk(i) := ∑i j=1 σj(Mk)∑n j=1 σj(Mk) increases with k, and this would imply that the srankδ(Mk) cannot increase from k = t to k = t′. We can decompose hk(i) as: hk(i) = i∑ j=1 σj(Mk)∑ l σl(Mk) = 1 1 + ∑n j=i+1 σj(Mk)∑i j=1 σj(Mk) . (C.15) Since σj(Mk)/σl(Mi) decreases over time k for all j, l if σj(S) ≤ σl(S), the ratio in the denominator of hk(i) decreases with increasing k implying that hk(i) increases from t to t′. Corollary C.1.1 (srankδ(Mk) decreases for PSD S matrices.). Let S be a shorthand for S = γPπA. Assuming that S is positive semi-definite, for any k, t ∈ N, such that t > k and that for any two singular-values σi(S) and σj(S) of S, such that σi(S) < σj(S), σi(Mt) σj(Mt) < σi(Mk) σj(Mk) ≤ σi(S) σj(S) . (C.16) Hence, the effective rank of Mk decreases with more fitting iterations: srankδ(Mt) ≤ srankδ(Mk). In order to now extend the result to arbitrary normal matrices, we must construct a subsequence of fitting iterations (kl)∞l=1 where S kl is (approximately) positive semi-definite. To do so, we first prove a technical lemma that shows that rational numbers, i.e., numbers that can be expressed as r = pq , for integers p, q ∈ Z are “dense” in the space of real numbers. Lemma C.1.2 (Rational numbers are dense in the real space.). For any real number α, there exist infinitely many rational numbers pq such that α can be approximated by p q upto 1 q2 accuracy.∣∣∣∣α− pq ∣∣∣∣ ≤ 1q2 . (C.17) Proof. We first use Dirichlet’s approximation theorem (see Hlawka et al. (1991) for a proof of this result using a pigeonhole argument and extensions) to obtain that for any real numbers α andN ≥ 1, there exist integers p and q such that 1 ≤ q ≤ N and, |qα− p| ≤ 1 |N |+ 1 < 1 N . (C.18) Now, since q ≥ 1 > 0, we can divide both sides by q, to obtain:∣∣∣∣α− pq ∣∣∣∣ ≤ 1Nq ≤ 1q2 . (C.19) To obtain infinitely many choices for pq , we observe that Dirichlet’s lemma is valid only for all values of N that satisfy N ≤ 1|qα−p| . Thus if we choose an N ′ such that N ′ ≥ Nmax where Nmax is defined as: Nmax = max { 1 |q′α− p′| ∣∣∣ p′, q′ ∈ Z, 1 ≤ q′ ≤ q} . (C.20) Equation C.20 essentially finds a new value of N , such that the current choices of p and q, which were valid for the first value ofN do not satisfy the approximation error bound. Applying Dirichlet’s lemma to this new value of N ′ hence gives us a new set of p′ and q′ which satisfy the 1q′2 approximation error bound. Repeating this process gives us countably many choices of (p, q) pairs that satisfy the approximation error bound. As a result, rational numbers are dense in the space of real numbers, since for any arbitrarily chosen approximation accuracy given by 1q2 , we can obtain atleast one rational number, pq which is closer to α than 1 q2 . This proof is based on Johnson (2016). Now we utilize Lemmas C.1.1 and C.1.2 to prove Proposition 4.1. Proof of Proposition 4.1 and Theorem C.1 Recall from the proof of Lemma C.1.1 that the singular values of Mk are given by: σi(Mk) := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.21) Bound on Singular Value Ratio: The ratio between σi(Mk) and σj(Mk) can be expressed as σi(Mk) σj(Mk) = ∣∣∣∣ 1− λi(S)k1− λj(S)k ∣∣∣∣ ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ . (C.22) For a normal matrix S, σi(S) = |λi(S)|, so this ratio can be bounded above as σi(Mk) σj(Mk) ≤ 1 + σi(S) k |1− σj(S)k| ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ . (C.23) Defining f(k) to be the right hand side of the equation, we can verify that f is a monotonically decreasing function in k when σi < σj . 
This shows that this ratio of singular values in bounded above and in general, must decrease towards some limit limk→∞ f(k). Construction of Subsequence: We now show that there exists a subsequence (kl)∞l=1 for which Skl is approximately positive semi-definite. For ease of notation, let’s represent the i-th eigenvalue as λi(S) = |λi(S)| · eiθi , where θi > 0 is the polar angle of the complex value λi(s) and |λi(S)| is its magnitude (norm). Now, using Lemma C.1.2, we can approximate any polar angle, θi using a rational approximation, i.e., , we apply lemma C.1.2 on θi2π ∃ pi, qi ∈ N, s.t. ∣∣∣∣ θi2π − piqi ∣∣∣∣ ≤ 1q2i . (C.24) Since the choice of qi is within our control we can estimate θi for all eigenvalues λi(S) to infinitesimal accuracy. Hence, we can approximate θi ≈ 2π piqi . We will now use this approximation to construct an infinite sequence (kl)∞l=1, shown below: kl = l · LCM(q1, · · · , qn) ∀ j ∈ N, (C.25) where LCM is the least-common-multiple of natural numbers q1, · · · qn. In the absence of any approximation error in θi, we note that for any i and for any l ∈ N as defined above, λi(S)kl = |λi(S)|kl · exp ( 2iπ · piqi · kl ) = |λi(S)|kl , since the polar angle for any kl is going to be a multiple of 2π, and hence it would fall on the real line. As a result, Skl will be positive semi-definite, since all eigenvalues are positive and real. Now by using the proof for Lemma C.1.1, we obtain the ratio of i and j singular values are increasing over the sequence of iterations (kj)∞j=1. Since the approximation error in θi can be controlled to be infinitesimally small to prevent any increase in the value of srankδ due to it (this can be done given the discrete form of srankδ), we note that the above argument applies even with the approximation, proving the required result on the subsequence. Controlling All Fitting Iterations using Subsequence: We now relate the ratio of singular values within this chosen subsequence to the ratio of singular values elsewhere. Choose t, l ∈ N such that t > kl. Earlier in this proof, we showed that the ratio between singular values is bounded above by a monotonically decreasing function f(t), so σi(Mt) σj(Mt) ≤ f(t) < f(kl). (C.26) Now, we show that that f(kl) is in fact very close to the ratio of singular values: f(kl) = |1− σi(S)kl | |1− σj(S)kl | ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ ≤ σi(Mt)σj(Mt) + 2σi(S) kl |1− σj(S)kl | ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣. (C.27) The second term goes to zero as kl increases; algebraic manipulation shows that this gap be bounded by f(kl) ≤ σi(Mkl) σj(Mkl) + ( σi(S) σj(S) )kl 2σj(S) |1− σj(S)| ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣︸ ︷︷ ︸ constant . (C.28) Putting these inequalities together proves the final statement, σi(Mt) σj(Mt) ≤ σi(Mkl) σj(Mkl) +O (( σi(S) σj(S) )kl) . (C.29) Extension to approximately-normal S. We can extend the result in Theorem C.1 (and hence also Theorem 4.1) to approximately-normal S. Note that the main requirement for normality of S (i.e., σi(S) = |λi(s)|) is because it is straightforward to relate the eigenvalue of S to M as shown below. |λi(Mk)| := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.30) Now, since the matrix S is approximately normal, we can express it using its Schur’s triangular form as, S = U · (Λ + N) ·UH , where Λ is a diagonal matrix and N is an “offset” matrix. The departure from normality of S is defined as: ∆(S) := infN ||N||2, where the infimum is computed over all matrices N that can appear in the Schur triangular form for S. For a normal S only a single value of N = 0 satisfies the Schur’s triangular form. 
For an approximately normal matrix S, ||N||2 ≤ ∆(S) ≤ ε, for a small ε. Furthermore note that from Equation 6 in Ruhe (1975), we obtain that |σi(S)− |λi(S)|| ≤ ∆(S) ≤ ε, (C.31) implying that singular values and norm-eigenvalues are close to each other for S. Next, let us evaluate the departure from normality of Mk. First note that, Sj = U · (Λ +N)j ·UH , and so, Mk = U · (∑k j=1(Λ + N) j ) ·UH and if ||N||2 ≤ ε, for a small epsilon (i.e., considering only terms that are linear in N for (Λ + N)j), we note that: |σi(Mk)− |λi(Mk)|| ≤ k∑ j=1 j · |λ1(S)|j−1∆(S) ≤ 1 (1− |λ1(S)|)2 · ε. (C.32) Thus, the matrix Mk is also approximately normal provided that the max eigenvalue norm of S is less than 1. This is true, since S = γPπA (see Theorem 4.1, where both Pπ and A have eigenvalues less than 1, and γ < 1. Given that we have shown that Mk is approximately normal, we can show that srankδ(Mk) only differs from srankδ,λ(Mk), i.e., , the effective rank of eigenvalues, in a bounded amount. If the value of ε is then small enough, we still retain the conclusion that srankδ(Mk) generally decreases with more training by following the proof of Theorem C.1. D PROOFS FOR SECTION 4.2 In this section, we provide technical proofs from Section 4.2. We start by deriving properties of optimization trajectories of the weight matrices of the deep linear network similar to Arora et al. (2018) but customized to our set of assumptions, then prove Proposition 4.1, and finally discuss how to extend these results to the fitted Q-iteration setting and some extensions not discussed in the main paper. Similar to Section 4.1, we assume access to a dataset of transitions, D = {(si,ai, r(si,ai), s′i} in this section, and assume that the same data is used to re-train the function. Notation and Definitions. The Q-function is represented using a deep linear network with at least 3 layers, such that Q(s,a) = WNWN−1 · · ·W1[s;a], where N ≥ 3,WN ∈ R1×dN−1 , (D.1) and Wi ∈ Rdi×di−1 for i = 1, . . . , N − 1. We index the weight matrices by a tuple (k, t): Wj(k, t) denotes the weight matrix Wj at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). Let the end-to-end weight matrix WNWN−1 · · ·W1 be denoted shorthand as WN :1, and let the features of the penultimate layer of the network, be denoted as Wφ(k, t) := WN−1(k, t) · · ·W1(k, t). Wφ(k, t) is the matrix that maps an input [s;a] to corresponding features Φ(s,a). In our analysis, it is sufficient to consider the effective rank of Wφ(k, t) since the features Φ are given by: Φ(k, t) = Wφ(k, t)[S;A], which indicates that: rank(Φ(k, t)) = rank(Wφ(k, t)[S;A]) ≤ min (rank(Wφ(k, t)), rank([S;A])) . Assuming the state-action space has full rank, we are only concerned about rank(Wφ(k, t)) which justifies our choice for analyzing srankδ(Wφ(k, t)). Let Lk+1(WN :1(k, t)) denote the mean squared Bellman error optimization objective in the k-th fitting iteration. Lk+1(WN :1(k, t)) = |D|∑ i=1 (WN (k, t)Wφ(k, t)[si;ai]− yk(si,ai))2 , where yk = R + γPπQk. When gradient descent is used to update the weight matrix, the updates to Wi(k, t) are given by: Wj(k, t+ 1)←Wj(k, t)− η ∂Lk+1(WN :1(k, t)) ∂Wj(k, t) . If the learning rate η is small, we can approximate this discrete time process with a continuous-time differential equation, which we will use for our analysis. We use Ẇ (k, t) to denote the derivative of W (k, t) with respect to t, for a given k. 
Ẇj(k, t) = −η ∂Lk+1(WN:1(k, t)) / ∂Wj(k, t). (D.2) In order to quantify the evolution of the singular values of the weight matrix Wφ(k, t), we start by characterizing the evolution of Wφ(k, t) itself using a more interpretable differential equation. In order to do so, we make an assumption similar to, but not identical to, that of Arora et al. (
1. What are the main contributions of the paper regarding feature rank collapse in RL algorithms? 2. What are the strengths of the paper's theoretical analysis using Neural Tangent Kernel framework? 3. Do you have any questions or issues with the assumptions made in the paper, such as the assumption of S being a normal matrix in theorem 4.1? 4. Can the theoretical analysis be extended to policy training settings? 5. Can you provide more information about the red trajectory in Figure 3 (d) that does not experience rank collapse? 6. Can you clarify the assumption regarding singular values (a.k.a. "balancedness") in the paper? 7. Are there any typos or errors in the equations, such as the absence of lambda in equation (C.12)? 8. Can you explain the meaning of 0 in dL_0(W_{N:1})/dW_{N:1}? 9. Why do the authors claim that Rainbow performance increased, while DQN performance decreased in online settings?
Review
Review The main contributions of the paper are the following: (1) identifying the feature rank collapse problem in RL algorithms that use bootstrapping and gradient descent optimization for value function estimation, and pinning this problem down to these two factors; (2) a theoretical analysis of rank collapse based on the Neural Tangent Kernel framework and ideas from the analysis of continuous-time differential equations, in particular showing that rank collapses near the optimal point, when fitting resembles self-distillation; and (3) a regularization-term heuristic to prevent rank collapse. Overall, the paper contains a very extensive experimental part, a theoretical part, and a very well-motivated idea. However, the authors tried to put too much information into one paper, so it is sometimes difficult to follow. For example, Proposition 4.1 is difficult to follow, since a lot of interesting and important details are hidden in the Appendix. Some questions and issues. There is an assumption that S is a normal matrix in Theorem 4.1. How restrictive is this assumption? Is it possible to extend the theoretical analysis from the policy evaluation setting to the policy training setting, i.e., when the Bellman optimality operator is used instead of the Bellman operator? I would recommend adding a title to Figure 3 (a). Figure 3 (d) contains one red trajectory for which srank does not collapse. Could you please comment on what special properties this trajectory has such that its srank stays almost the same? "Similar to Arora et al. (2018; 2019), we assume that all except the last-layer weights share singular values (a.k.a. “balancedness”)." According to the Appendix, the stronger assumption W_jW_j^T = W_{j+1}^TW_{j+1} is required. I assume a lambda is missing in equation (C.12). A question regarding the equation between D.4 and D.5: I understand how the derivative was computed, but I am not sure that I understand what 0 means in dL_0(W_{N:1})/dW_{N:1}. I am a bit puzzled by the fact that Rainbow performance increased, while DQN performance decreased in the online setting. What is the key underlying component that leads to the different results?
ICLR
Title Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning Abstract We identify an implicit under-parameterization phenomenon in value-based deep RL methods that use bootstrapping: when value functions, approximated using deep neural networks, are trained with gradient descent using iterated regression onto target values generated by previous instances of the value network, more gradient updates decrease the expressivity of the current value network. We characterize this loss of expressivity via a drop in the rank of the learned value network features, and show that this typically corresponds to a performance drop. We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings. We formally analyze this phenomenon and show that it results from a pathological interaction between bootstrapping and gradient-based optimization. We further show that mitigating implicit under-parameterization by controlling rank collapse can improve performance. 1 INTRODUCTION Many pervasive deep reinforcement learning (RL) algorithms estimate value functions using bootstrapping, that is, by sequentially fitting value functions to target value estimates generated from the value function learned in the previous iteration. Despite high-profile achievements (Silver et al., 2017), these algorithms are highly unreliable due to poorly understood optimization issues. Although a number of hypotheses have been proposed to explain these issues (Achiam et al., 2019; Bengio et al., 2020; Fu et al., 2019; Igl et al., 2020; Liu et al., 2018; Kumar et al., 2020a), a complete understanding remains elusive. We identify an “implicit under-parameterization” phenomenon that emerges when value networks are trained using gradient descent combined with bootstrapping. This phenomenon manifests as an excessive aliasing of features learned by the value network across states, which is exacerbated with more gradient updates. While the supervised deep learning literature suggests that some feature aliasing is desirable for generalization (e.g., Gunasekar et al., 2017; Arora et al., 2019), implicit under-parameterization exhibits more pronounced aliasing than in supervised learning. This over-aliasing causes an otherwise expressive value network to implicitly behave as an under-parameterized network, often resulting in poor performance. Implicit under-parameterization becomes aggravated when the rate of data re-use is increased, restricting the sample efficiency of deep RL methods. In online RL, increasing the number of gradient steps in between data collection steps for data-efficient RL (Fu et al., 2019; Fedus et al., 2020b) causes the problem to emerge more frequently. In the extreme case when no additional data is collected, referred to as offline RL (Lange et al., 2012; Agarwal et al., 2020; Levine et al., 2020), implicit under-parameterization manifests consistently, limiting the viability of offline methods. We demonstrate the existence of implicit under-parameterization in common value-based deep RL methods, including Q-learning (Mnih et al., 2015; Hessel et al., 2018) and actor-critic (Haarnoja et al., 2018), as well as neural fitted-Q iteration (Riedmiller, 2005; Ernst et al., 2005). To isolate the issue, we study the effective rank of the features in the penultimate layer of the value network (Section 3). We observe that after an initial learning period, the rank of the learned features drops steeply. 
As the rank decreases, the ability of the features to fit subsequent target values and the optimal value function generally deteriorates and results in a sharp decrease in performance (Section 3.1). ∗Equal Contribution. Correspondence to Aviral Kumar < [email protected] > and Rishabh Agarwal < [email protected] >. To better understand the emergence of implicit under-parameterization, we formally study the dynamics of Q-learning under two distinct models of neural net behavior (Section 4): kernel regression (Jacot et al., 2018; Mobahi et al., 2020) and deep linear networks (Arora et al., 2018). We corroborate the existence of this phenomenon in both models, and show that implicit underparameterization stems from a pathological interaction between bootstrapping and the implicit regularization of gradient descent. Since value networks are trained to regress towards targets generated by a previous version of the same model, this leads to a sequence of value networks of potentially decreasing expressivity, which can result in degenerate behavior and a drop in performance. The main contribution of this work is the identification of implicit under-parameterization in deep RL methods that use bootstrapping. Empirically, we demonstrate a collapse in the rank of the learned features during training, and show it typically corresponds to a drop in performance in the Atari (Bellemare et al., 2013) and continuous control Gym (Brockman et al., 2016) benchmarks in both the offline and data-efficient online RL settings. We verify the emergence of this phenomenon theoretically and characterize settings where implicit under-parameterization can emerge. We then show that mitigating this phenomenon via a simple penalty on the singular values of the learned features improves performance of value-based RL methods in the offline setting on Atari. 2 PRELIMINARIES The goal in RL is to maximize long-term discounted reward in a Markov decision process (MDP), defined as a tuple (S,A, R, P, γ) (Puterman, 1994), with state space S, action space A, a reward function R(s,a), transition dynamics P (s′|s,a) and a discount factor γ ∈ [0, 1). The Q-function Qπ(s,a) for a policy π(a|s), is the expected long-term discounted reward obtained by executing action a at state s and following π(a|s) thereafter, Qπ(s,a) := E [ ∑∞ t=0 γ tR(st,at)]. Qπ(s,a) is the fixed point of the Bellman operator T π , ∀s,a: T πQ(s,a) := R(s,a) + γEs′∼P (·|s,a),a′∼π(·|s′) [Q(s′,a′)], which can be written in vector form as: Qπ = R + γPπQπ . The optimal Q-function, Q∗(s,a), is the fixed point of the Bellman optimality operator T : T Q(s,a) := R(s,a) + γEs′∼P (·|s,a) [maxa′ Q(s′,a′)]. Practical Q-learning methods (e.g., Mnih et al., 2015; Hessel et al., 2018; Haarnoja et al., 2018) convert the Bellman equation into an bootstrapping-based objective for training a Q-network, Qθ, via gradient descent. This objective, known as mean-squared temporal difference (TD) error, is given by: L(θ) = ∑ s,a ( R(s,a) + γQ̄θ(s ′,a′)−Q(s,a) )2 , where Q̄θ is a delayed copy of the Q-function, typically referred to as the target network. These methods train Q-networks via gradient descent and slowly update the target network via Polyak averaging on its parameters. We refer the output of the penultimate layer of the deep Q-network as the learned feature matrix Φ, such that Q(s,a) = wTΦ(s,a), where w ∈ Rd and Φ ∈ R|S||A|×d. Algorithm 1 Fitted Q-Iteration (FQI) 1: Initialize Q-network Qθ , buffer µ. 2: for fitting iteration k in {1, . . . 
, N} do 3: Compute Qθ(s,a) and target values yk(s,a) = r + γmaxa′ Qk−1(s ′,a′) on {(s,a)} ∼ µ for training 4: Minimize TD error for Qθ via t = 1, · · · , T gradient descent updates, minθ (Qθ(s,a)− yk)2 5: end for For simplicity of analysis, we abstract deep Q-learning methods into a generic fitted Q-iteration (FQI) framework (Ernst et al., 2005). We refer to FQI with neural nets as neural FQI (Riedmiller, 2005). In the k-th fitting iteration, FQI trains the Q-function, Qk, to match the target values, yk = R+γPπQk−1 generated using previous Q-function, Qk−1 (Algorithm 1). Practical methods can be instantiated as variants of FQI, with different target update styles, different optimizers, etc. 3 IMPLICIT UNDER-PARAMETERIZATION IN DEEP Q-LEARNING In this section, we empirically demonstrate the existence of implicit under-parameterization in deep RL methods that use bootstrapping. We characterize implicit under-parameterization in terms of the effective rank (Yang et al., 2019) of the features learned by a Q-network. The effective rank of the feature matrix Φ, for a threshold δ (we choose δ = 0.01), denoted as srankδ(Φ), is given by srankδ(Φ) = min { k : ∑k i=1 σi(Φ)∑d i=1 σi(Φ) ≥ 1− δ } , where {σi(Φ)} are the singular values of Φ in decreasing order, i.e., σ1 ≥ · · · ≥ σd ≥ 0. Intuitively, srankδ(Φ) represents the number of “effective” unique components of the feature matrix Φ that form the basis for linearly approximating the Qvalues. When the network maps different states to orthogonal feature vectors, then srankδ(Φ) has high values close to d. When the network “aliases” state-action pairs by mapping them to a smaller subspace, Φ has only a few active singular directions, and srankδ(Φ) takes on a small value. Definition 1. Implicit under-parameterization refers to a reduction in the effective rank of the features, srankδ(Φ), that occurs implicitly as a by-product of learning deep neural network Q-functions. While rank decrease also occurs in supervised learning, it is usually beneficial for obtaining generalizable solutions (Gunasekar et al., 2017; Arora et al., 2019). However, we will show that in deep Q-learning, an interaction between bootstrapping and gradient descent can lead to more aggressive rank reduction (or rank collapse), which can hurt performance. Experimental setup. To study implicit under-parameterization empirically, we compute srankδ(Φ) on a minibatch of state-action pairs sampled i.i.d. from the training data (i.e., the dataset in the offline setting, and the replay buffer in the online setting). We investigate offline and online RL settings on benchmarks including Atari games (Bellemare et al., 2013) and Gym environments (Brockman et al., 2016). We also utilize gridworlds described by Fu et al. (2019) to compare the learned Q-function against the oracle solution computed using tabular value iteration. We evaluate DQN (Mnih et al., 2015) on gridworld and Atari and SAC (Haarnoja et al., 2018) on Gym domains. Data-efficient offline RL. In offline RL, our goal is to learn effective policies by performing Qlearning on a fixed dataset of transitions. We investigate the presence of rank collapse when deep Q-learning is used with broad state coverage offline datasets from Agarwal et al. (2020). In the top row of Figure 2, we show that after an initial learning period, srankδ(Φ) decreases in all domains (Atari, Gym and the gridworld). 
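As a point of reference for how the srankδ(Φ) values reported in these experiments could be computed from a minibatch of features (with δ = 0.01 as above), a minimal NumPy sketch follows; the array name phi and the example shapes are illustrative, not the exact measurement code used in the paper.

```python
import numpy as np

def srank(phi, delta=0.01):
    """Effective rank srank_delta of a feature matrix phi (batch_size x feature_dim)."""
    sigma = np.linalg.svd(phi, compute_uv=False)      # singular values, descending order
    cumulative = np.cumsum(sigma) / np.sum(sigma)
    # Smallest k whose top-k singular values account for a (1 - delta) fraction of the mass.
    return int(np.searchsorted(cumulative, 1.0 - delta) + 1)

# Example: features of 2048 sampled transitions with 512-dimensional features.
phi = np.random.randn(2048, 512)
print(srank(phi))  # un-aliased random features give a value close to the feature dimension
```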
The final value of srankδ(Φ) is often quite small – e.g., in Atari, only 20-100 singular components are active for 512-dimensional features, implying significant underutilization of network capacity. Since under-parameterization is implicitly induced by the learning process, even high-capacity value networks behave as low-capacity networks as more training is performed with a bootstrapped objective (e.g., mean squared TD error). On the gridworld environment, regressing toQ∗ using supervised regression results in a much higher srankδ(Φ) (black dashed line in Figure 2(left)) than when using neural FQI. On Atari, even when a 4x larger offline dataset with much broader coverage is used (blue line in Figure 2), rank collapse still persists, indicating that implicit under-parameterization is not due to limited offline dataset size. Figure 2 (2nd row) illustrates that policy performance generally deteriorates as srank(Φ) drops, and eventually collapses simultaneously with the rank collapse. While we do not claim that implicit under-parameterization is the only issue in deep Q-learning, the results in Figure 2 show that the emergence of this under-parameterization is strongly associated with poor performance. To prevent confounding from the distribution mismatch between the learned policy and the offline dataset, which often affects the performance of Q-learning methods, we also study CQL (Kumar et al., 2020b), an offline RL algorithm designed to handle distribution mismatch. We find a similar degradation in effective rank and performance for CQL (Figure A.3), implying that underparameterization does not stem from distribution mismatch and arises even when the resulting policy is within the behavior distribution (though the policy may not be exactly pick actions observed in the dataset). We provide more evidence in Atari and Gym domains in Appendix A.1. Data-efficient online RL. Deep Q-learning methods typically use very few gradient updates (n) per environment step (e.g., DQN takes 1 update every 4 steps on Atari, n = 0.25). Improving the sample efficiency of these methods requires increasing n to utilize the replay data more effectively. However, we find that using larger values of n results in higher levels of rank collapse as well as performance degradation. In the top row of Figure 3, we show that larger values of n lead to a more aggressive drop in srankδ(Φ) (red vs. blue/orange lines), and that rank continues to decrease with more training. Furthermore, the bottom row illustrates that larger values of n result in worse performance, corroborating Fu et al. (2019); Fedus et al. (2020b). We find similar results with the Rainbow algorithm (Hessel et al., 2018) (Appendix A.2). As in the offline setting, directly regressing to Q∗ via supervised learning does not cause rank collapse (black line in Figure 3). 3.1 UNDERSTANDING IMPLICIT UNDER-PARAMETERIZATION AND ITS IMPLICATIONS How does implicit under-parameterization degrade performance? Having established the presence of rank collapse in data-efficient RL, we now discuss how it can adversely affect performance. As the effective rank of the network features Φ decreases, so does the network’s ability to fit the subsequent target values, and eventually results in inability to fit Q∗. In the gridworld domain, we measure this loss of expressivity by measuring the error in fitting oracle-computed Q∗ values via a linear transformation of Φ. 
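The loss-of-expressivity measurement described above, fitting oracle-computed Q∗ values with a linear transformation of Φ, can be sketched as a simple least-squares probe on frozen features. The sketch below is illustrative; phi and q_star are placeholders for the learned feature matrix and the tabular oracle values.

```python
import numpy as np

def qstar_fit_error(phi, q_star):
    """RMS error of the best linear fit of oracle Q* values from frozen features Phi.

    phi:    (num_state_action_pairs x feature_dim) learned feature matrix.
    q_star: (num_state_action_pairs,) oracle Q* values from tabular value iteration.
    """
    w, *_ = np.linalg.lstsq(phi, q_star, rcond=None)  # least-squares linear read-out
    residual = phi @ w - q_star
    # A large residual means even the best linear read-out cannot represent Q*.
    return float(np.sqrt(np.mean(residual ** 2)))
```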
When rank collapse occurs, the error in fitting Q∗ steadily increases during training, and the consequent network is not able to predict Q∗ at all by the end of training (Figure 4a) – this entails a drop in performance. In Atari domains, we do not have access to Q∗, and so we instead measure TD error, that is, the error in fitting the target value estimates, R + γPπQk. In SEAQUEST, as rank decreases, the TD error increases (Figure 4b) and the value function is unable to fit the target values, culminating in a performance plateau (Figure 3). This observation is consistent across other environments; we present further supporting evidence in Appendix A.4. Does bootstrapping cause implicit under-parameterization? We perform a number of controlled experiments in the gridworld and Atari environments to isolate the connection between rank collapse and bootstrapping. We first remove confounding issues of poor network initialization (Fedus et al., 2020a) and non-stationarity (Igl et al., 2020) by showing that rank collapse occurs even when the Q-network is re-initialized from scratch at the start of each fitting iteration (Figure 4c). To show that the problem is not isolated to the control setting, we show evidence of rank collapse in the policy evaluation setting as well. We trained a value network using fitted Q-evaluation for a fixed policy π (i.e., using the Bellman operator T π instead of T ), and found that rank drop still occurs (FQE in Figure 4d). Finally, we show that by removing bootstrapped updates and instead regressing directly to Monte-Carlo (MC) estimates of the value, the effective rank does not collapse (MC Returns in Figure 4d). These results, along with similar findings on other Atari environments (Appendix A.3), our analysis indicates that bootstrapping is at the core of implicit under-parameterization. 4 THEORETICAL ANALYSIS OF IMPLICIT UNDER-PARAMETERIZATION In this section, we formally analyze implicit under-parameterization and prove that training neural networks with bootstrapping reduces the effective rank of the Q-network, corroborating the empirical observations in the previous section. We focus on policy evaluation (Figure 4d and Figure A.9), where we aim to learn a Q-function that satisfies Q = R+γPπQ for a fixed π, for ease of analysis. We also presume a fixed dataset of transitions, D, to learn the Q-function. 4.1 ANALYSIS VIA KERNEL REGRESSION We first study bootstrapping with neural networks through a mathematical abstraction that treats the Q-network as a kernel machine, following the neural tangent kernel (NTK) formalism (Jacot et al., 2018). Building on prior analysis of self-distillation (Mobahi et al., 2020), we assume that each iteration of bootstrapping, the Q-function optimizes the squared TD error to target labels yk with a kernel regularizer. This regularizer captures the inductive bias from gradient-based optimization of TD error and resembles the regularization imposed by gradient descent under NTK (Mobahi et al., 2020). The error is computed on (si,ai) ∈ D whereas the regularization imposed by a universal kernel u with a coefficient of c ≥ 0 is applied to the Q-values at all state-action pairs as shown in Equation 1. We consider a setting c > 0 for all rounds of bootstrapping, which corresponds to the solution obtained by performing gradient descent on TD error for a small number of iterations with early stopping in each round (Suggala et al., 2018) and thus, resembles how the updates in Algorithm 1 are typically implemented in practice. 
Qk+1 ← arg min Q∈Q ∑ si,ai∈D (Q(si,ai)− yk(si,ai))2 + c ∑ (s,a) ∑ (s′,a′) u((s,a), (s′,a′))Q(s,a)Q(s′,a′). (1) The solution to Equation 1 can be expressed as Qk+1(s,a) = gT(s,a)(cI + G) −1yk, where G is the Gram matrix for a special positive-definite kernel (Duffy, 2015) and g(s,a) denotes the row of G corresponding to the input (s,a) (Mobahi et al., 2020, Proposition 1). A detailed proof is in Appendix C. When combined with the fitted Q-iteration recursion, setting labels yk = R + γPπQk−1, we recover a recurrence that relates subsequent value function iterates Qk+1 = G(cI + G) −1yk = G(cI + G) −1︸ ︷︷ ︸ A [R + γPπQk] = A (∑k i=1 γ k−i (PπA) k−i ) R := AMkR. (2) Equation 2 follows from unrolling the recurrence and setting the algorithm-agnostic initial Q-value, Q0, to be 0. We now show that the sparsity of singular values of the matrix Mk generally increases over fitting iterations, implying that the effective rank of Mk diminishes with more iterations. For this result, we assume that the matrix S is normal, i.e., the norm of the (complex) eigenvalues of S is equal to its singular values. We will discuss how this assumption can be relaxed in Appendix A.7. Theorem 4.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)∞l=1 starting from k1 = 0, such that, for any two singular-values σi(S) and σj(S) of S with σi(S) < σj(S), ∀ l ∈ N and l′ ≥ l, σi(Mkl′ ) σj(Mkl′ ) < σi(Mkl) σj(Mkl) ≤ σi(S) σj(S) . (3) Hence, srankδ(Mkl′ ) ≤ srankδ(Mkl). Moreover, if S is positive semi-definite, then (kl) ∞ l=1 = N, i.e., srank continuously decreases in each fitting iteration. We provide a proof of the theorem above as well as present a stronger variant that shows a gradual decrease in the effective rank for fitting iterations outside this infinite sequence in Appendix C. As k increases along the sequence of iterations given by k = (kl)∞l=1, the effective rank of the matrix Mk drops, leading to low expressivity of this matrix. Since Mk linearly maps rewards to the Qfunction (Equation 2), drop in expressivity results of Mk in the inability to model the actual Qπ . Summary of our analysis. Our analysis of bootstrapping and gradient descent from the view of regularized kernel regression suggests that rank drop happens with more training (i.e., with more rounds of bootstrapping). In contrast to self-distillation (Mobahi et al., 2020), rank drop may not happen in every iteration (and rank may increase between two consecutive iterations occasionally), but srankδ exhibits a generally decreasing trend. 4.2 ANALYSIS WITH DEEP LINEAR NETWORKS UNDER GRADIENT DESCENT While Section 4.1 demonstrates rank collapse will occur in a kernel-regression model of Q-learning, it does not illustrate when the rank collapse occurs. To better specify a point in training when rank collapse emerges, we present a complementary derivation for the case when the Q-function is represented as a deep linear neural network (Arora et al., 2019), which is a widely-studied setting for analyzing implicit regularization of gradient descent in supervised learning (Gunasekar et al., 2017; 2018; Arora et al., 2018; 2019). Our analysis will show that rank collapse can emerge as the generated target values begin to approach the previous value estimate, in particular, when in the vicinity of the optimal Q-function. Proof strategy. 
Our proof consists of two steps: (1) we show that the effective rank of the feature matrix decreases within one fitting iteration (for a given target value) due to the low-rank affinity of gradient descent, and (2) we show that this effective rank drop is “compounded” as we train using a bootstrapped objective. Proposition 4.1 explains (1), and Proposition 4.2, Theorem 4.2 and Appendix D.2 discuss (2). Additional notation and assumptions. We represent the Q-function as a deep linear network with at least N ≥ 3 layers, such that Q(s,a) = WNWφ[s;a], where WN ∈ R^{1×d_{N−1}} and Wφ = WN−1WN−2 · · ·W1 with Wi ∈ R^{di×di−1} for i = 1, . . . , N − 1. Wφ maps an input [s;a] to the corresponding penultimate-layer features Φ(s,a). Let Wj(k, t) denote the weight matrix Wj at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). We define Wk,t = WN(k, t)Wφ(k, t) and LN,k+1(Wk,t) as the TD error objective in the k-th fitting iteration. We study srankδ(Wφ(k, t)), since the rank of the features Φ = Wφ(k, t)[S;A] is equal to the rank of Wφ(k, t) provided the state-action inputs have high rank. We assume that the evolution of the weights is governed by a continuous-time differential equation (Arora et al., 2018) within each fitting iteration k. To simplify the analysis, we also assume that all except the last-layer weights follow a “balancedness” property (Equation D.4), which implies that the weight matrices in consecutive layers of the deep linear network share the same singular values (but with different permutations). However, note that we do not assume balancedness for the last layer, which would trivially lead to rank-1 features, making our analysis strictly more general than conventionally studied deep linear networks. In this model, we can characterize the evolution of the singular values of the feature matrix Wφ(k, t), using techniques analogous to Arora et al. (2019): Proposition 4.1. The singular values of the feature matrix Wφ(k, t) evolve according to: σ̇r(k, t) = −N · (σ²r(k, t))^{1 − 1/(N−1)} · 〈WN(k, t)^T dLN,k+1(Wk,t)/dW, ur(k, t) vr(k, t)^T〉, (4) for r = 1, · · · , min_{i=1,...,N−1} di, where ur(k, t) and vr(k, t) denote the left and right singular vectors of the feature matrix Wφ(k, t), respectively. [Figure: evolution of the singular values of Wφ on SEAQUEST, showing σmax, σ2, σ3, σ10 and σ100 over gradient updates on a log scale.] Solving the differential equation (4) indicates that larger singular values will evolve at an exponentially faster rate than smaller singular values (as we also formally show in Appendix D.1), and the difference in their magnitudes increases disproportionately with increasing t. This behavior also occurs empirically, as illustrated in the figure above (also see Figure D.1), where the larger singular values are orders of magnitude larger than the smaller singular values. Hence, the effective rank, srankδ(Wφ(k, t)), will decrease with more gradient steps within a fitting iteration k.
While the actual form of h is unknown, to facilitate our analysis of bootstrapping, we make a simplification and express this solution as the minimum of Equation 5: min_{Wφ,WN ∈ M} ||WNWφ[s;a] − yk(s,a)||² + λk srankδ(Wφ). (5) Note that the entire optimization path may not correspond to the objective in Equation 5, but Equation 5 represents the final solution of a given fitting iteration. M denotes the set of constraints that a WN obtained via gradient optimization of the TD error must satisfy; however, we do not need to explicitly quantify M in our analysis. λk is a constant that denotes the strength of the rank regularization. Since srankδ is always regularized, our analysis assumes that λk > 0 (see Appendix D.1). Rank drop within a fitting iteration “compounds” due to bootstrapping. In the RL setting, the target values are given by yk(s,a) = r(s,a) + γPπQk−1(s,a). First note that when r(s,a) = 0 and Pπ = I, i.e., when the bootstrapping update resembles self-regression, just “copying over weights” from iteration k − 1 to iteration k is a feasible point for solving Equation 5, which attains zero TD error with no increase in srankδ. A better solution to Equation 5 can thus be obtained by incurring non-zero TD error in exchange for a decreased srank, indicating that in this setting srankδ(Wφ) drops in each fitting iteration, leading to a compounding rank-drop effect. We next extend this analysis to the full bootstrapping setting. Unlike the self-training setting, yk(s,a) is not directly expressible as a function of the previous Wφ(k, T) due to the additional reward and dynamics transformations. Assuming closure of the function class (Assumption D.1) under the Bellman update (Munos & Szepesvári, 2008; Chen & Jiang, 2019), we reason about the compounding effect of rank drop across iterations in Proposition 4.2 (proof in Appendix D.2). Specifically, srankδ can increase in each fitting iteration due to the R and Pπ transformations, but will decrease due to the low-rank preference of gradient descent. This change in rank then compounds as shown below. Proposition 4.2. Assume that the Q-function is initialized to Wφ(0) and WN(0). Let the Q-function class be closed under the backup, i.e., ∃ W^P_N, W^P_φ s.t. (R + γPπQk−1)^T = W^P_N(k) W^P_φ(k)[S;A]^T, and assume that the change in srank due to the dynamics and reward transformations is bounded: srankδ(W^P_φ(k)) ≤ srankδ(Wφ(k − 1)) + ck. Then, srankδ(Wφ(k)) ≤ srankδ(Wφ(0)) + ∑_{j=1}^{k} cj − ∑_{j=1}^{k} ||Qj − yj||/λj. Theorem 4.2. Suppose the target values yk = R + γPπQk−1 are close to the previous value estimate Qk−1, i.e., ∀ s,a, yk(s,a) = Qk−1(s,a) + ε(s,a), with |ε(s,a)| ≪ |Qk−1(s,a)|. Then, there is a constant ε0 depending upon WN and Wφ such that, for all ‖ε‖ < ε0, ck = 0. Thus, srank decreases in iteration k: srankδ(Wφ(k)) ≤ srankδ(Wφ(k − 1)) − ||Qk − yk||/λk.
We provide a complete form, including the expression for ε0, and a proof in Appendix D.3. To empirically show the consequence of Theorem 4.2, namely that a decrease in srankδ(Wφ) values can lead to an increase in the distance to the fixed point in a neighborhood around the fixed point, we performed a controlled experiment on a deep linear net, shown in Figure 5, that measures the relationship between srankδ(Φ) and the error to the projected TD fixed point, |Q − Q∗|. Note that a drop in srankδ(Φ) corresponds to an increased value of |Q − Q∗|, indicating that a rank drop when Q gets close to a fixed point can affect convergence to it.

5 MITIGATING UNDER-PARAMETERIZATION IMPROVES DEEP Q-LEARNING

We now show that mitigating implicit under-parameterization by preventing rank collapse can improve performance. We place special emphasis on the offline RL setting in this section, since it is particularly vulnerable to the adverse effects of rank collapse. We devise a penalty (or a regularizer) Lp(Φ) that encourages a higher effective rank of the learned features, srankδ(Φ), to prevent rank collapse. The effective rank function srankδ(Φ) is non-differentiable, so we choose a simple surrogate that can be optimized over deep networks. Since the effective rank is maximized when the magnitudes of the singular values are roughly balanced, one way to increase the effective rank is to minimize the largest singular value of Φ, σmax(Φ), while simultaneously maximizing the smallest singular value, σmin(Φ). We construct a simple penalty Lp(Φ) derived from this intuition, given by:

Lp(Φ) = σ²max(Φ) − σ²min(Φ). (6)

Lp(Φ) can be computed by invoking the singular value decomposition subroutines in standard automatic differentiation frameworks (Abadi et al., 2016; Paszke et al., 2019). We estimate the singular values over the feature matrix computed on a minibatch, and add the resulting value of Lp as a penalty to the TD error objective, with a tradeoff factor α = 0.001 (a minimal implementation sketch is given below).

Does Lp(Φ) address rank collapse? We first verify whether controlling the minimum and maximum singular values using Lp(Φ) actually prevents rank collapse. When using this penalty on the gridworld problem (Figure 6a), the effective rank does not collapse, but instead gradually decreases at the onset and then plateaus, akin to the evolution of effective rank in supervised learning. In Figure 6b, we plot the evolution of effective rank on two Atari games in the offline setting (all games in Appendix A.5), and observe that using Lp also generally leads to increasing rank values.

Does mitigating rank collapse improve performance? We now evaluate the performance of the penalty using DQN (Mnih et al., 2015) and CQL (Kumar et al., 2020b) on the Atari dataset from Agarwal et al. (2020) (5% replay data), used in Section 3. Figure 7 summarizes the relative improvement from using the penalty on 16 Atari games. Adding the penalty to DQN improves performance on all 16/16 games with a median improvement of 74.5%; adding it to CQL, a state-of-the-art offline algorithm, improves performance on 11/16 games with a median improvement of 14.1%. Prior work has discussed that standard Q-learning methods designed for the online setting, such as DQN, are generally ineffective with small offline datasets (Kumar et al., 2020b; Agarwal et al., 2020).
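As a concrete illustration of the penalty in Equation 6, the sketch below computes Lp(Φ) over a minibatch of penultimate-layer features and adds it to a TD loss. This is a minimal, illustrative sketch rather than the authors' released implementation; `features` and `td_loss` are placeholder names, while the tradeoff factor follows the α = 0.001 stated above.

```python
import torch

def rank_penalty(features: torch.Tensor) -> torch.Tensor:
    """L_p(Phi) = sigma_max(Phi)^2 - sigma_min(Phi)^2 over a minibatch of features.

    features: [batch_size, feature_dim] penultimate-layer outputs Phi(s, a).
    """
    # torch.linalg.svdvals returns singular values in descending order and is
    # differentiable, so the penalty can be backpropagated through the network.
    sigma = torch.linalg.svdvals(features)
    return sigma[0] ** 2 - sigma[-1] ** 2

def regularized_td_loss(td_loss: torch.Tensor, features: torch.Tensor,
                        alpha: float = 1e-3) -> torch.Tensor:
    """Mean squared TD error plus the rank-collapse penalty with tradeoff alpha."""
    return td_loss + alpha * rank_penalty(features)
```

The singular values here are estimated from the feature matrix of a single minibatch, matching the description above; the gradient flows through the SVD, pushing σmax down and σmin up.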
Our results show that mitigating rank collapse makes even such simple methods substantially more effective in this setting, suggesting that rank collapse and the resulting implicit under-parameterization may be a crucial piece of the puzzle in explaining the challenges of offline RL.

[Figure 7: DQN and CQL with the Lp(Φ) penalty vs. their standard counterparts in the 5% offline setting on Atari from Section 3; per-game % improvement on a log scale. Lp improves DQN on 16/16 and CQL on 11/16 games.]

We also evaluated the regularizer Lp(Φ) in the data-efficient online RL setting, with results in Appendix A.6. This variant achieved a median improvement of 20.6% with Rainbow (Hessel et al., 2018), but performed poorly with DQN, where it reduced median performance by 11.5%. Thus, while our proposed penalty is effective in many cases in both the offline and online settings, it does not solve the problem fully: it does not address the root cause of implicit under-parameterization and only addresses a symptom, and a more sophisticated solution may better prevent the issues with implicit under-parameterization. Nevertheless, our results suggest that mitigating implicit under-parameterization can improve the performance of data-efficient RL.

6 RELATED WORK

Prior work has extensively studied the learning dynamics of Q-learning with tabular and linear function approximation, to study error propagation (Munos, 2003; Farahmand et al., 2010) and to prevent divergence (De Farias, 2002; Maei et al., 2009; Sutton et al., 2009; Dai et al., 2018), as opposed to the deep Q-learning analyzed in this work. Q-learning has been shown to have favorable optimization properties with certain classes of features (Ghosh & Bellemare, 2020), but our work shows that the features learned by a neural net when minimizing TD error do not enjoy such guarantees, and instead suffer from rank collapse. Recent theoretical analyses of deep Q-learning have shown convergence under restrictive assumptions (Yang et al., 2020; Cai et al., 2019; Zhang et al., 2020; Xu & Gu, 2019), but Theorem 4.2 shows that implicit under-parameterization appears when the estimates of the value function approach the optimum, potentially preventing convergence. Xu et al. (2005; 2007) present variants of LSTD (Boyan, 1999), which model the Q-function as a kernel machine but do not take into account the regularization from gradient descent, as done in Equation 1, which is essential for implicit under-parameterization. Igl et al. (2020); Fedus et al. (2020a) argue that non-stationarity arising from distribution shift hinders generalization and recommend periodic network re-initialization. Under-parameterization is not caused by this distribution shift, and we find that network re-initialization does little to prevent rank collapse (Figure 4c). Luo et al. (2020) propose a regularization similar to ours, but in a different setting, finding that more expressive features increase the performance of on-policy RL methods.
Finally, Yang et al. (2019) study the effective rank of the Q∗-values when expressed as a |S| × |A| matrix in online RL and find that low ranks for this Q∗-matrix are preferable. We analyze a fundamentally different object: the learned features (and illustrate that a rank collapse of features can hurt), not the Q∗-matrix, whose rank is upper-bounded by the number of actions (e.g., 24 for Atari).

7 DISCUSSION

We identified an implicit under-parameterization phenomenon in deep RL algorithms that use bootstrapping, where gradient-based optimization of a bootstrapped objective can lead to a reduction in the expressive power of the value network. This effect manifests as a collapse of the rank of the features learned by the value network, causing aliasing across states and often leading to poor performance. Our analysis reveals that this phenomenon is caused by the implicit regularization due to gradient descent on bootstrapped objectives. We observed that mitigating this problem by means of a simple regularization scheme improves the performance of deep Q-learning methods. While our proposed regularization provides some improvement, devising better mitigation strategies for implicit under-parameterization remains an exciting direction for future work. Our method explicitly attempts to prevent rank collapse, but relies on the emergence of useful features solely through the bootstrapped signal. An alternative path may be to develop new auxiliary losses (e.g., Jaderberg et al., 2016) that learn useful features while passively preventing under-parameterization. More broadly, understanding the effects of neural nets and associated factors such as initialization, choice of optimizer, etc., on the learning dynamics of deep RL algorithms, using tools from deep learning theory, is likely to be key towards developing robust and data-efficient deep RL algorithms.

ACKNOWLEDGEMENTS

We thank Lihong Li, Aaron Courville, Aurick Zhou, Abhishek Gupta, George Tucker, Ofir Nachum, Wesley Chung, Emmanuel Bengio, Zafarali Ahmed, and Jacob Buckman for feedback on an earlier version of this paper. We thank Hossein Mobahi for insightful discussions about self-distillation and Hanie Sedghi for insightful discussions about implicit regularization and generalization in deep networks. We additionally thank Michael Janner, Aaron Courville, Dale Schuurmans and Marc Bellemare for helpful discussions. AK was partly funded by the DARPA Assured Autonomy program, and DG was supported by an NSF graduate fellowship and compute support from Amazon.

Appendices

A ADDITIONAL EVIDENCE FOR IMPLICIT UNDER-PARAMETERIZATION

In this section, we present additional evidence that demonstrates the existence of the implicit under-parameterization phenomenon from Section 3. In all cases, we plot the values of srankδ(Φ) computed on a batch of 2048 i.i.d. sampled transitions from the dataset.

A.1 OFFLINE RL

A.2 DATA-EFFICIENT ONLINE RL

In the data-efficient online RL setting, we verify the presence of implicit under-parameterization for both the DQN and Rainbow (Hessel et al., 2018) algorithms when a larger number of gradient updates is made per environment step. In these settings, we find that more gradient updates per environment step lead to a larger decrease in effective rank, whereas effective rank can increase when the amount of data re-use is reduced by taking fewer gradient steps.

A.3 DOES BOOTSTRAPPING CAUSE IMPLICIT UNDER-PARAMETERIZATION?
In this section, we provide additional evidence to support our claim from Section 3 that bootstrapping-based updates are a key component behind the existence of implicit under-parameterization. To do so, we empirically demonstrate the following points:

• For the final point in this section, we verify that the non-stationarity of the policy in the Q-learning (control) setting, i.e., when the Bellman optimality operator is used, is not the reason behind the emergence of implicit under-parameterization. The non-stationary policy in a control setting causes the targets to change and, as a consequence, leads to non-zero errors. However, the rank drop is primarily caused by bootstrapping rather than by the non-stationarity of the control objective. To illustrate this, we ran an experiment in the control setting on Gridworld, regressing to the target computed using the true value function Qπ for the current policy π (computed using tabular Q-evaluation) instead of using the bootstrapped TD estimate. The results, shown in Figure A.11a, show that srankδ does not decrease significantly when regressing to true control values and in fact increases with more iterations, in contrast to Figure 6a, where the rank drops with bootstrapping. This experiment, alongside the experiments discussed above that ablate bootstrapping in the stationary policy evaluation setting, shows that rank deficiency is caused by bootstrapping.

A.4 HOW DOES IMPLICIT REGULARIZATION INHIBIT DATA-EFFICIENT RL?

Implicit under-parameterization leads to a trade-off between minimizing the TD error and encouraging low-rank features, as shown in Figure 4b. This trade-off often results in a decrease in effective rank at the expense of an increase in TD error, resulting in lower performance. Here we present additional evidence to support this. Figure A.11b shows a gridworld problem with one-hot features, which naturally leads to reduced state aliasing. In this setting, we find that the amount of rank drop with respect to the supervised projection of the oracle-computed Q∗ values is quite small, and the regression error to Q∗ actually decreases, unlike the case in Figure 4a, where it remains the same or even increases. The method is able to learn policies that attain good performance as well. Hence, when there is very little rank drop (for example, 5 rank units in this example), FQI methods are generally able to learn a Φ that can fit Q∗. This provides evidence that typical Q-networks learn a Φ that can fit the optimal Q-function when rank collapse does not occur. In Atari, we do not have access to Q∗, and so we instead measure the error in fitting the target value estimates, R + γPπQk. As the rank decreases, the TD error increases (Figure A.12) and the value function is unable to fit the target values, culminating in a performance plateau (Figure A.6).

A.5 TRENDS IN VALUES OF EFFECTIVE RANK WITH PENALTY

In this section, we present the trend in the values of the effective rank when the penalty Lp(Φ) is added. In each plot below, we present the value of srankδ(Φ) with and without the penalty, respectively.

A.5.1 OFFLINE RL: DQN

A.5.2 OFFLINE RL: CQL WITH Lp(Φ) PENALTY

A.6 DATA-EFFICIENT ONLINE RL: RAINBOW

A.6.1 RAINBOW WITH Lp(Φ) PENALTY: RANK PLOTS

A.6.2 RAINBOW WITH Lp(Φ) PENALTY: PERFORMANCE

In this section, we present additional results supporting the hypothesis that preventing rank collapse leads to better performance.
In the first set of experiments, we apply the proposed Lp penalty to Rainbow in the data-efficient online RL setting (n = 4). In the second set of experiments, we present evidence for the prevention of rank collapse by comparing rank values across different runs. As we show in the next section, the state-of-the-art Rainbow (Hessel et al., 2018) algorithm also suffers from rank collapse in the data-efficient online RL setting when more gradient updates are performed per environment step. In this section, we applied our penalty Lp to Rainbow with n = 4, and obtained a median 20.66% improvement on top of the base method. This result is summarized below.

A.7 RELAXING THE NORMALITY ASSUMPTION IN THEOREM 4.1

We can relax the normality assumption on S in Theorem 4.1. An analogous statement holds for non-normal matrices S under a slightly different notion of effective rank, denoted srankδ,λ(Mk), that utilizes eigenvalue norms instead of singular values. Formally, let λ1(Mk), λ2(Mk), · · · be the (complex) eigenvalues of Mk arranged in decreasing order of their norms, i.e., |λ1(Mk)| ≥ |λ2(Mk)| ≥ · · ·; then,

srankδ,λ(Mk) = min { k : (Σ_{i=1}^{k} |λi(Mk)|) / (Σ_{i=1}^{d} |λi(Mk)|) ≥ 1 − δ }.

A statement essentially analogous to Theorem 4.1 shows that in this general case, srankδ,λ(Mk) decreases for all (complex) diagonalizable matrices S, which covers almost all matrices of size dim(S). Now, if S is approximately normal, i.e., when |σi(S) − |λi(S)|| is small, then the result in Theorem 4.1 also holds approximately, as we discuss at the end of Appendix C.

We now provide empirical evidence showing that the trend in the values of effective rank computed using singular values, srankδ(Φ), is almost identical to the trend in the effective rank computed using normalized eigenvalues, srankδ,λ(Φ). Since eigenvalues are only defined for a square matrix Φ, in practice we use a batch of d = dim(φ(s,a)) state-action pairs for computing the eigenvalue rank and compare it to the corresponding singular-value rank in Figures A.20 and A.21.

Connection to Theorem 4.1. We computed the effective rank of Φ instead of S, since S is a theoretical abstraction that cannot be computed in practice, as it depends on the Green's kernel (Duffy, 2015) obtained by assuming that the neural network behaves as a kernel regressor. Instead, we compare the different notions of ranks of Φ, since Φ is the practical counterpart of the matrix S when using neural networks (as also indicated by the analysis in Section 4.2). In fact, on the gridworld (Figure A.21), we experiment with features Φ whose dimension equals the number of state-action pairs, i.e., dim(φ(s,a)) = |S||A|, with the same number of parameters as a kernel parameterization of the Q-function: Q(s,a) = Σ_{s′,a′} w(s′,a′) k(s,a,s′,a′). This can also be considered as performing gradient descent on a “wide” linear network, and we measure the feature rank while observing similar rank trends. Since we do not require the assumption that S is normal in Theorem 4.1 to obtain a decreasing trend in srankδ,λ(Φ), and since we find that in practical scenarios (Figures A.20 and A.21) srankδ(Φ) ≈ srankδ,λ(Φ) with an extremely similar qualitative trend, we believe that Theorem 4.1 still explains the rank collapse practically observed in deep Q-learning and is not vacuous.

A.8 NORMALIZED PLOTS FOR FIGURE 3 / FIGURE A.6

In this section, we provide a set of normalized srank and performance trends for Atari games (the corresponding unnormalized plots are found in Figure A.6).
In these plots, each unit on the x-axis is equivalent to one gradient update, and since n = 8 prescribes 8× as many updates as n = 1, it runs for 8× as long as n = 1. These plots are in Figure A.22. Note that the trend that effective rank decreases with larger n values also persists when rescaling the x-axis to account for the number of gradient steps, in all but one game. This is expected, since it tells us that performing bootstrapping-based updates in the data-efficient setting (larger n values) still leads to a more aggressive rank drop, as updates are being performed on a relatively more static dataset for larger values of n.

B HYPERPARAMETERS & EXPERIMENT DETAILS

B.1 ATARI EXPERIMENTS

We follow the experiment protocol from Agarwal et al. (2020) for all our experiments, including the hyperparameters and agent architectures provided in Dopamine, and report them for completeness and ease of reproducibility in Table B.1. We only use hyperparameter selection over the regularization coefficient αp, based on results from 5 Atari games (Asterix, Qbert, Pong, Breakout, and Seaquest). We will also open-source our code to further aid in reproducing our results.

Evaluation Protocol. Following Agarwal et al. (2020), the Atari environments used in our experiments are stochastic due to sticky actions, i.e., there is a 25% chance at every time step that the environment will execute the agent's previous action again, instead of the agent's new action. All agents (online or offline) are compared using the best evaluation score (averaged over 5 runs) achieved during training, where the evaluation is done online every training iteration using an ε-greedy policy with ε = 0.001. We report offline training results with the same hyperparameters over 5 random seeds of the DQN replay data collection, game simulator, and network initialization.

Offline Dataset. As suggested by Agarwal et al. (2020), we randomly subsample the DQN Replay dataset containing 50 million transitions to create smaller offline datasets with the same data distribution as the original dataset. We use the 5% DQN replay dataset for most of our experiments. We also report results using the 20% dataset setting (4x larger) to show that our claims remain valid even when we have higher coverage over the state space.

Optimizer-related hyperparameters. For existing off-policy agents, the step size and optimizer were taken as published. We used the DQN (Adam) algorithm for all our experiments, given its superior performance over DQN (Nature), which uses RMSProp, as reported by Agarwal et al. (2020).

Atari 2600 games used. For all our experiments in Section 3, we used the same set of 5 games as utilized by Agarwal et al. (2020); Bellemare et al. (2017) to present analytical results. For our empirical evaluation in Appendix A.5, we use the set of games employed by Fedus et al. (2020b), which are deemed suitable for offline RL by Gulcehre et al. (2020). Similar in spirit to Gulcehre et al. (2020), we use the set of 5 games used for analysis for hyperparameter tuning for offline RL methods.

5-game subset: ASTERIX, QBERT, PONG, SEAQUEST, BREAKOUT

16-game subset: In addition to the 5 games above, the following 11 games: DOUBLE DUNK, JAMES BOND, MS. PACMAN, SPACE INVADERS, ZAXXON, WIZARD OF WOR, YARS' REVENGE, ENDURO, ROAD RUNNER, BEAMRIDER, DEMON ATTACK

B.2 GRIDWORLD EXPERIMENTS

We use the gridworld suite from Fu et al. (2019) to obtain gridworlds for our experiments.
All of our gridworld results are computed using the 16 × 16 GRID16SMOOTHOBS environment, which consists of a 256-cell grid, with walls arising randomly with a probability of 0.2. Each state allows 5 different actions (subject to hitting the boundary of the grid): move left, move right, move up, move down, and no-op. The goal in this environment is to minimize the cumulative discounted distance to a fixed goal location, where the discount factor is γ = 0.95. The features for this Q-function are given by randomly chosen vectors which are smoothed spatially in a local neighborhood of a grid cell (x, y). We use a deep Q-network with two hidden layers of size (64, 64), and train it using soft Q-learning with an entropy coefficient of 0.1, following the code provided by the authors of Fu et al. (2019). We use a first-in-first-out replay buffer of size 10000 to store past transitions.

C PROOFS FOR SECTION 4.1

In this section, we provide the technical proofs from Section 4.1. We first derive a solution to the optimization problem in Equation 1 and show that it indeed has the form described in Equation 2. We first introduce some notation, including the definition of the kernel G used in this proof. The proof closely follows that of Mobahi et al. (2020).

Definitions. For any universal kernel u, the Green's function (Duffy, 2015) of the linear kernel operator L given by

[LQ](s,a) := Σ_{(s′,a′)} u((s,a), (s′,a′)) Q(s′,a′)

is the function g((s,a), (s′,a′)) that satisfies

Σ_{(s,a)} u((s,a), (s′,a′)) g((s′,a′), (s̄,ā)) = δ((s,a) − (s̄,ā)), (C.1)

where δ is the Dirac-delta function. Thus, the Green's function can be understood as a kernel that “inverts” the universal kernel u to the identity (Dirac-delta) matrix. We can then define the matrix G as the matrix of vectors g(s,a) evaluated on the training dataset D; note, however, that the function g(s,a) can be evaluated at other state-action tuples not present in D:

G((si,ai), (sj,aj)) := g((si,ai), (sj,aj)) and g(s,a)[i] = g((s,a), (si,ai)) ∀(si,ai) ∈ D. (C.2)

Lemma C.0.1. The solution to Equation 1 is given by Equation 2.

Proof. This proof closely follows the proof of Proposition 1 from Mobahi et al. (2020). We revisit the key parts of that proof here. We restate the optimization problem below, and solve for the optimal Qk by applying the functional derivative principle:

min_{Q∈Q} J(Q) := Σ_{(si,ai)∈D} (Q(si,ai) − yk(si,ai))² + c Σ_{(s,a)} Σ_{(s′,a′)} u((s,a), (s′,a′)) Q(s,a) Q(s′,a′).

The functional derivative principle says that the optimal Qk for this problem satisfies, for any other function f and a small enough ε → 0,

∀f ∈ Q : ∂J(Qk + εf)/∂ε |_{ε=0} = 0. (C.3)

Setting the gradient of the above expression to 0, we obtain the following stationarity condition on Qk (denoting (si,ai) := xi for brevity):

Σ_{xi∈D} δ(x − xi)(Qk(xi) − yk(xi)) + c Σ_{x′} u(x,x′) Qk(x′) = 0. (C.4)

Now, we invoke the definition of the Green's function discussed above and utilize the fact that the Dirac-delta function can be expressed in terms of the Green's function, obtaining a simplified version of the above relation:

Σ_{x′} u(x,x′) Σ_{xi∈D} (Qk(xi) − yk(xi)) g(x′,xi) = −c Σ_{x′} u(x,x′) Qk(x′). (C.5)

Since the kernel u(x,x′) is universal and positive definite, the optimal solution Qk(x) is given by:

Qk(s,a) = −(1/c) Σ_{(si,ai)∈D} (Qk(si,ai) − yk(si,ai)) · g((s,a), (si,ai)).
(C.6)

Finally, we can replace the expression for the residual error, Qk(si,ai) − yk(si,ai), using the Green's kernel on the training data by solving for it in closed form, which gives us the solution in Equation 2:

Qk(s,a) = −(1/c) g(s,a)^T (Qk − yk) = g(s,a)^T (cI + G)^{−1} yk. (C.7)

Next, we state and prove a slightly stronger version of Theorem 4.1 that immediately implies the original theorem.

Theorem C.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)_{l=1}^{∞} starting from k1 = 0, such that, for any two singular values σi(S) and σj(S) of S with σi(S) ≤ σj(S), ∀ l ∈ N and l′ ≥ l,

σi(M_{kl′}) / σj(M_{kl′}) < σi(M_{kl}) / σj(M_{kl}) ≤ σi(S) / σj(S). (C.8)

Therefore, the effective rank of Mk satisfies srankδ(M_{kl′}) ≤ srankδ(M_{kl}). Furthermore, ∀ l ∈ N and t ≥ kl,

σi(Mt) / σj(Mt) < σi(M_{kl}) / σj(M_{kl}) + O((σi(S)/σj(S))^{kl}). (C.9)

Therefore, the effective rank of Mt, srankδ(Mt), outside the chosen subsequence is also controlled from above by the effective rank on the subsequence (srankδ(M_{kl}))_{l=1}^{∞}.

To prove this theorem, we first show that for any two fitting iterations t < t′, if S^t and S^{t′} are positive semi-definite, the ratio of singular values and the effective rank decrease from t to t′. As an immediate consequence, this shows that when S is positive semi-definite, the effective rank decreases at every iteration, i.e., by setting kl = l (Corollary C.1.1). To extend the proof to arbitrary normal matrices, we show that for any S, a sequence of fitting iterations (kl)_{l=1}^{∞} can be chosen such that S^{kl} is (approximately) positive semi-definite. For this subsequence of fitting iterations, the ratio of singular values and the effective rank also decrease. Finally, to control the ratio and the effective rank on fitting iterations t outside this subsequence, we construct an upper bound f(t) on the ratio, σi(Mt)/σj(Mt) < f(t), and relate this bound to the ratio of singular values on the chosen subsequence.

Lemma C.1.1 (srankδ(Mk) decreases when S^k is PSD). Let S be a shorthand for S = γPπA and assume S is a normal matrix. Choose any t, t′ ∈ N such that t < t′. If S^t and S^{t′} are positive semi-definite, then for any two singular values σi(S) and σj(S) of S such that 0 < σi(S) < σj(S):

σi(M_{t′}) / σj(M_{t′}) < σi(Mt) / σj(Mt) ≤ σi(S) / σj(S). (C.10)

Hence, the effective rank of Mk decreases from t to t′: srankδ(M_{t′}) ≤ srankδ(Mt).

Proof. First note that Mk is given by:

Mk := Σ_{i=1}^{k} γ^{k−i} (PπA)^{k−i} = Σ_{i=1}^{k} S^{k−i}. (C.11)

From hereon, we omit the leading γ term, since it is a constant scaling factor that does not affect the ratio or the effective rank. Almost every matrix S admits a complex orthogonal eigendecomposition, so we can write S := U λ(S) U^H. Any power of S, i.e., S^i, can then be expressed as S^i = U λ(S)^i U^H, and hence we can express Mk as:

Mk := U (Σ_{i=0}^{k−1} λ(S)^i) U^H = U · diag((1 − λ(S)^k) / (1 − λ(S))) · U^H. (C.12)

Since S is normal, its eigenvalues and singular values are related as σk(S) = |λk(S)|. This also means that Mk is normal, indicating that σi(Mk) = |λi(Mk)|. Thus, the singular values of Mk can be expressed as

σi(Mk) := |1 − λi(S)^k| / |1 − λi(S)|. (C.13)

When S^k is positive semi-definite, λi(S)^k = σi(S)^k, enabling the following simplification:

σi(Mk) = |1 − σi(S)^k| / |1 − λi(S)|. (C.14)

To show that the ratio of singular values decreases from t to t′, we need to show that f(σ) = |1 − σ^{t′}| / |1 − σ^{t}| is an increasing function of σ when t′ > t. It can be seen that this is the case, which implies the desired result (a small numerical check of this monotonicity is sketched below).
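As an illustrative aside (not part of the formal argument), the short NumPy sketch below checks this monotonicity numerically and shows the resulting decrease of the singular-value ratio of Mk for a PSD S with singular values below one; the particular exponents and σ values are arbitrary choices for illustration.

```python
import numpy as np

# Monotonicity of f(sigma) = |1 - sigma^t'| / |1 - sigma^t| in sigma, for t' > t.
t, t_prime = 3, 7
sigma = np.linspace(0.05, 0.95, 50)
f = np.abs(1 - sigma ** t_prime) / np.abs(1 - sigma ** t)
assert np.all(np.diff(f) > 0)  # f increases with sigma on (0, 1)

# Consequence for M_k = I + S + ... + S^{k-1} with a PSD S (singular values < 1):
# the ratio sigma_i(M_k) / sigma_j(M_k), for sigma_i(S) < sigma_j(S), shrinks with k.
sigma_i, sigma_j = 0.3, 0.9
for k in (1, 5, 20, 100):
    ratio = ((1 - sigma_i ** k) / (1 - sigma_i)) / ((1 - sigma_j ** k) / (1 - sigma_j))
    # decreases from 1 towards the limit (1 - sigma_j) / (1 - sigma_i) ~= 0.14
    print(k, ratio)
```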
To further show that srankδ(Mt) ≥ srankδ(M_{t′}), we can simply show that, ∀ i ∈ [1, · · · , n],

hk(i) := (Σ_{j=1}^{i} σj(Mk)) / (Σ_{j=1}^{n} σj(Mk))

increases with k, which implies that srankδ(Mk) cannot increase from k = t to k = t′. We can decompose hk(i) as:

hk(i) = (Σ_{j=1}^{i} σj(Mk)) / (Σ_{l} σl(Mk)) = 1 / (1 + (Σ_{j=i+1}^{n} σj(Mk)) / (Σ_{j=1}^{i} σj(Mk))). (C.15)

Since σj(Mk)/σl(Mk) decreases over time k for all j, l with σj(S) ≤ σl(S), the ratio in the denominator of hk(i) decreases with increasing k, implying that hk(i) increases from t to t′.

Corollary C.1.1 (srankδ(Mk) decreases for PSD S matrices). Let S be a shorthand for S = γPπA. Assuming that S is positive semi-definite, for any k, t ∈ N such that t > k, and for any two singular values σi(S) and σj(S) of S such that σi(S) < σj(S),

σi(Mt) / σj(Mt) < σi(Mk) / σj(Mk) ≤ σi(S) / σj(S). (C.16)

Hence, the effective rank of Mk decreases with more fitting iterations: srankδ(Mt) ≤ srankδ(Mk).

In order to extend the result to arbitrary normal matrices, we must construct a subsequence of fitting iterations (kl)_{l=1}^{∞} for which S^{kl} is (approximately) positive semi-definite. To do so, we first prove a technical lemma showing that rational numbers, i.e., numbers that can be expressed as r = p/q for integers p, q ∈ Z, are “dense” in the space of real numbers.

Lemma C.1.2 (Rational numbers are dense in the real space). For any real number α, there exist infinitely many rational numbers p/q such that α can be approximated by p/q up to 1/q² accuracy:

|α − p/q| ≤ 1/q². (C.17)

Proof. We first use Dirichlet's approximation theorem (see Hlawka et al. (1991) for a proof of this result using a pigeonhole argument, and extensions) to obtain that for any real number α and N ≥ 1, there exist integers p and q such that 1 ≤ q ≤ N and

|qα − p| ≤ 1/(|N| + 1) < 1/N. (C.18)

Now, since q ≥ 1 > 0, we can divide both sides by q to obtain:

|α − p/q| ≤ 1/(Nq) ≤ 1/q². (C.19)

To obtain infinitely many choices for p/q, we observe that Dirichlet's lemma is valid only for values of N that satisfy N ≤ 1/|qα − p|. Thus, we choose an N′ such that N′ ≥ Nmax, where Nmax is defined as:

Nmax = max { 1/|q′α − p′| : p′, q′ ∈ Z, 1 ≤ q′ ≤ q }. (C.20)

Equation C.20 essentially finds a new value of N such that the current choices of p and q, which were valid for the first value of N, do not satisfy the approximation error bound. Applying Dirichlet's lemma to this new value N′ hence gives us a new pair p′ and q′ that satisfies the 1/q′² approximation error bound. Repeating this process gives us countably many choices of (p, q) pairs that satisfy the approximation error bound. As a result, rational numbers are dense in the space of real numbers, since for any arbitrarily chosen approximation accuracy 1/q², we can obtain at least one rational number p/q which is closer to α than 1/q². This proof is based on Johnson (2016).

Now we utilize Lemmas C.1.1 and C.1.2 to prove Theorem C.1.

Proof of Theorem 4.1 and Theorem C.1. Recall from the proof of Lemma C.1.1 that the singular values of Mk are given by:

σi(Mk) := |1 − λi(S)^k| / |1 − λi(S)|. (C.21)

Bound on the singular value ratio: The ratio between σi(Mk) and σj(Mk) can be expressed as

σi(Mk) / σj(Mk) = |(1 − λi(S)^k) / (1 − λj(S)^k)| · |(1 − λj(S)) / (1 − λi(S))|. (C.22)

For a normal matrix S, σi(S) = |λi(S)|, so this ratio can be bounded above as

σi(Mk) / σj(Mk) ≤ ((1 + σi(S)^k) / |1 − σj(S)^k|) · |(1 − λj(S)) / (1 − λi(S))|. (C.23)

Defining f(k) to be the right-hand side of this equation, we can verify that f is a monotonically decreasing function of k when σi < σj.
This shows that this ratio of singular values is bounded above and, in general, must decrease towards some limit lim_{k→∞} f(k).

Construction of the subsequence: We now show that there exists a subsequence (kl)_{l=1}^{∞} for which S^{kl} is approximately positive semi-definite. For ease of notation, let us represent the i-th eigenvalue as λi(S) = |λi(S)| · e^{iθi}, where θi > 0 is the polar angle of the complex value λi(S) and |λi(S)| is its magnitude (norm). Now, using Lemma C.1.2, we can approximate any polar angle θi by a rational number, i.e., we apply Lemma C.1.2 to θi/(2π):

∃ pi, qi ∈ N, s.t. |θi/(2π) − pi/qi| ≤ 1/qi². (C.24)

Since the choice of qi is within our control, we can estimate θi for all eigenvalues λi(S) to infinitesimal accuracy. Hence, we can approximate θi ≈ 2π pi/qi. We will now use this approximation to construct an infinite sequence (kl)_{l=1}^{∞}, shown below:

kl = l · LCM(q1, · · · , qn) ∀ l ∈ N, (C.25)

where LCM is the least common multiple of the natural numbers q1, · · · , qn. In the absence of any approximation error in θi, we note that for any i and any kl as defined above, λi(S)^{kl} = |λi(S)|^{kl} · exp(2iπ · (pi/qi) · kl) = |λi(S)|^{kl}, since the polar angle for any kl is a multiple of 2π and hence falls on the real line. As a result, S^{kl} will be positive semi-definite, since all eigenvalues are positive and real. Now, by using the proof of Lemma C.1.1, we obtain that the ratio of the i-th and j-th singular values decreases over the sequence of iterations (kl)_{l=1}^{∞}. Since the approximation error in θi can be controlled to be infinitesimally small, preventing any increase in the value of srankδ due to it (this can be done given the discrete form of srankδ), the above argument applies even with the approximation, proving the required result on the subsequence.

Controlling all fitting iterations using the subsequence: We now relate the ratio of singular values within this chosen subsequence to the ratio of singular values elsewhere. Choose t, l ∈ N such that t > kl. Earlier in this proof, we showed that the ratio between singular values is bounded above by a monotonically decreasing function f(t), so

σi(Mt) / σj(Mt) ≤ f(t) < f(kl). (C.26)

Now, we show that f(kl) is in fact very close to the ratio of singular values. On the subsequence, σi(M_{kl}) / σj(M_{kl}) = (|1 − σi(S)^{kl}| / |1 − σj(S)^{kl}|) · |(1 − λj(S)) / (1 − λi(S))|, and therefore

f(kl) ≤ σi(M_{kl}) / σj(M_{kl}) + (2σi(S)^{kl} / |1 − σj(S)^{kl}|) · |(1 − λj(S)) / (1 − λi(S))|. (C.27)

The second term goes to zero as kl increases; algebraic manipulation shows that this gap can be bounded by

f(kl) ≤ σi(M_{kl}) / σj(M_{kl}) + (σi(S)/σj(S))^{kl} · (2σj(S) / |1 − σj(S)|) · |(1 − λj(S)) / (1 − λi(S))|, where the trailing factor is a constant. (C.28)

Putting these inequalities together proves the final statement,

σi(Mt) / σj(Mt) ≤ σi(M_{kl}) / σj(M_{kl}) + O((σi(S)/σj(S))^{kl}). (C.29)

Extension to approximately-normal S. We can extend the result in Theorem C.1 (and hence also Theorem 4.1) to approximately-normal S. Note that the main requirement for normality of S (i.e., σi(S) = |λi(S)|) is that it makes it straightforward to relate the eigenvalues of S to those of Mk, as shown below:

|λi(Mk)| := |1 − λi(S)^k| / |1 − λi(S)|. (C.30)

Now, since the matrix S is approximately normal, we can express it using its Schur triangular form as S = U · (Λ + N) · U^H, where Λ is a diagonal matrix and N is an “offset” matrix. The departure from normality of S is defined as ∆(S) := inf_N ||N||₂, where the infimum is computed over all matrices N that can appear in the Schur triangular form for S. For a normal S, only a single value, N = 0, satisfies the Schur triangular form.
For an approximately normal matrix S, ||N||₂ ≤ ∆(S) ≤ ε for a small ε. Furthermore, note that from Equation 6 in Ruhe (1975), we obtain that

|σi(S) − |λi(S)|| ≤ ∆(S) ≤ ε, (C.31)

implying that singular values and eigenvalue norms are close to each other for S. Next, let us evaluate the departure from normality of Mk. First note that S^j = U · (Λ + N)^j · U^H, and so Mk = U · (Σ_{j=1}^{k} (Λ + N)^j) · U^H. If ||N||₂ ≤ ε for a small ε (i.e., considering only terms that are linear in N in (Λ + N)^j), we note that:

|σi(Mk) − |λi(Mk)|| ≤ Σ_{j=1}^{k} j · |λ1(S)|^{j−1} ∆(S) ≤ (1 / (1 − |λ1(S)|)²) · ε. (C.32)

Thus, the matrix Mk is also approximately normal provided that the largest eigenvalue norm of S is less than 1. This is true, since S = γPπA (see Theorem 4.1), where both Pπ and A have eigenvalues less than 1, and γ < 1. Given that we have shown that Mk is approximately normal, we can show that srankδ(Mk) only differs from srankδ,λ(Mk), i.e., the effective rank computed from eigenvalues, by a bounded amount. If the value of ε is small enough, we still retain the conclusion that srankδ(Mk) generally decreases with more training, by following the proof of Theorem C.1.

D PROOFS FOR SECTION 4.2

In this section, we provide the technical proofs from Section 4.2. We start by deriving properties of the optimization trajectories of the weight matrices of the deep linear network, similar to Arora et al. (2018) but customized to our set of assumptions, then prove Proposition 4.1, and finally discuss how to extend these results to the fitted Q-iteration setting, along with some extensions not discussed in the main paper. Similar to Section 4.1, we assume access to a dataset of transitions, D = {(si, ai, r(si,ai), s′i)}, and assume that the same data is used to re-train the Q-function.

Notation and Definitions. The Q-function is represented using a deep linear network with at least 3 layers, such that

Q(s,a) = WN WN−1 · · · W1 [s;a], where N ≥ 3, WN ∈ R^{1×d_{N−1}}, (D.1)

and Wi ∈ R^{di×d_{i−1}} for i = 1, . . . , N−1. We index the weight matrices by a tuple (k, t): Wj(k, t) denotes the weight matrix Wj at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). Let the end-to-end weight matrix WN WN−1 · · · W1 be denoted by the shorthand WN:1, and let the features of the penultimate layer of the network be denoted as Wφ(k, t) := WN−1(k, t) · · · W1(k, t). Wφ(k, t) is the matrix that maps an input [s;a] to the corresponding features Φ(s,a). In our analysis, it is sufficient to consider the effective rank of Wφ(k, t), since the features Φ are given by Φ(k, t) = Wφ(k, t)[S;A], which indicates that:

rank(Φ(k, t)) = rank(Wφ(k, t)[S;A]) ≤ min(rank(Wφ(k, t)), rank([S;A])).

Assuming the state-action space has full rank, we are only concerned with rank(Wφ(k, t)), which justifies our choice of analyzing srankδ(Wφ(k, t)). Let Lk+1(WN:1(k, t)) denote the mean squared Bellman error optimization objective in the k-th fitting iteration:

Lk+1(WN:1(k, t)) = Σ_{i=1}^{|D|} (WN(k, t) Wφ(k, t)[si;ai] − yk(si,ai))², where yk = R + γPπQk.

When gradient descent is used to update the weight matrices, the update to Wj(k, t) is given by:

Wj(k, t+1) ← Wj(k, t) − η ∂Lk+1(WN:1(k, t)) / ∂Wj(k, t).

If the learning rate η is small, we can approximate this discrete-time process with a continuous-time differential equation, which we will use for our analysis. We use Ẇ(k, t) to denote the derivative of W(k, t) with respect to t, for a given k.
Ẇj(k, t) = −η ∂Lk+1(WN:1(k, t)) / ∂Wj(k, t). (D.2)

In order to quantify the evolution of the singular values of the weight matrix Wφ(k, t), we start by quantifying the evolution of the weight matrix Wφ(k, t) using a more interpretable differential equation. In order to do so, we make an assumption similar to, but not identical to, that of Arora et al. (
1. What is the main contribution of the paper regarding bootstrapping and function approximation in reinforcement learning? 2. What are the strengths and weaknesses of the theoretical contributions in the paper? 3. How does the paper isolate an interesting phenomenon related to rank collapse in bootstrapping? 4. Are there any concerns about the assumptions made in the theoretical contributions? If so, which ones and why? 5. Can the authors elaborate on the purpose of certain sections in the paper, such as the section on explaining implicit under-parameterization across fitting iterations? 6. How does the balanced assumption used in the deep linear networks affect the applicability of the results to practical neural networks? 7. Are there any typos or unclear statements in the paper that should be addressed?
Review
Review

Post-discussion review

Summary

The authors present evidence that the approximate rank of the features is correlated with the learned policy's performance and that this rank shrinks when using bootstrapping. They provide empirical evidence in several RL settings and domains and present some theoretical arguments which explain this behavior in the context of kernel regression and deep linear networks. Finally, they propose a simple approach for mitigating the rank collapse and show that this improves the performance of the learned policy in some cases.

Reason for score

The authors isolate an interesting phenomenon and present some compelling empirical evidence. This is interesting work and I have no doubt that it is of sufficient quality for publication.

Pros

The main contributions of this work might help us better understand the effects of using bootstrapping with function approximation and gradient descent, a critical aspect of many RL methods. Using neural nets to learn (Q-)value functions on novel domains is still to this day a frustrating experience due to how unstable and unpredictable gradient descent + bootstrapping is. As a result, the subject of this work is quite important and likely of great interest to the field. The experiments are well designed and relevant to the main thesis. The empirical results are well presented and easy to understand.

Cons

After a very productive and enlightening discussion with the authors, the only noteworthy issue is that this paper contains too many contributions for the format, making some of them hard to appreciate. A more focused, in-depth dive into a subset of the theoretical contributions might have been preferable and possibly provided more insight.

Conclusion

I strongly support the acceptance of this submission. After discussion with the authors and the resulting updates to the paper, I don't see any reason for rejecting this paper. All of the major concerns from my initial review have been addressed.

Initial review

Summary

The authors present evidence that the approximate rank of the features is correlated with the learned policy's performance and that this rank shrinks when using bootstrapping. They provide empirical evidence in several RL settings and domains and present some theoretical arguments which explain this behavior in the context of kernel regression and deep linear networks. Finally, they propose a simple approach for mitigating the rank collapse and show that this improves the performance of the learned policy in some cases.

Reason for score

Although the authors isolate an interesting phenomenon and present some compelling empirical evidence, I have a few concerns about the theoretical contributions which, hopefully, the authors can address, or else clarify where I have misunderstood. This is interesting work and I am more than willing to adjust my review if the authors can assuage my concerns.

Pros

The main contributions of this work might help us better understand the effects of using bootstrapping with function approximation and gradient descent, a critical aspect of many RL methods. Using neural nets to learn (Q-)value functions on novel domains is still to this day a frustrating experience due to how unstable and unpredictable gradient descent + bootstrapping is. As a result, the subject of this work is quite important and likely of great interest to the field. The experiments are well designed and relevant to the main thesis. The empirical results are well presented and easy to understand.
Cons

This did not feel like an 8-page paper. This paper took a long time to review. With 18 pages of appendix, 9 of which are clarifications and proofs, what is left of the theoretical contributions in the main body of the paper doesn't provide much insight into the role/importance of the assumptions or into what makes each claim true.

The proof for Theorem 4.2 appears to make use of the assumption that ε(s,a) = W_N · ζ[s;a] and y_k = Q_{k−1} + ε. This is not conveyed in the main body of the paper but seems to be a fairly strong assumption on the form of the bootstrapped targets. Similarly, I would argue that the premise that the bootstrapped targets will eventually be close to the previous ones, i.e., y_k ≈ Q_{k−1}, is flawed. There is no guarantee that applying the Bellman operator will return a function that is inside your function class, even in the linear case. Furthermore, we know this phenomenon to be significant; it motivated work on the projected Bellman error, a concept heavily used by the various variants of gradient temporal difference learning.

In Theorem 4.1, the assumption that S is a normal matrix seems impractical and likely makes this result only applicable to very rare cases.

In Proposition 4.1, it isn't immediately apparent where in the proof the assumption that the loss L is the TD loss is leveraged. If it isn't used, this would suggest that this is a general property of deep linear networks and wouldn't support the authors' observation that the rank issues are specific to bootstrapping.

Questions for the authors

Was anything done to "normalize" the results in Figure 2 to account for the differing number of total updates as a result of different n? Can these observations be explained by the fact that more updates result in the parameters traveling further from their initial values? What happens when plotting the srank vs. # of updates in this setting? (These likely don't need 3 distinct answers.)

Could the authors elaborate on why the normal matrix assumption might be reasonable, or, otherwise, explain why this doesn't make it a vacuous result?

What is the purpose of the "explaining implicit under-parameterization across fitting iterations" section? I think I am missing the insight this is trying to provide. Why would the parameters change at all if I reuse the results of the previous minimization as targets? What does the Bellman error refer to here and what does it mean to attain zero (or any value of) TD error when the targets are just the Q-values?

The balancedness assumption used with the deep linear networks seems critical for the proof. Is my assessment correct, or could these results possibly hold without it? How does this assumption limit the applicability of the insight gained here to more practical neural networks?

Misc comments and typos

page 2, Yang et al. don't seem to use the term "effective rank", but do use the term "approximate rank".

page 4, "we first remove the confounding with issues [...]"

page 30, proof, it would help to explicitly state the dimensions of ζ. Is the ⊤ on ζ⊤ a typo? Otherwise, why is it not used further down? (No need to answer either way, just reporting on something that tripped me up.)
ICLR
Title Implicit Under-Parameterization Inhibits Data-Efficient Deep Reinforcement Learning Abstract We identify an implicit under-parameterization phenomenon in value-based deep RL methods that use bootstrapping: when value functions, approximated using deep neural networks, are trained with gradient descent using iterated regression onto target values generated by previous instances of the value network, more gradient updates decrease the expressivity of the current value network. We characterize this loss of expressivity via a drop in the rank of the learned value network features, and show that this typically corresponds to a performance drop. We demonstrate this phenomenon on Atari and Gym benchmarks, in both offline and online RL settings. We formally analyze this phenomenon and show that it results from a pathological interaction between bootstrapping and gradient-based optimization. We further show that mitigating implicit under-parameterization by controlling rank collapse can improve performance. 1 INTRODUCTION Many pervasive deep reinforcement learning (RL) algorithms estimate value functions using bootstrapping, that is, by sequentially fitting value functions to target value estimates generated from the value function learned in the previous iteration. Despite high-profile achievements (Silver et al., 2017), these algorithms are highly unreliable due to poorly understood optimization issues. Although a number of hypotheses have been proposed to explain these issues (Achiam et al., 2019; Bengio et al., 2020; Fu et al., 2019; Igl et al., 2020; Liu et al., 2018; Kumar et al., 2020a), a complete understanding remains elusive. We identify an “implicit under-parameterization” phenomenon that emerges when value networks are trained using gradient descent combined with bootstrapping. This phenomenon manifests as an excessive aliasing of features learned by the value network across states, which is exacerbated with more gradient updates. While the supervised deep learning literature suggests that some feature aliasing is desirable for generalization (e.g., Gunasekar et al., 2017; Arora et al., 2019), implicit under-parameterization exhibits more pronounced aliasing than in supervised learning. This over-aliasing causes an otherwise expressive value network to implicitly behave as an under-parameterized network, often resulting in poor performance. Implicit under-parameterization becomes aggravated when the rate of data re-use is increased, restricting the sample efficiency of deep RL methods. In online RL, increasing the number of gradient steps in between data collection steps for data-efficient RL (Fu et al., 2019; Fedus et al., 2020b) causes the problem to emerge more frequently. In the extreme case when no additional data is collected, referred to as offline RL (Lange et al., 2012; Agarwal et al., 2020; Levine et al., 2020), implicit under-parameterization manifests consistently, limiting the viability of offline methods. We demonstrate the existence of implicit under-parameterization in common value-based deep RL methods, including Q-learning (Mnih et al., 2015; Hessel et al., 2018) and actor-critic (Haarnoja et al., 2018), as well as neural fitted-Q iteration (Riedmiller, 2005; Ernst et al., 2005). To isolate the issue, we study the effective rank of the features in the penultimate layer of the value network (Section 3). We observe that after an initial learning period, the rank of the learned features drops steeply. 
As the rank decreases, the ability of the features to fit subsequent target values and the optimal value function generally deteriorates and results in a sharp decrease in performance (Section 3.1).

∗Equal Contribution. Correspondence to Aviral Kumar <[email protected]> and Rishabh Agarwal <[email protected]>.

To better understand the emergence of implicit under-parameterization, we formally study the dynamics of Q-learning under two distinct models of neural net behavior (Section 4): kernel regression (Jacot et al., 2018; Mobahi et al., 2020) and deep linear networks (Arora et al., 2018). We corroborate the existence of this phenomenon in both models, and show that implicit under-parameterization stems from a pathological interaction between bootstrapping and the implicit regularization of gradient descent. Since value networks are trained to regress towards targets generated by a previous version of the same model, this leads to a sequence of value networks of potentially decreasing expressivity, which can result in degenerate behavior and a drop in performance.

The main contribution of this work is the identification of implicit under-parameterization in deep RL methods that use bootstrapping. Empirically, we demonstrate a collapse in the rank of the learned features during training, and show that it typically corresponds to a drop in performance in the Atari (Bellemare et al., 2013) and continuous control Gym (Brockman et al., 2016) benchmarks in both the offline and data-efficient online RL settings. We verify the emergence of this phenomenon theoretically and characterize settings where implicit under-parameterization can emerge. We then show that mitigating this phenomenon via a simple penalty on the singular values of the learned features improves performance of value-based RL methods in the offline setting on Atari.

2 PRELIMINARIES

The goal in RL is to maximize long-term discounted reward in a Markov decision process (MDP), defined as a tuple (S, A, R, P, γ) (Puterman, 1994), with state space S, action space A, a reward function R(s,a), transition dynamics P(s′|s,a), and a discount factor γ ∈ [0, 1). The Q-function Qπ(s,a) for a policy π(a|s) is the expected long-term discounted reward obtained by executing action a at state s and following π(a|s) thereafter, Qπ(s,a) := E[Σ_{t=0}^{∞} γ^t R(st, at)]. Qπ(s,a) is the fixed point of the Bellman operator T π, ∀s,a:

T πQ(s,a) := R(s,a) + γ E_{s′∼P(·|s,a), a′∼π(·|s′)}[Q(s′,a′)],

which can be written in vector form as Qπ = R + γPπQπ. The optimal Q-function, Q∗(s,a), is the fixed point of the Bellman optimality operator T :

T Q(s,a) := R(s,a) + γ E_{s′∼P(·|s,a)}[max_{a′} Q(s′,a′)].

Practical Q-learning methods (e.g., Mnih et al., 2015; Hessel et al., 2018; Haarnoja et al., 2018) convert the Bellman equation into a bootstrapping-based objective for training a Q-network, Qθ, via gradient descent. This objective, known as the mean squared temporal difference (TD) error, is given by:

L(θ) = Σ_{s,a} (R(s,a) + γQ̄θ(s′,a′) − Qθ(s,a))²,

where Q̄θ is a delayed copy of the Q-function, typically referred to as the target network. These methods train Q-networks via gradient descent and slowly update the target network via Polyak averaging on its parameters. We refer to the output of the penultimate layer of the deep Q-network as the learned feature matrix Φ, such that Q(s,a) = w^T Φ(s,a), where w ∈ R^d and Φ ∈ R^{|S||A|×d}.

Algorithm 1 Fitted Q-Iteration (FQI)
1: Initialize Q-network Qθ, buffer µ.
2: for fitting iteration k in {1, . . . , N} do
3: Compute Qθ(s,a) and target values yk(s,a) = r + γ max_{a′} Qk−1(s′,a′) on {(s,a)} ∼ µ for training
4: Minimize TD error for Qθ via t = 1, · · · , T gradient descent updates, min_θ (Qθ(s,a) − yk)²
5: end for

For simplicity of analysis, we abstract deep Q-learning methods into a generic fitted Q-iteration (FQI) framework (Ernst et al., 2005). We refer to FQI with neural nets as neural FQI (Riedmiller, 2005). In the k-th fitting iteration, FQI trains the Q-function, Qk, to match the target values, yk = R + γPπQk−1, generated using the previous Q-function, Qk−1 (Algorithm 1). Practical methods can be instantiated as variants of FQI, with different target update styles, different optimizers, etc.

3 IMPLICIT UNDER-PARAMETERIZATION IN DEEP Q-LEARNING

In this section, we empirically demonstrate the existence of implicit under-parameterization in deep RL methods that use bootstrapping. We characterize implicit under-parameterization in terms of the effective rank (Yang et al., 2019) of the features learned by a Q-network. The effective rank of the feature matrix Φ, for a threshold δ (we choose δ = 0.01), denoted srankδ(Φ), is given by

srankδ(Φ) = min { k : (Σ_{i=1}^{k} σi(Φ)) / (Σ_{i=1}^{d} σi(Φ)) ≥ 1 − δ },

where {σi(Φ)} are the singular values of Φ in decreasing order, i.e., σ1 ≥ · · · ≥ σd ≥ 0. Intuitively, srankδ(Φ) represents the number of “effective” unique components of the feature matrix Φ that form the basis for linearly approximating the Q-values. When the network maps different states to orthogonal feature vectors, srankδ(Φ) takes high values close to d. When the network “aliases” state-action pairs by mapping them to a smaller subspace, Φ has only a few active singular directions, and srankδ(Φ) takes on a small value (a minimal sketch of this computation is given below).

Definition 1. Implicit under-parameterization refers to a reduction in the effective rank of the features, srankδ(Φ), that occurs implicitly as a by-product of learning deep neural network Q-functions.

While rank decrease also occurs in supervised learning, it is usually beneficial for obtaining generalizable solutions (Gunasekar et al., 2017; Arora et al., 2019). However, we will show that in deep Q-learning, an interaction between bootstrapping and gradient descent can lead to more aggressive rank reduction (or rank collapse), which can hurt performance.

Experimental setup. To study implicit under-parameterization empirically, we compute srankδ(Φ) on a minibatch of state-action pairs sampled i.i.d. from the training data (i.e., the dataset in the offline setting, and the replay buffer in the online setting). We investigate offline and online RL settings on benchmarks including Atari games (Bellemare et al., 2013) and Gym environments (Brockman et al., 2016). We also utilize gridworlds described by Fu et al. (2019) to compare the learned Q-function against the oracle solution computed using tabular value iteration. We evaluate DQN (Mnih et al., 2015) on gridworld and Atari and SAC (Haarnoja et al., 2018) on Gym domains.

Data-efficient offline RL. In offline RL, our goal is to learn effective policies by performing Q-learning on a fixed dataset of transitions. We investigate the presence of rank collapse when deep Q-learning is used with broad state-coverage offline datasets from Agarwal et al. (2020). In the top row of Figure 2, we show that after an initial learning period, srankδ(Φ) decreases in all domains (Atari, Gym and the gridworld).
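To make the tracked quantity concrete, a minimal sketch of computing srankδ(Φ) from a minibatch of penultimate-layer features is shown below. This is an illustrative sketch rather than the authors' code; `phi` is a placeholder name for the [batch, d] feature matrix, and δ = 0.01 follows the choice above.

```python
import numpy as np

def effective_rank(phi: np.ndarray, delta: float = 0.01) -> int:
    """srank_delta(Phi): smallest k such that the top-k singular values
    account for at least (1 - delta) of the total singular-value mass."""
    singular_values = np.linalg.svd(phi, compute_uv=False)  # descending order
    cumulative_mass = np.cumsum(singular_values) / np.sum(singular_values)
    return int(np.searchsorted(cumulative_mass, 1.0 - delta) + 1)

# Example: features for a minibatch of 2048 state-action pairs with d = 512
# (random features here, just to exercise the function).
phi = np.random.randn(2048, 512)
print(effective_rank(phi))  # near 512 for random features; collapses during deep Q-learning
```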
The final value of srankδ(Φ) is often quite small, e.g., in Atari, only 20-100 singular components are active for 512-dimensional features, implying significant underutilization of network capacity. Since under-parameterization is implicitly induced by the learning process, even high-capacity value networks behave as low-capacity networks as more training is performed with a bootstrapped objective (e.g., mean squared TD error). On the gridworld environment, regressing to Q∗ using supervised regression results in a much higher srankδ(Φ) (black dashed line in Figure 2, left) than when using neural FQI. On Atari, even when a 4x larger offline dataset with much broader coverage is used (blue line in Figure 2), rank collapse still persists, indicating that implicit under-parameterization is not due to limited offline dataset size. Figure 2 (2nd row) illustrates that policy performance generally deteriorates as srankδ(Φ) drops, and eventually collapses simultaneously with the rank collapse. While we do not claim that implicit under-parameterization is the only issue in deep Q-learning, the results in Figure 2 show that the emergence of this under-parameterization is strongly associated with poor performance.

To prevent confounding from the distribution mismatch between the learned policy and the offline dataset, which often affects the performance of Q-learning methods, we also study CQL (Kumar et al., 2020b), an offline RL algorithm designed to handle distribution mismatch. We find a similar degradation in effective rank and performance for CQL (Figure A.3), implying that under-parameterization does not stem from distribution mismatch and arises even when the resulting policy is within the behavior distribution (though the policy may not exactly pick the actions observed in the dataset). We provide more evidence in Atari and Gym domains in Appendix A.1.

Data-efficient online RL. Deep Q-learning methods typically use very few gradient updates (n) per environment step (e.g., DQN takes 1 update every 4 steps on Atari, n = 0.25). Improving the sample efficiency of these methods requires increasing n to utilize the replay data more effectively. However, we find that using larger values of n results in higher levels of rank collapse as well as performance degradation. In the top row of Figure 3, we show that larger values of n lead to a more aggressive drop in srankδ(Φ) (red vs. blue/orange lines), and that the rank continues to decrease with more training. Furthermore, the bottom row illustrates that larger values of n result in worse performance, corroborating Fu et al. (2019); Fedus et al. (2020b). We find similar results with the Rainbow algorithm (Hessel et al., 2018) (Appendix A.2). As in the offline setting, directly regressing to Q∗ via supervised learning does not cause rank collapse (black line in Figure 3).

3.1 UNDERSTANDING IMPLICIT UNDER-PARAMETERIZATION AND ITS IMPLICATIONS

How does implicit under-parameterization degrade performance? Having established the presence of rank collapse in data-efficient RL, we now discuss how it can adversely affect performance. As the effective rank of the network features Φ decreases, so does the network's ability to fit the subsequent target values, eventually resulting in an inability to fit Q∗. In the gridworld domain, we measure this loss of expressivity via the error in fitting oracle-computed Q∗ values through a linear transformation of Φ.
When rank collapse occurs, the error in fitting Q∗ steadily increases during training, and the resulting network is not able to predict Q∗ at all by the end of training (Figure 4a) – this entails a drop in performance. In Atari domains, we do not have access to Q∗, and so we instead measure TD error, that is, the error in fitting the target value estimates, R + γPπQk. In SEAQUEST, as rank decreases, the TD error increases (Figure 4b) and the value function is unable to fit the target values, culminating in a performance plateau (Figure 3). This observation is consistent across other environments; we present further supporting evidence in Appendix A.4.
Does bootstrapping cause implicit under-parameterization? We perform a number of controlled experiments in the gridworld and Atari environments to isolate the connection between rank collapse and bootstrapping. We first remove confounding issues of poor network initialization (Fedus et al., 2020a) and non-stationarity (Igl et al., 2020) by showing that rank collapse occurs even when the Q-network is re-initialized from scratch at the start of each fitting iteration (Figure 4c). To show that the problem is not isolated to the control setting, we show evidence of rank collapse in the policy evaluation setting as well. We trained a value network using fitted Q-evaluation for a fixed policy π (i.e., using the Bellman operator T π instead of T ), and found that rank drop still occurs (FQE in Figure 4d). Finally, we show that by removing bootstrapped updates and instead regressing directly to Monte-Carlo (MC) estimates of the value, the effective rank does not collapse (MC Returns in Figure 4d). These results, along with similar findings on other Atari environments (Appendix A.3), indicate that bootstrapping is at the core of implicit under-parameterization.
4 THEORETICAL ANALYSIS OF IMPLICIT UNDER-PARAMETERIZATION
In this section, we formally analyze implicit under-parameterization and prove that training neural networks with bootstrapping reduces the effective rank of the Q-network, corroborating the empirical observations in the previous section. We focus on policy evaluation (Figure 4d and Figure A.9), where we aim to learn a Q-function that satisfies Q = R + γPπQ for a fixed π, for ease of analysis. We also presume a fixed dataset of transitions, D, to learn the Q-function.
4.1 ANALYSIS VIA KERNEL REGRESSION
We first study bootstrapping with neural networks through a mathematical abstraction that treats the Q-network as a kernel machine, following the neural tangent kernel (NTK) formalism (Jacot et al., 2018). Building on prior analysis of self-distillation (Mobahi et al., 2020), we assume that in each iteration of bootstrapping, the Q-function optimizes the squared TD error to target labels yk with a kernel regularizer. This regularizer captures the inductive bias from gradient-based optimization of TD error and resembles the regularization imposed by gradient descent under NTK (Mobahi et al., 2020). The error is computed on (si, ai) ∈ D, whereas the regularization imposed by a universal kernel u with a coefficient of c ≥ 0 is applied to the Q-values at all state-action pairs as shown in Equation 1. We consider the setting c > 0 for all rounds of bootstrapping, which corresponds to the solution obtained by performing gradient descent on TD error for a small number of iterations with early stopping in each round (Suggala et al., 2018) and thus resembles how the updates in Algorithm 1 are typically implemented in practice.
Q_{k+1} \leftarrow \arg\min_{Q \in \mathcal{Q}} \sum_{(s_i, a_i) \in \mathcal{D}} \big(Q(s_i, a_i) - y_k(s_i, a_i)\big)^2 + c \sum_{(s,a)} \sum_{(s',a')} u\big((s,a), (s',a')\big)\, Q(s,a)\, Q(s',a').   (1)
The solution to Equation 1 can be expressed as Q_{k+1}(s,a) = g_{(s,a)}^T (cI + G)^{-1} y_k, where G is the Gram matrix for a special positive-definite kernel (Duffy, 2015) and g_{(s,a)} denotes the row of G corresponding to the input (s, a) (Mobahi et al., 2020, Proposition 1). A detailed proof is in Appendix C. When combined with the fitted Q-iteration recursion, setting labels yk = R + γPπQk−1, we recover a recurrence that relates subsequent value function iterates
Q_{k+1} = G(cI + G)^{-1} y_k = \underbrace{G(cI + G)^{-1}}_{A}\, [R + \gamma P^{\pi} Q_k] = A \Big( \sum_{i=1}^{k} \gamma^{k-i} (P^{\pi} A)^{k-i} \Big) R := A M_k R.   (2)
Equation 2 follows from unrolling the recurrence and setting the algorithm-agnostic initial Q-value, Q0, to be 0. We now show that the sparsity of singular values of the matrix Mk generally increases over fitting iterations, implying that the effective rank of Mk diminishes with more iterations. For this result, we assume that the matrix S is normal, i.e., the norms of the (complex) eigenvalues of S are equal to its singular values. We will discuss how this assumption can be relaxed in Appendix A.7.
Theorem 4.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (k_l)_{l=1}^{\infty}, starting from k_1 = 0, such that, for any two singular values σi(S) and σj(S) of S with σi(S) < σj(S), ∀ l ∈ N and l′ ≥ l,
\frac{\sigma_i(M_{k_{l'}})}{\sigma_j(M_{k_{l'}})} < \frac{\sigma_i(M_{k_l})}{\sigma_j(M_{k_l})} \leq \frac{\sigma_i(S)}{\sigma_j(S)}.   (3)
Hence, srankδ(M_{k_{l′}}) ≤ srankδ(M_{k_l}). Moreover, if S is positive semi-definite, then (k_l)_{l=1}^{\infty} = N, i.e., srank continuously decreases in each fitting iteration.
We provide a proof of the theorem above, as well as a stronger variant that shows a gradual decrease in the effective rank for fitting iterations outside this infinite sequence, in Appendix C. As k increases along the sequence of iterations given by k = (k_l)_{l=1}^{\infty}, the effective rank of the matrix Mk drops, leading to low expressivity of this matrix. Since Mk linearly maps rewards to the Q-function (Equation 2), a drop in the expressivity of Mk results in an inability to model the actual Qπ.
Summary of our analysis. Our analysis of bootstrapping and gradient descent from the view of regularized kernel regression suggests that rank drop happens with more training (i.e., with more rounds of bootstrapping). In contrast to self-distillation (Mobahi et al., 2020), rank drop may not happen in every iteration (and rank may increase between two consecutive iterations occasionally), but srankδ exhibits a generally decreasing trend.
4.2 ANALYSIS WITH DEEP LINEAR NETWORKS UNDER GRADIENT DESCENT
While Section 4.1 demonstrates that rank collapse will occur in a kernel-regression model of Q-learning, it does not illustrate when the rank collapse occurs. To better specify a point in training when rank collapse emerges, we present a complementary derivation for the case when the Q-function is represented as a deep linear neural network (Arora et al., 2019), which is a widely-studied setting for analyzing implicit regularization of gradient descent in supervised learning (Gunasekar et al., 2017; 2018; Arora et al., 2018; 2019). Our analysis will show that rank collapse can emerge as the generated target values begin to approach the previous value estimate, in particular, in the vicinity of the optimal Q-function.
Proof strategy.
Our proof consists of two steps: (1) we show that the effective rank of the feature matrix decreases within one fitting iteration (for a given target value) due to the low-rank affinity of gradient descent; (2) we show that this effective rank drop is "compounded" as we train using a bootstrapped objective. Proposition 4.1 explains (1), and Proposition 4.2, Theorem 4.2 and Appendix D.2 discuss (2).
Additional notation and assumptions. We represent the Q-function as a deep linear network with at least three layers, such that Q(s, a) = W_N W_φ [s; a], where N ≥ 3, W_N ∈ R^{1 × d_{N−1}} and W_φ = W_{N−1} W_{N−2} · · · W_1 with W_i ∈ R^{d_i × d_{i−1}} for i = 1, . . . , N − 1. W_φ maps an input [s; a] to the corresponding penultimate layer's features Φ(s, a). Let W_j(k, t) denote the weight matrix W_j at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). We define W_{k,t} = W_N(k, t) W_φ(k, t) and L_{N,k+1}(W_{k,t}) as the TD error objective in the k-th fitting iteration. We study srankδ(W_φ(k, t)) since the rank of the features Φ = W_φ(k, t)[S, A] is equal to the rank of W_φ(k, t), provided the state-action inputs have high rank. We assume that the evolution of the weights is governed by a continuous-time differential equation (Arora et al., 2018) within each fitting iteration k. To simplify analysis, we also assume that all except the last-layer weights follow a "balancedness" property (Equation D.4), which suggests that the weight matrices in consecutive layers of the deep linear network share the same singular values (but with different permutations). However, note that we do not assume balancedness for the last layer, which would trivially lead to rank-1 features, making our analysis strictly more general than conventionally studied deep linear networks. In this model, we can characterize the evolution of the singular values of the feature matrix W_φ(k, t), using techniques analogous to Arora et al. (2019):
Proposition 4.1. The singular values of the feature matrix W_φ(k, t) evolve according to:
\dot{\sigma}_r(k, t) = -N \cdot \big(\sigma_r^2(k, t)\big)^{1 - \frac{1}{N-1}} \cdot \Big\langle W_N(k, t)^T \frac{dL_{N,k+1}(W_{k,t})}{dW},\; u_r(k, t)\, v_r(k, t)^T \Big\rangle,   (4)
for r = 1, · · · , \min_{i=1}^{N-1} d_i, where u_r(k, t) and v_r(k, t) denote the left and right singular vectors of the feature matrix W_φ(k, t), respectively.
[Figure: evolution of the singular values of W_φ on SEAQUEST (log scale), showing σmax, σ2, σ3, σ10, and σ100 over gradient updates.]
Solving the differential equation (4) indicates that larger singular values will evolve at an exponentially faster rate than smaller singular values (as we also formally show in Appendix D.1), and that the difference in their magnitudes increases disproportionately with increasing t. This behavior also occurs empirically, as illustrated in the figure above (also see Figure D.1), where larger singular values are orders of magnitude larger than smaller singular values. Hence, the effective rank, srankδ(W_φ(k, t)), will decrease with more gradient steps within a fitting iteration k.
Abstract optimization problem for the low-rank solution. Building on Proposition 4.1, we note that the final solution obtained in a bootstrapping round (i.e., fitting iteration) can be equivalently expressed as the solution that minimizes a weighted sum of the TD error and a data-dependent implicit regularizer h_D(W_φ, W_N) that encourages disproportionate singular values of W_φ, and hence, a low effective rank of W_φ.
While the actual form for h is unknown, to facilitate our analysis of bootstrapping, we make a simplification and express this solution as the minimum of Equation 5.
\min_{W_\phi, W_N \in \mathcal{M}} \; \| W_N W_\phi [s; a] - y_k(s, a) \|^2 + \lambda_k\, \mathrm{srank}_\delta(W_\phi).   (5)
Note that the entire optimization path may not correspond to the objective in Equation 5, but Equation 5 represents the final solution of a given fitting iteration. M denotes the set of constraints that W_N obtained via gradient optimization of TD error must satisfy; however, we do not need to explicitly quantify M in our analysis. λ_k is a constant that denotes the strength of rank regularization. Since srankδ is always regularized, our analysis assumes that λ_k > 0 (see Appendix D.1).
Rank drop within a fitting iteration "compounds" due to bootstrapping. In the RL setting, the target values are given by y_k(s, a) = r(s, a) + γPπQ_{k−1}(s, a). First note that when r(s, a) = 0 and Pπ = I, i.e., when the bootstrapping update resembles self-regression, just "copying over weights" from iteration k − 1 to iteration k is a feasible point for solving Equation 5, which attains zero TD error with no increase in srankδ. A better solution to Equation 5 can thus be obtained by incurring non-zero TD error in exchange for a decreased srank, indicating that in this setting, srankδ(W_φ) drops in each fitting iteration, leading to a compounding rank drop effect.
We next extend this analysis to the full bootstrapping setting. Unlike the self-training setting, y_k(s, a) is not directly expressible as a function of the previous W_φ(k, T) due to additional reward and dynamics transformations. Assuming closure of the function class (Assumption D.1) under the Bellman update (Munos & Szepesvári, 2008; Chen & Jiang, 2019), we reason about the compounding effect of rank drop across iterations in Proposition 4.2 (proof in Appendix D.2). Specifically, srankδ can increase in each fitting iteration due to R and Pπ transformations, but will decrease due to the low-rank preference of gradient descent. This change in rank then compounds as shown below.
Proposition 4.2. Assume that the Q-function is initialized to W_φ(0) and W_N(0). Let the Q-function class be closed under the backup, i.e., ∃ W_N^P, W_φ^P such that (R + γPπQ_{k−1})^T = W_N^P(k) W_φ^P(k) [S; A]^T, and assume that the change in srank due to dynamics and reward transformations is bounded: srankδ(W_φ^P(k)) ≤ srankδ(W_φ(k − 1)) + c_k. Then,
\mathrm{srank}_\delta(W_\phi(k)) \leq \mathrm{srank}_\delta(W_\phi(0)) + \sum_{j=1}^{k} c_j - \sum_{j=1}^{k} \frac{\|Q_j - y_j\|}{\lambda_j}.
Proposition 4.2 provides a bound on the value of srank after k rounds of bootstrapping. srank decreases in each iteration due to non-zero TD errors, but potentially increases due to reward and bootstrapping transformations. To instantiate a concrete case where rank clearly collapses, we analyze c_k as the value function gets closer to the Bellman fixed point, which is a favourable initialization for the Q-function, in Theorem 4.2. In this case, the learning dynamics begins to resemble the self-training regime, as the target values approach the previous value iterate, y_k ≈ Q_{k−1}, and thus, as we show next, the potential increase in srank (c_k in Proposition 4.2) converges to 0.
Theorem 4.2. Suppose target values y_k = R + γPπQ_{k−1} are close to the previous value estimate Q_{k−1}, i.e., ∀ s, a, y_k(s, a) = Q_{k−1}(s, a) + ε(s, a), with |ε(s, a)| ≪ |Q_{k−1}(s, a)|. Then, there is a constant ε_0 depending upon W_N and W_φ, such that for all ‖ε‖ < ε_0, c_k = 0. Thus, srank decreases in iteration k: srankδ(W_φ(k)) ≤ srankδ(W_φ(k − 1)) − ||Q_k − y_k|| / λ_k.
We provide a complete form, including the expression for ε_0 and a proof, in Appendix D.3. To empirically show the consequence of Theorem 4.2 – that a decrease in srankδ(W_φ) values can lead to an increase in the distance to the fixed point in a neighborhood around the fixed point – we performed a controlled experiment on a deep linear net, shown in Figure 5, that measures the relationship between srankδ(Φ) and the error to the projected TD fixed point |Q − Q∗|. Note that a drop in srankδ(Φ) corresponds to an increased value of |Q − Q∗|, indicating that a rank drop when Q gets close to a fixed point can affect convergence to it.
5 MITIGATING UNDER-PARAMETERIZATION IMPROVES DEEP Q-LEARNING
We now show that mitigating implicit under-parameterization by preventing rank collapse can improve performance. We place special emphasis on the offline RL setting in this section, since it is particularly vulnerable to the adverse effects of rank collapse. We devise a penalty (or a regularizer) Lp(Φ) that encourages higher effective rank of the learned features, srankδ(Φ), to prevent rank collapse. The effective rank function srankδ(Φ) is non-differentiable, so we choose a simple surrogate that can be optimized over deep networks. Since effective rank is maximized when the magnitudes of the singular values are roughly balanced, one way to increase effective rank is to minimize the largest singular value of Φ, σmax(Φ), while simultaneously maximizing the smallest singular value, σmin(Φ). We construct a simple penalty Lp(Φ) derived from this intuition, given by:
L_p(\Phi) = \sigma_{\max}^2(\Phi) - \sigma_{\min}^2(\Phi).   (6)
Lp(Φ) can be computed by invoking the singular value decomposition subroutines in standard automatic differentiation frameworks (Abadi et al., 2016; Paszke et al., 2019). We estimate the singular values over the feature matrix computed over a minibatch, and add the resulting value of Lp as a penalty to the TD error objective, with a tradeoff factor α = 0.001.
Does Lp(Φ) address rank collapse? We first verify whether controlling the minimum and maximum singular values using Lp(Φ) actually prevents rank collapse. When using this penalty on the gridworld problem (Figure 6a), the effective rank does not collapse, instead gradually decreasing at the onset and then plateauing, akin to the evolution of effective rank in supervised learning. In Figure 6b, we plot the evolution of effective rank on two Atari games in the offline setting (all games in Appendix A.5), and observe that using Lp also generally leads to increasing rank values.
Does mitigating rank collapse improve performance? We now evaluate the performance of the penalty using DQN (Mnih et al., 2015) and CQL (Kumar et al., 2020b) on the Atari dataset from Agarwal et al. (2020) (5% replay data) used in Section 3. Figure 7 summarizes the relative improvement from using the penalty for 16 Atari games. Adding the penalty to DQN improves performance on all 16/16 games with a median improvement of 74.5%; adding it to CQL, a state-of-the-art offline algorithm, improves performance on 11/16 games with a median improvement of 14.1%. Prior work has discussed that standard Q-learning methods designed for the online setting, such as DQN, are generally ineffective with small offline datasets (Kumar et al., 2020b; Agarwal et al., 2020).
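For concreteness, the penalty in Equation 6 adds only a few lines on top of an existing TD objective. The sketch below is a minimal PyTorch-style illustration rather than the exact implementation used in our experiments; the function names, the stop-gradient placement on the targets, and the surrounding loss structure are assumptions.

```python
import torch

def srank_penalty(phi):
    """L_p(Phi) = sigma_max(Phi)^2 - sigma_min(Phi)^2, computed on a minibatch
    feature matrix of shape (batch_size, feature_dim)."""
    sv = torch.linalg.svdvals(phi)   # singular values in descending order, differentiable
    return sv[0] ** 2 - sv[-1] ** 2

def td_loss_with_penalty(q_pred, td_target, phi, alpha=1e-3):
    """Mean squared TD error plus the rank regularizer, with tradeoff alpha = 0.001."""
    td_error = ((q_pred - td_target.detach()) ** 2).mean()
    return td_error + alpha * srank_penalty(phi)
```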
Our results show that mitigating rank collapse makes even such simple methods substantially more effective in this setting, suggesting that rank collapse and the resulting implicit under-parameterization may be a crucial piece of the puzzle in explaining the challenges of offline RL.
Figure 7: DQN and CQL with Lp(Φ) penalty vs. their standard counterparts in the 5% offline setting on Atari from Section 3. Lp improves DQN on 16/16 and CQL on 11/16 games.
We also evaluated the regularizer Lp(Φ) in the data-efficient online RL setting, with results in Appendix A.6. This variant achieved a median improvement of 20.6% with Rainbow (Hessel et al., 2018), but performed poorly with DQN, where it reduced median performance by 11.5%. Thus, while our proposed penalty is effective in many cases in offline and online settings, it does not solve the problem fully: it addresses a symptom rather than the root cause of implicit under-parameterization, and a more sophisticated solution may better prevent the issue. Nevertheless, our results suggest that mitigating implicit under-parameterization can improve the performance of data-efficient RL.
6 RELATED WORK
Prior work has extensively studied the learning dynamics of Q-learning with tabular and linear function approximation, to study error propagation (Munos, 2003; Farahmand et al., 2010) and to prevent divergence (De Farias, 2002; Maei et al., 2009; Sutton et al., 2009; Dai et al., 2018), as opposed to the deep Q-learning analyzed in this work. Q-learning has been shown to have favorable optimization properties with certain classes of features (Ghosh & Bellemare, 2020), but our work shows that the features learned by a neural net when minimizing TD error do not enjoy such guarantees, and instead suffer from rank collapse. Recent theoretical analyses of deep Q-learning have shown convergence under restrictive assumptions (Yang et al., 2020; Cai et al., 2019; Zhang et al., 2020; Xu & Gu, 2019), but Theorem 4.2 shows that implicit under-parameterization appears when the estimates of the value function approach the optimum, potentially preventing convergence. Xu et al. (2005; 2007) present variants of LSTD (Boyan, 1999), which model the Q-function as a kernel machine but do not take into account the regularization from gradient descent, as done in Equation 1, which is essential for implicit under-parameterization. Igl et al. (2020); Fedus et al. (2020a) argue that non-stationarity arising from distribution shift hinders generalization and recommend periodic network re-initialization. Under-parameterization is not caused by this distribution shift, and we find that network re-initialization does little to prevent rank collapse (Figure 4c). Luo et al. (2020) propose a regularization similar to ours, but in a different setting, finding that more expressive features increase the performance of on-policy RL methods.
Finally, Yang et al. (2019) study the effective rank of the Q∗-values when expressed as a |S| × |A| matrix in online RL and find that low ranks for this Q∗-matrix are preferable. We analyze a fundamentally different object: the learned features (and illustrate that a rank collapse of features can hurt), not the Q∗-matrix, whose rank is upper-bounded by the number of actions (e.g., 24 for Atari).
7 DISCUSSION
We identified an implicit under-parameterization phenomenon in deep RL algorithms that use bootstrapping, where gradient-based optimization of a bootstrapped objective can lead to a reduction in the expressive power of the value network. This effect manifests as a collapse of the rank of the features learned by the value network, causing aliasing across states and often leading to poor performance. Our analysis reveals that this phenomenon is caused by the implicit regularization due to gradient descent on bootstrapped objectives. We observed that mitigating this problem by means of a simple regularization scheme improves the performance of deep Q-learning methods.
While our proposed regularization provides some improvement, devising better mitigation strategies for implicit under-parameterization remains an exciting direction for future work. Our method explicitly attempts to prevent rank collapse, but relies on the emergence of useful features solely through the bootstrapped signal. An alternative path may be to develop new auxiliary losses (e.g., Jaderberg et al., 2016) that learn useful features while passively preventing under-parameterization. More broadly, understanding the effects of neural nets and associated factors such as initialization, choice of optimizer, etc. on the learning dynamics of deep RL algorithms, using tools from deep learning theory, is likely to be key to developing robust and data-efficient deep RL algorithms.
ACKNOWLEDGEMENTS
We thank Lihong Li, Aaron Courville, Aurick Zhou, Abhishek Gupta, George Tucker, Ofir Nachum, Wesley Chung, Emmanuel Bengio, Zafarali Ahmed, and Jacob Buckman for feedback on an earlier version of this paper. We thank Hossein Mobahi for insightful discussions about self-distillation and Hanie Sedghi for insightful discussions about implicit regularization and generalization in deep networks. We additionally thank Michael Janner, Aaron Courville, Dale Schuurmans and Marc Bellemare for helpful discussions. AK was partly funded by the DARPA Assured Autonomy program, and DG was supported by an NSF graduate fellowship and compute support from Amazon.
Appendices
A ADDITIONAL EVIDENCE FOR IMPLICIT UNDER-PARAMETERIZATION
In this section, we present additional evidence that demonstrates the existence of the implicit under-parameterization phenomenon from Section 3. In all cases, we plot the values of srankδ(Φ) computed on a batch of 2048 i.i.d. sampled transitions from the dataset.
A.1 OFFLINE RL
A.2 DATA-EFFICIENT ONLINE RL
In the data-efficient online RL setting, we verify the presence of implicit under-parameterization for both the DQN and Rainbow (Hessel et al., 2018) algorithms when a larger number of gradient updates is made per environment step. In these settings, we find that more gradient updates per environment step lead to a larger decrease in effective rank, whereas effective rank can increase when the amount of data re-use is reduced by taking fewer gradient steps.
A.3 DOES BOOTSTRAPPING CAUSE IMPLICIT UNDER-PARAMETERIZATION?
In this section, we provide additional evidence to support our claim from Section 3 that bootstrapping-based updates are a key component behind the existence of implicit under-parameterization. To do so, we demonstrate the following points empirically:
• For the final point in this section, we verify that the non-stationarity of the policy in the Q-learning (control) setting, i.e., when the Bellman optimality operator is used, is not a reason behind the emergence of implicit under-parameterization. The non-stationary policy in a control setting causes the targets to change and, as a consequence, leads to non-zero errors. However, rank drop is primarily caused by bootstrapping rather than non-stationarity of the control objective. To illustrate this, we ran an experiment in the control setting on Gridworld, regressing to the target computed using the true value function Qπ for the current policy π (computed using tabular Q-evaluation) instead of using the bootstrap TD estimate. The results, shown in Figure A.11a, show that srankδ does not decrease significantly when regressing to true control values and in fact increases with more iterations, in contrast to Figure 6a, where rank drops with bootstrapping. This experiment, alongside the experiments discussed above that ablate bootstrapping in the stationary policy evaluation setting, shows that rank deficiency is caused by bootstrapping.
A.4 HOW DOES IMPLICIT REGULARIZATION INHIBIT DATA-EFFICIENT RL?
Implicit under-parameterization leads to a trade-off between minimizing the TD error and encouraging low-rank features, as shown in Figure 4b. This trade-off often results in a decrease in effective rank at the expense of an increase in TD error, resulting in lower performance. Here we present additional evidence to support this. Figure A.11b shows a gridworld problem with one-hot features, which naturally leads to reduced state aliasing. In this setting, we find that the amount of rank drop with respect to the supervised projection of oracle-computed Q∗ values is quite small, and the regression error to Q∗ actually decreases, unlike the case in Figure 4a, where it remains the same or even increases. The method is able to learn policies that attain good performance as well. Hence, when there is very little rank drop (for example, 5 rank units in the example of Figure A.11b), FQI methods are generally able to learn a Φ that can fit Q∗. This provides evidence that typical Q-networks learn a Φ that can fit the optimal Q-function when rank collapse does not occur. In Atari, we do not have access to Q∗, and so we instead measure the error in fitting the target value estimates, R + γPπQk. As rank decreases, the TD error increases (Figure A.12) and the value function is unable to fit the target values, culminating in a performance plateau (Figure A.6).
A.5 TRENDS IN VALUES OF EFFECTIVE RANK WITH PENALTY
In this section, we present the trend in the values of the effective rank when the penalty Lp(Φ) is added. In each plot below, we present the value of srankδ(Φ) with and without the penalty, respectively.
A.5.1 OFFLINE RL: DQN
A.5.2 OFFLINE RL: CQL WITH Lp(Φ) PENALTY
A.6 DATA-EFFICIENT ONLINE RL: RAINBOW
A.6.1 RAINBOW WITH Lp(Φ) PENALTY: RANK PLOTS
A.6.2 RAINBOW WITH Lp(Φ) PENALTY: PERFORMANCE
In this section, we present additional results supporting the hypothesis that preventing rank collapse leads to better performance.
In the first set of experiments, we apply the proposed Lp penalty to Rainbow in the data-efficient online RL setting (n = 4). In the second set of experiments, we present evidence for prevention of rank collapse by comparing rank values for different runs. As we will show in the next section, the state-of-the-art Rainbow (Hessel et al., 2018) algorithm also suffers from rank collapse in the data-efficient online RL setting when more updates are performed per environment step. In this section, we applied our penalty Lp to Rainbow with n = 4, and obtained a median 20.66% improvement on top of the base method. This result is summarized below.
A.7 RELAXING THE NORMALITY ASSUMPTION IN THEOREM 4.1
We can relax the normality assumption on S in Theorem 4.1. An analogous statement holds for non-normal matrices S for a slightly different notion of effective rank, denoted as srankδ,λ(Mk), that utilizes eigenvalue norms instead of singular values. Formally, let λ1(Mk), λ2(Mk), · · · be the (complex) eigenvalues of Mk arranged in decreasing order of their norms, i.e., |λ1(Mk)| ≥ |λ2(Mk)| ≥ · · ·. Then,
\mathrm{srank}_{\delta,\lambda}(M_k) = \min\Big\{ k : \frac{\sum_{i=1}^{k} |\lambda_i(M_k)|}{\sum_{i=1}^{d} |\lambda_i(M_k)|} \geq 1 - \delta \Big\}.
A statement essentially analogous to Theorem 4.1 suggests that in this general case, srankδ,λ(Mk) decreases for all (complex) diagonalizable matrices S, which is the set of almost all matrices of size dim(S). Now, if S is approximately normal, i.e., when |σi(S) − |λi(S)|| is small, then the result in Theorem 4.1 also holds approximately, as we discuss at the end of Appendix C.
We now provide empirical evidence showing that the trend in the values of effective rank computed using singular values, srankδ(Φ), is almost identical to the trend in the effective rank computed using normalized eigenvalues, srankδ,λ(Φ). Since eigenvalues are only defined for a square matrix Φ, in practice, we use a batch of d = dim(φ(s, a)) state-action pairs for computing the eigenvalue rank and compare to the corresponding singular value rank in Figures A.20 and A.21.
Connection to Theorem 4.1. We computed the effective rank of Φ instead of S, since S is a theoretical abstraction that cannot be computed in practice, as it depends on the Green's kernel (Duffy, 2015) obtained by assuming that the neural network behaves as a kernel regressor. Instead, we compare the different notions of ranks of Φ, since Φ is the practical counterpart for the matrix S when using neural networks (as also indicated by the analysis in Section 4.2). In fact, on the gridworld (Figure A.21), we experiment with a feature Φ with dimension equal to the number of state-action pairs, i.e., dim(φ(s, a)) = |S||A|, with the same number of parameters as a kernel parameterization of the Q-function: Q(s, a) = \sum_{s′,a′} w(s′, a′) k(s, a, s′, a′). This can also be considered as performing gradient descent on a "wide" linear network, and we measure the feature rank while observing similar rank trends. Since we do not require the assumption that S is normal in Theorem 4.1 to obtain a decreasing trend in srankδ,λ(Φ), and since we find that in practical scenarios (Figures A.20 and A.21) srankδ(Φ) ≈ srankδ,λ(Φ) with an extremely similar qualitative trend, we believe that Theorem 4.1 still explains the rank collapse practically observed in deep Q-learning and is not vacuous.
A.8 NORMALIZED PLOTS FOR FIGURE 3 / FIGURE A.6
In this section, we provide a set of normalized srank and performance trends for Atari games (the corresponding unnormalized plots are found in Figure A.6).
In these plots, each unit on the x-axis corresponds to one gradient update, and so, since n = 8 prescribes 8× as many updates as n = 1, it runs for 8× as long as n = 1. These plots are in Figure A.22. Note that the trend that effective rank decreases with larger n values also persists when rescaling the x-axis to account for the number of gradient steps, in all but one game. This is expected, since it tells us that performing bootstrapping-based updates in the data-efficient setting (larger n values) still leads to a more aggressive rank drop, as updates are being performed on a relatively more static dataset for larger values of n.
B HYPERPARAMETERS & EXPERIMENT DETAILS
B.1 ATARI EXPERIMENTS
We follow the experiment protocol from Agarwal et al. (2020) for all our experiments, including the hyperparameters and agent architectures provided in Dopamine, and report them for completeness and ease of reproducibility in Table B.1. We only perform hyperparameter selection for the regularization coefficient αp, based on results from 5 Atari games (Asterix, Qbert, Pong, Breakout and Seaquest). We will also open source our code to further aid in reproducing our results.
Evaluation Protocol. Following Agarwal et al. (2020), the Atari environments used in our experiments are stochastic due to sticky actions, i.e., there is a 25% chance at every time step that the environment will execute the agent's previous action again, instead of the agent's new action. All agents (online or offline) are compared using the best evaluation score (averaged over 5 runs) achieved during training, where evaluation is done online every training iteration using an ε-greedy policy with ε = 0.001. We report offline training results with the same hyperparameters over 5 random seeds of the DQN replay data collection, game simulator and network initialization.
Offline Dataset. As suggested by Agarwal et al. (2020), we randomly subsample the DQN Replay dataset containing 50 million transitions to create smaller offline datasets with the same data distribution as the original dataset. We use the 5% DQN replay dataset for most of our experiments. We also report results using the 20% dataset setting (4x larger) to show that our claims remain valid even when we have higher coverage over the state space.
Optimizer-related hyperparameters. For existing off-policy agents, step size and optimizer were taken as published. We used the DQN (Adam) algorithm for all our experiments, given its superior performance over DQN (Nature), which uses RMSProp, as reported by Agarwal et al. (2020).
Atari 2600 games used. For all our experiments in Section 3, we used the same set of 5 games as utilized by Agarwal et al. (2020); Bellemare et al. (2017) to present analytical results. For our empirical evaluation in Appendix A.5, we use the set of games employed by Fedus et al. (2020b), which are deemed suitable for offline RL by Gulcehre et al. (2020). Similar in spirit to Gulcehre et al. (2020), we use the set of 5 games used for analysis for hyperparameter tuning for offline RL methods.
5-game subset: ASTERIX, QBERT, PONG, SEAQUEST, BREAKOUT
16-game subset: in addition to the 5 games above, the following 11 games: DOUBLE DUNK, JAMES BOND, MS. PACMAN, SPACE INVADERS, ZAXXON, WIZARD OF WOR, YARS' REVENGE, ENDURO, ROAD RUNNER, BEAMRIDER, DEMON ATTACK
B.2 GRIDWORLD EXPERIMENTS
We use the gridworld suite from Fu et al. (2019) to obtain gridworlds for our experiments.
All of our gridworld results are computed using the 16 × 16 GRID16SMOOTHOBS environment, which consists of a 256-cell grid, with walls arising randomly with a probability of 0.2. Each state allows 5 different actions (subject to hitting the boundary of the grid): move left, move right, move up, move down and no op. The goal in this environment is to minimize the cumulative discounted distance to a fixed goal location where the discount factor is given by γ = 0.95. The features for this Q-function are given by randomly chosen vectors which are smoothened spatially in a local neighborhood of a grid cell (x, y). We use a deep Q-network with two hidden layers of size (64, 64), and train it using soft Q-learning with entropy coefficient of 0.1, following the code provided by authors of Fu et al. (2019). We use a first-in-first out replay buffer of size 10000 to store past transitions. C PROOFS FOR SECTION 4.1 In this section, we provide the technical proofs from Section 4.1. We first derive a solution to optimization problem Equation 1 and show that it indeed comes out to have the form described in Equation 2. We first introduce some notation, including definition of the kernel G which was used for this proof. This proof closely follows the proof from Mobahi et al. (2020). Definitions. For any universal kernel u, the Green’s function (Duffy, 2015) of the linear kernel operator L given by: [LQ] (s,a) := ∑ (s′,a′) u((s,a), (s ′,a′))Q(s′,a′) is given by the function g((s,a), (s′,a′)) that satisfies:∑ (s,a) u((s,a), (s′,a′)) g((s′,a′), (s̄, ā)) = δ((s,a)− (s̄, ā)), (C.1) where δ is the Dirac-delta function. Thus, Green’s function can be understood as a kernel that “inverts” the universal kernel u to the identity (Dirac-delta) matrix. We can then define the matrix G as the matrix of vectors g(s,a) evaluated on the training dataset, D, however note that the functional g(s,a) can be evaluated for other state-action tuples, not present in D. G((si,ai), (sj ,aj)) := g((si,ai), (sj ,aj)) and g(s,a)[i] = g((s,a), (si,ai)) ∀(si,ai) ∈ D. (C.2) Lemma C.0.1. The solution to Equation 1 is given by Equation 2. Proof. This proof closely follows the proof of Proposition 1 from (Mobahi et al., 2020). We revisit key aspects the key parts of this proof here. We restate the optimization problem below, and solve for the optimum Qk to this equation by applying the functional derivative principle. min Q∈Q J(Q) := ∑ si,ai∈D (Q(si,ai)− yk(si,ai))2 + c ∑ (s,a) ∑ (s′,a′) u((s,a), (s′,a′))Q(s,a)Q(s′,a′). The functional derivative principle would say that the optimal Qk to this problem would satisfy, for any other function f and for a small enough ε→ 0, ∀f ∈ Q : ∂J(Qk + εf) ∂ε ∣∣∣ ε=0 = 0. (C.3) By setting the gradient of the above expression to 0, we obtain the following stationarity conditions on Qk (also denoting (si,ai) := xi) for brevity:∑ xi∈D δ(x− xi) (Qk(xi)− yk(xi)) + c ∑ x u(x,x′)Qk(x ′) = 0. (C.4) Now, we invoke the definition of the Green’s function discussed above and utilize the fact that the Dirac-delta function can be expressed in terms of the Green’s function, we obtain a simplified version of the above relation:∑ x u(x,x′) ∑ xi∈D (Qk(xi)− yk(xi))g(x′,xi) = −c ∑ x u(x,x′)Qk(x ′). (C.5) Since the kernel u(x,x′) is universal and positive definite, the optimal solution Qk(x) is given by: Qk(s,a) = − 1 c ∑ (si,ai)∈D (Qk(si,ai)− yk(si,ai)) · g((s,a), (si,ai)). 
(C.6) Finally we can replace the expression for residual error, Qk(si,ai) − yk(si,ai) using the green’s kernel on the training data by solving for it in closed form, which gives us the solution in Equation 2. Qk(s,a) = − 1 c gT(s,a)(Qk − yk) = g T (s,a)(cI + G) −1yk. (C.7) Next, we now state and prove a slightly stronger version of Theorem 4.1 that immediately implies the original theorem. Theorem C.1. Let S be a shorthand for S = γPπA and assume S is a normal matrix. Then there exists an infinite, strictly increasing sequence of fitting iterations, (kl)∞l=1 starting from k1 = 0, such that, for any two singular-values σi(S) and σj(S) of S with σi(S) ≤ σj(S), ∀ l ∈ N and l′ ≥ l, σi(Mkl′ ) σj(Mkl′ ) < σi(Mkl) σj(Mkl) ≤ σi(S) σj(S) . (C.8) Therefore, the effective rank of Mk satisfies: srankδ(Mkl′ ) ≤ srankδ(Mkl). Furthermore, ∀ l ∈ N and t ≥ kl, σi(Mt) σj(Mt) < σi(Mkl) σj(Mkl) +O (( σi(S) σj(S) )kl) . (C.9) Therefore, the effective rank of Mt, srankδ(Mt), outside the chosen subsequence is also controlled above by the effective rank on the subsequence (srankδ(Mkl)) ∞ l=1. To prove this theorem, we first show that for any two fitting iterations, t < t′, if St and St ′ are positive semi-definite, the ratio of singular values and the effective rank decreases from t to t′. As an immediate consequence, this shows that when S is positive semi-definite, the effective rank decreases at every iteration, i.e., by setting kl = l (Corollary C.1.1). To extend the proof to arbitrary normal matrices, we show that for any S, a sequence of fitting iterations (kl)∞l=1 can be chosen such that S kl is (approximately) positive semi-definite. For this subsequence of fitting iterations, the ratio of singular values and effective rank also decreases. Finally, to control the ratio and effective rank on fitting iterations t outside this subsequence, we construct an upper bound on the ratio f(t): σi(Mt)σj(Mt) < f(t), and relate this bound to the ratio of singular values on the chosen subsequence. Lemma C.1.1 (srankδ(Mk) decreases when Sk is PSD.). Let S be a shorthand for S = γPπA and assume S is a normal matrix. Choose any t, t′ ∈ N such that t < t′. If St and St′ are positive semi-definite, then for any two singular-values σi(S) and σj(S) of S, such that 0 < σi(S) < σj(S): σi(Mt′) σj(Mt′) < σi(Mt) σj(Mt) ≤ σi(S) σj(S) . (C.10) Hence, the effective rank of Mk decreases from t to t′: srankδ(Mt′) ≤ srankδ(Mt). Proof. First note that Mk is given by: Mk := k∑ i=1 γk−i(PπA)k−i = k∑ i=1 Sk−i. (C.11) From hereon, we omit the leading γ term since it is a constant scaling factor that does not affect ratio or effective rank. Almost every matrix S admits a complex orthogonal eigendecomposition. Thus, we can write S := Uλ(S)UH . And any power of S, i.e., , Si can be expressed as: Si = Uλ(S)iUH , and hence, we can express Mk as: Mk := U ( k−1∑ i=0 λ(S)i ) UH = U · diag ( 1− λ(S)k 1− λ(S) ) · UH . (C.12) Since S is normal, its eigenvalues and singular values are further related as σk(S) = |λk(S)|. And this also means that Mk is normal, indicating that σi(Mk) = |λi(Mk)|. Thus, the singular values of Mk can be expressed as σi(Mk) := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.13) When Sk is positive semi-definite, λi(S)k = σi(S)k, enabling the following simplification: σi(Mk) = |1− σi(S)k| |1− λi(S)| . (C.14) To show that the ratio of singular values decreases from t to t′, we need to show that f(σ) = |1−σ t′ | |1−σt| is an increasing function of σ when t′ > t. It can be seen that this is the case, which implies the desired result. 
To further show that srankδ(Mt) ≥ srankδ(Mt′), we can simply show that ∀i ∈ [1, · · · , n], hk(i) := ∑i j=1 σj(Mk)∑n j=1 σj(Mk) increases with k, and this would imply that the srankδ(Mk) cannot increase from k = t to k = t′. We can decompose hk(i) as: hk(i) = i∑ j=1 σj(Mk)∑ l σl(Mk) = 1 1 + ∑n j=i+1 σj(Mk)∑i j=1 σj(Mk) . (C.15) Since σj(Mk)/σl(Mi) decreases over time k for all j, l if σj(S) ≤ σl(S), the ratio in the denominator of hk(i) decreases with increasing k implying that hk(i) increases from t to t′. Corollary C.1.1 (srankδ(Mk) decreases for PSD S matrices.). Let S be a shorthand for S = γPπA. Assuming that S is positive semi-definite, for any k, t ∈ N, such that t > k and that for any two singular-values σi(S) and σj(S) of S, such that σi(S) < σj(S), σi(Mt) σj(Mt) < σi(Mk) σj(Mk) ≤ σi(S) σj(S) . (C.16) Hence, the effective rank of Mk decreases with more fitting iterations: srankδ(Mt) ≤ srankδ(Mk). In order to now extend the result to arbitrary normal matrices, we must construct a subsequence of fitting iterations (kl)∞l=1 where S kl is (approximately) positive semi-definite. To do so, we first prove a technical lemma that shows that rational numbers, i.e., numbers that can be expressed as r = pq , for integers p, q ∈ Z are “dense” in the space of real numbers. Lemma C.1.2 (Rational numbers are dense in the real space.). For any real number α, there exist infinitely many rational numbers pq such that α can be approximated by p q upto 1 q2 accuracy.∣∣∣∣α− pq ∣∣∣∣ ≤ 1q2 . (C.17) Proof. We first use Dirichlet’s approximation theorem (see Hlawka et al. (1991) for a proof of this result using a pigeonhole argument and extensions) to obtain that for any real numbers α andN ≥ 1, there exist integers p and q such that 1 ≤ q ≤ N and, |qα− p| ≤ 1 |N |+ 1 < 1 N . (C.18) Now, since q ≥ 1 > 0, we can divide both sides by q, to obtain:∣∣∣∣α− pq ∣∣∣∣ ≤ 1Nq ≤ 1q2 . (C.19) To obtain infinitely many choices for pq , we observe that Dirichlet’s lemma is valid only for all values of N that satisfy N ≤ 1|qα−p| . Thus if we choose an N ′ such that N ′ ≥ Nmax where Nmax is defined as: Nmax = max { 1 |q′α− p′| ∣∣∣ p′, q′ ∈ Z, 1 ≤ q′ ≤ q} . (C.20) Equation C.20 essentially finds a new value of N , such that the current choices of p and q, which were valid for the first value ofN do not satisfy the approximation error bound. Applying Dirichlet’s lemma to this new value of N ′ hence gives us a new set of p′ and q′ which satisfy the 1q′2 approximation error bound. Repeating this process gives us countably many choices of (p, q) pairs that satisfy the approximation error bound. As a result, rational numbers are dense in the space of real numbers, since for any arbitrarily chosen approximation accuracy given by 1q2 , we can obtain atleast one rational number, pq which is closer to α than 1 q2 . This proof is based on Johnson (2016). Now we utilize Lemmas C.1.1 and C.1.2 to prove Proposition 4.1. Proof of Proposition 4.1 and Theorem C.1 Recall from the proof of Lemma C.1.1 that the singular values of Mk are given by: σi(Mk) := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.21) Bound on Singular Value Ratio: The ratio between σi(Mk) and σj(Mk) can be expressed as σi(Mk) σj(Mk) = ∣∣∣∣ 1− λi(S)k1− λj(S)k ∣∣∣∣ ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ . (C.22) For a normal matrix S, σi(S) = |λi(S)|, so this ratio can be bounded above as σi(Mk) σj(Mk) ≤ 1 + σi(S) k |1− σj(S)k| ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ . (C.23) Defining f(k) to be the right hand side of the equation, we can verify that f is a monotonically decreasing function in k when σi < σj . 
This shows that this ratio of singular values in bounded above and in general, must decrease towards some limit limk→∞ f(k). Construction of Subsequence: We now show that there exists a subsequence (kl)∞l=1 for which Skl is approximately positive semi-definite. For ease of notation, let’s represent the i-th eigenvalue as λi(S) = |λi(S)| · eiθi , where θi > 0 is the polar angle of the complex value λi(s) and |λi(S)| is its magnitude (norm). Now, using Lemma C.1.2, we can approximate any polar angle, θi using a rational approximation, i.e., , we apply lemma C.1.2 on θi2π ∃ pi, qi ∈ N, s.t. ∣∣∣∣ θi2π − piqi ∣∣∣∣ ≤ 1q2i . (C.24) Since the choice of qi is within our control we can estimate θi for all eigenvalues λi(S) to infinitesimal accuracy. Hence, we can approximate θi ≈ 2π piqi . We will now use this approximation to construct an infinite sequence (kl)∞l=1, shown below: kl = l · LCM(q1, · · · , qn) ∀ j ∈ N, (C.25) where LCM is the least-common-multiple of natural numbers q1, · · · qn. In the absence of any approximation error in θi, we note that for any i and for any l ∈ N as defined above, λi(S)kl = |λi(S)|kl · exp ( 2iπ · piqi · kl ) = |λi(S)|kl , since the polar angle for any kl is going to be a multiple of 2π, and hence it would fall on the real line. As a result, Skl will be positive semi-definite, since all eigenvalues are positive and real. Now by using the proof for Lemma C.1.1, we obtain the ratio of i and j singular values are increasing over the sequence of iterations (kj)∞j=1. Since the approximation error in θi can be controlled to be infinitesimally small to prevent any increase in the value of srankδ due to it (this can be done given the discrete form of srankδ), we note that the above argument applies even with the approximation, proving the required result on the subsequence. Controlling All Fitting Iterations using Subsequence: We now relate the ratio of singular values within this chosen subsequence to the ratio of singular values elsewhere. Choose t, l ∈ N such that t > kl. Earlier in this proof, we showed that the ratio between singular values is bounded above by a monotonically decreasing function f(t), so σi(Mt) σj(Mt) ≤ f(t) < f(kl). (C.26) Now, we show that that f(kl) is in fact very close to the ratio of singular values: f(kl) = |1− σi(S)kl | |1− σj(S)kl | ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣ ≤ σi(Mt)σj(Mt) + 2σi(S) kl |1− σj(S)kl | ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣. (C.27) The second term goes to zero as kl increases; algebraic manipulation shows that this gap be bounded by f(kl) ≤ σi(Mkl) σj(Mkl) + ( σi(S) σj(S) )kl 2σj(S) |1− σj(S)| ∣∣∣∣1− λj(S)1− λi(S) ∣∣∣∣︸ ︷︷ ︸ constant . (C.28) Putting these inequalities together proves the final statement, σi(Mt) σj(Mt) ≤ σi(Mkl) σj(Mkl) +O (( σi(S) σj(S) )kl) . (C.29) Extension to approximately-normal S. We can extend the result in Theorem C.1 (and hence also Theorem 4.1) to approximately-normal S. Note that the main requirement for normality of S (i.e., σi(S) = |λi(s)|) is because it is straightforward to relate the eigenvalue of S to M as shown below. |λi(Mk)| := ∣∣∣∣1− λi(S)k1− λi(S) ∣∣∣∣ , (C.30) Now, since the matrix S is approximately normal, we can express it using its Schur’s triangular form as, S = U · (Λ + N) ·UH , where Λ is a diagonal matrix and N is an “offset” matrix. The departure from normality of S is defined as: ∆(S) := infN ||N||2, where the infimum is computed over all matrices N that can appear in the Schur triangular form for S. For a normal S only a single value of N = 0 satisfies the Schur’s triangular form. 
For an approximately normal matrix S, ||N||2 ≤ ∆(S) ≤ ε, for a small ε. Furthermore note that from Equation 6 in Ruhe (1975), we obtain that |σi(S)− |λi(S)|| ≤ ∆(S) ≤ ε, (C.31) implying that singular values and norm-eigenvalues are close to each other for S. Next, let us evaluate the departure from normality of Mk. First note that, Sj = U · (Λ +N)j ·UH , and so, Mk = U · (∑k j=1(Λ + N) j ) ·UH and if ||N||2 ≤ ε, for a small epsilon (i.e., considering only terms that are linear in N for (Λ + N)j), we note that: |σi(Mk)− |λi(Mk)|| ≤ k∑ j=1 j · |λ1(S)|j−1∆(S) ≤ 1 (1− |λ1(S)|)2 · ε. (C.32) Thus, the matrix Mk is also approximately normal provided that the max eigenvalue norm of S is less than 1. This is true, since S = γPπA (see Theorem 4.1, where both Pπ and A have eigenvalues less than 1, and γ < 1. Given that we have shown that Mk is approximately normal, we can show that srankδ(Mk) only differs from srankδ,λ(Mk), i.e., , the effective rank of eigenvalues, in a bounded amount. If the value of ε is then small enough, we still retain the conclusion that srankδ(Mk) generally decreases with more training by following the proof of Theorem C.1. D PROOFS FOR SECTION 4.2 In this section, we provide technical proofs from Section 4.2. We start by deriving properties of optimization trajectories of the weight matrices of the deep linear network similar to Arora et al. (2018) but customized to our set of assumptions, then prove Proposition 4.1, and finally discuss how to extend these results to the fitted Q-iteration setting and some extensions not discussed in the main paper. Similar to Section 4.1, we assume access to a dataset of transitions, D = {(si,ai, r(si,ai), s′i} in this section, and assume that the same data is used to re-train the function. Notation and Definitions. The Q-function is represented using a deep linear network with at least 3 layers, such that Q(s,a) = WNWN−1 · · ·W1[s;a], where N ≥ 3,WN ∈ R1×dN−1 , (D.1) and Wi ∈ Rdi×di−1 for i = 1, . . . , N − 1. We index the weight matrices by a tuple (k, t): Wj(k, t) denotes the weight matrix Wj at the t-th step of gradient descent during the k-th fitting iteration (Algorithm 1). Let the end-to-end weight matrix WNWN−1 · · ·W1 be denoted shorthand as WN :1, and let the features of the penultimate layer of the network, be denoted as Wφ(k, t) := WN−1(k, t) · · ·W1(k, t). Wφ(k, t) is the matrix that maps an input [s;a] to corresponding features Φ(s,a). In our analysis, it is sufficient to consider the effective rank of Wφ(k, t) since the features Φ are given by: Φ(k, t) = Wφ(k, t)[S;A], which indicates that: rank(Φ(k, t)) = rank(Wφ(k, t)[S;A]) ≤ min (rank(Wφ(k, t)), rank([S;A])) . Assuming the state-action space has full rank, we are only concerned about rank(Wφ(k, t)) which justifies our choice for analyzing srankδ(Wφ(k, t)). Let Lk+1(WN :1(k, t)) denote the mean squared Bellman error optimization objective in the k-th fitting iteration. Lk+1(WN :1(k, t)) = |D|∑ i=1 (WN (k, t)Wφ(k, t)[si;ai]− yk(si,ai))2 , where yk = R + γPπQk. When gradient descent is used to update the weight matrix, the updates to Wi(k, t) are given by: Wj(k, t+ 1)←Wj(k, t)− η ∂Lk+1(WN :1(k, t)) ∂Wj(k, t) . If the learning rate η is small, we can approximate this discrete time process with a continuous-time differential equation, which we will use for our analysis. We use Ẇ (k, t) to denote the derivative of W (k, t) with respect to t, for a given k. 
Ẇj(k, t) = −η ∂Lk+1(WN :1(k, t)) ∂Wj(k, t) (D.2) In order to quantify the evolution of singular values of the weight matrix, Wφ(k, t), we start by quantifying the evolution of the weight matrix Wφ(k, t) using a more interpretable differential equation. In order to do so, we make an assumption similar to but not identical as Arora et al. (
1. What is the focus of the paper regarding deep RL methods?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical analyses?
3. Do you have any concerns about a section of the paper, such as Section 4.1?
4. How do the experimental results align with the theoretical analysis?
5. Are there any suggestions for additional experiments or improvements to the paper?
Review
###Summary:
This paper identifies a type of implicit under-parameterization phenomenon in deep RL methods that use bootstrapping. It is found that after an initial learning period, the effective rank of the feature matrix keeps decreasing. This implies that the representational power of the network is not fully utilized. The authors call it a type of implicit under-parameterization. Moreover, the emergence of this under-parameterization strongly correlates with poor performance. Some preliminary theoretical analyses are provided to explain this phenomenon.
###Pros:
The paper is well written, and I can follow the idea very smoothly. The implicit under-parameterization phenomenon seems very intriguing and useful for designing better bootstrapping-based deep RL methods. The theoretical analyses are very illustrative, though still preliminary.
###Cons:
The analysis in Section 4.1 does not seem correct to me. For kernel regression, the implicit bias of gradient descent with an infinite number of iterations is to pick the solution with the smallest RKHS norm. This implies c = 0 in Eq. (1), which would make the subsequent analysis problematic. However, if early stopping is applied, the GD solutions will be equal to the one given in Eq. (1) with c > 0. The value of c depends on how the early stopping is applied. Please refer to [1] for more details.
###Other comments:
The analysis in Section 4.1 is very illustrative due to the use of the kernel model. But all the experiments are done for neural networks. It would be very helpful if some extra experiments with kernel models could be added, for which we can directly compare the experimental results and the theoretical analysis.
[1] Suggala, Arun, Adarsh Prasad, and Pradeep K. Ravikumar. "Connecting optimization and regularization paths." Advances in Neural Information Processing Systems. 2018.
Post-rebuttal Comments
I thank the authors for the response and the efforts to update the draft. I think this submission made some original contributions.
ICLR
Title Learning Task-Relevant Features via Contrastive Input Morphing
Abstract A fundamental challenge in artificial intelligence is learning useful representations of data that yield good performance on a downstream classification task, without overfitting to spurious input features. Extracting task-relevant predictive information becomes particularly challenging for high-dimensional, noisy, real-world data. We propose Contrastive Input Morphing (CIM), a representation learning framework that learns input-space transformations of the data to mitigate the effect of irrelevant input features on downstream performance via a triplet loss. Empirically, we demonstrate the efficacy of our approach on various tasks which typically suffer from the presence of spurious correlations, and show that CIM improves the performance of other representation learning methods such as variational information bottleneck (VIB) when used in conjunction.
1 INTRODUCTION
At the heart of modern machine learning is the problem of representation learning, or extracting features from raw data that enable predictions with high accuracy (Hinton & Salakhutdinov, 2006; Vincent et al., 2010; Chen et al., 2016; Van Den Oord et al., 2017; Oord et al., 2018). Despite the recent successes of deep neural networks (Dean et al., 2012; LeCun et al., 2015), their rapidly growing size and large-scale training procedures, coupled with high-dimensional data sources, pose significant challenges in learning models that perform well on a given task without overfitting to spurious input features (Zhang et al., 2016; Ilyas et al., 2019; Geirhos et al., 2020). As a result, trained networks have been shown to fail spectacularly on out-of-domain generalization tasks (Beery et al., 2018; Rosenfeld et al., 2018) and for rare subgroups present in data (Hashimoto et al., 2018; Goel et al., 2020), among others. A wide range of methods have been proposed to tackle this problem, including regularization, data augmentation, leveraging causal explanations, and self-training (Srivastava et al., 2014; Chen et al., 2020b; Sagawa et al., 2019; Chen et al., 2020b). In particular, prior art places a heavy emphasis on lossless access to the input data during training, then constructing a high-level representation which extracts the necessary information. Yet it is reasonable to assume that in some cases, we desire access to only a subset of the input which is relevant to the task – for example, the background color in an image of a "7" is unnecessary for identifying its digit class.
The fundamental challenge, then, is discerning which parts of the input are relevant without requiring privileged information (e.g., the nature of the downstream task) at training time. Our approach, Contrastive Input Morphing (CIM), uses labeled supervision to learn input-space transformations of the data that mitigate the effect of irrelevant input features on predictive performance. Though the Data Processing Inequality (Cover, 1999) states that no amount of input processing can increase its mutual information (MI) with the predictive variable, we propose to transform the data in such a way that it makes it easier for the model to extract the relevant predictive information for the downstream task – that is, we attempt to increase the amount of usable information for our representations (Xu et al., 2020). We emphasize that our method does not assume access to the exact nature of the downstream task, such as attribute labels for rare subgroups. The key workhorse of CIM is an auxiliary network called the Transformation Network (TN). Leveraging ideas from neural style transfer (Gatys et al., 2015; Li et al., 2017b), the TN is trained via a triplet loss on feature correlation matrices (Schroff et al., 2015; Koch, 2015). Intuitively, this objective uses the shared information from competing classes (“negative examples”) as a proxy for spurious correlations, while leveraging the shared information within the same class (“positive examples”) as a heuristic for task-relevancy (Khosla et al., 2020). The framework for CIM is quite general: it is (1) complementary to MI-based representation learning techniques such as variational information bottleneck (VIB) (Alemi et al., 2016); and (2) can be used as a plug-in module for training any classifier. For the flowchart of the training procedure of the CIM refer to Figure 1. Empirically, we evaluate CIM on three settings that suffer from spurious correlations: classification with nuisance background information, out-of-domain (OOD) generalization, and improving accuracy uniformly across subgroups. In the first task, CIM outperforms ERM on colored MNIST and improves over the ResNet-50 baseline on the Background Challenge (Xiao et al., 2020). Similarly, CIM outperforms relevant baselines using ResNet-18 on the VLCS dataset (Torralba & Efros, 2011) for OOD generalization. For subgroup accuracies, CIM outperforms both supervised and unsupervised methods on CelebA (Liu et al., 2015) in terms of worst-group accuracy (by 1.7% and 41.4% respectively), while outperforming unsupervised methods by up to 12.9% on Waterbirds. In summary, our contributions in this work can be outlined as follows: 1. We propose CIM, a method demonstrating that lossy access to input data helps extract good task-relevant representations. 2. We show that CIM is complementary to existing methods, as the learned transformations can be leveraged by other MI-based representation learning techniques such as VIB. 3. We empirically verify the robustness of the learned representations to spurious correlations on a variety of tasks (Section 4). 2 PRELIMINARIES We consider the standard supervised learning setup where x ∈ X ⊆ Rd is the input variable, and y ∈ Y = {1, . . . , k} is the set of corresponding labels. We assume access to samples D = {(xi, yi)}ni=1 drawn from an underlying (unknown) joint distribution pdata(x, y), and use capital letters to denote random variables, e.g. X and Y . We use P (X,Y ) to denote their joint distribution as well as P (·) for the respective marginal (e.g. 
P(X) for the marginal distribution of X). 2.1 BACKGROUND AND PROBLEM SETUP Our goal is to learn a classifier $f_\theta : \mathcal{X} \to \mathcal{Y}$, where $f_\theta \in \Theta$ achieves low error according to some loss function $\ell : \Theta \times (\mathcal{X} \times \mathcal{Y}) \to \mathbb{R}$. Specifically, we minimize the empirical risk: $L_{\mathrm{sup}}(\theta) = \mathbb{E}_{(x,y)\sim p_{\mathrm{data}}(x,y)}[\ell(f_\theta(x), y)] \approx \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i)$ (1). In addition to good classification performance, we aim to learn representations of the data which: (a) are highly predictive of the downstream task; and (b) do not rely on spurious input features. That is, the learned representations should be task-relevant. Information bottleneck. A natural way to measure “task-relevance” in random variables is to consider the total amount of information that a compressed (stochastic) representation Z contains about the input variable X and the output variable Y. In particular, information bottleneck (IB) (Tishby et al., 2000; Chechik et al., 2005; Alemi et al., 2016) is a framework which utilizes mutual information (MI) to quantify this dependence via the following objective: $\min_{P(Z|X)} I(X;Z) - \beta I(Z;Y)$ (2), where $\beta > 0$ controls the importance of obtaining good performance on the downstream task. Given two random variables X and Y, $I(X;Y)$ is computed as $D_{\mathrm{KL}}(P(X,Y)\,\|\,P(X)P(Y))$, where $D_{\mathrm{KL}}$ denotes the Kullback-Leibler (KL) divergence between two probability distributions. The IB framework can be extended to account for additional sources of input data that are known to contain irrelevant information about the predictive task. This setting, known as IB with side information (Chechik & Tishby, 2003), adds a term to the IB objective which simultaneously minimizes the MI between this nuisance variable and the learned representation. Concretely, given random variables $(X, Y_+, Y_-)$ where $Y_+$ denotes the task of interest and $Y_-$ denotes a spurious auxiliary variable, the objective becomes: $\min_{P(Z|X)} I(X;Z) - \beta\,(I(Z;Y_+) - \gamma I(Z;Y_-))$ (3), where $\gamma$ is another tunable hyperparameter for the nuisance task. We note that this framework bears resemblance to triplet-based losses such as (Schroff et al., 2015; Koch, 2015), as well as contrastive learning approaches that leverage MI maximization (Linsker, 1988; Hjelm et al., 2018; Oord et al., 2018; Tian et al., 2019; Khosla et al., 2020). It is also in line with the InfoMin principle suggested by (Tian et al., 2020) for learning good views in self-supervised contrastive learning. Although the MI framework is compelling, as it captures arbitrarily complex dependencies between random variables, there exist several challenges with its use in practice. The difficulty of computing MI in high dimensions, for example, is well documented and demands the use of various neural estimators (Barber & Agakov, 2003; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2019). Additionally, approaches such as IB posit restrictive assumptions on the relationship between $Y_+$ and $Y_-$; namely, that they must be conditionally independent given X, which is difficult to enforce. 3 CONTRASTIVE INPUT MORPHING Motivated by the above challenges, we propose to approximate the information content between task-relevant and -irrelevant features via correlations in higher-dimensional feature spaces. This procedure helps our method learn the appropriate input-space transformations. 3.1 MEASURING RELEVANCE VIA CORRELATIONS Another way to measure “task-relevance” in random variables is to consider their conditional dependencies, as captured by their covariances.
Specifically, consider a feature map $\phi : \mathcal{X} \to \mathbb{R}^{d}$ that takes in an input and returns a representation $\phi(x)$ that is of the same dimensionality as the original input x. We can use this feature map to construct a covariance (Gram) matrix of $\phi(x)$, where $\Sigma_{XX} = \phi(x)^{\top}\phi(x)$. Although the covariance only measures linear dependencies among the input features, we can capture more complex relationships via an arbitrarily complex feature map $\phi$. Training Procedure: For the Transformation Network (TN), we utilize a convolutional autoencoder to obtain a reconstructed image of the same dimensionality as the input, as shown in Figure 2. Our method operates over triplets $(x, x^{+}, x^{-})$, where $(x, x^{+})$ denote examples from the same class while $x^{-}$ is an example from a different class than x. We use a supervised contrastive loss to train the network, similar in spirit to (Khosla et al., 2020). Specifically, we learn an intermediate feature map $\phi : \mathcal{X} \to \mathbb{R}^{(H \times W) \times C}$ using the TN that takes in an input x and returns a representation $\phi(x)$ that is of the same dimensionality as the original input, where $(H \times W)$ denotes the height and width of the image, and C denotes the number of channels. We use this feature map to construct a Gram matrix of the input features, where $\Sigma_{XX} = \phi(x)^{\top}\phi(x)$. Then, the triplet loss encourages the positive examples’ Gram matrix representations to move closer in embedding space to those of the input, while ensuring that the negative examples’ representations are further apart: $\mathcal{L}_{\mathrm{con}}(\phi) = \|\Sigma_{XX} - \Sigma_{X^{+}X^{+}}\|_{2} - \max(\alpha, \|\Sigma_{XX} - \Sigma_{X^{-}X^{-}}\|_{2})$ (4), minimized over $\phi$, for some margin $\alpha > 0$. The output of $\phi(\cdot)$ is then passed through a 1 × 1 2-D convolution layer with a sigmoid activation to produce a (single-channel) soft “mask” m(x), which is then multiplied with the original input image x to obtain the final representation $\psi(x) = x \circ m(x)$. Finally, the classifier $f_\theta(\cdot)$ is trained on the transformed input image $\psi(x)$. Learning Objective: The overall loss function can be written as: $\mathcal{L}(\phi, \theta) = \lambda \mathcal{L}_{\mathrm{con}}(\phi) + L_{\mathrm{sup}}(\theta)$ (5), where $\lambda$ is a multiplier which controls the contribution of the TN loss $\mathcal{L}_{\mathrm{con}}(\phi)$ from Equation 4 and $L_{\mathrm{sup}}(\theta)$ is the standard cross-entropy loss for multi-class classification. The parameters of the transformation network ($\phi$) and the classifier ($\theta$) are trained jointly. In our experiments, we found that the value $\lambda = 0.0001$ worked well. It is well known that $\mathcal{L}_{\mathrm{con}}$ can be interpreted as minimizing a specific form of Maximum Mean Discrepancy (MMD) (Gatys et al., 2015; Li et al., 2017b). For the identity map $\phi(\cdot)$, Equation 4 is equivalent to minimizing MMD between two kernelized inputs where the specific kernel is the second-order polynomial kernel. In this way, CIM’s Transformation Network can also be seen as minimizing the distance between the mean embeddings of the underlying distributions for X and $X^{+}$ while simultaneously maximizing the distance for those of X and $X^{-}$. 3.2 A MOTIVATING EXAMPLE We present a concrete example for the intuition behind our approach using the MNIST dataset (LeCun, 1998). We first construct a challenging input reconstruction task, in which a red square is placed on the bottom right of all samples and the model is trained to reconstruct a random digit that is different from the input’s class. As the TN is trained as an independent autoencoding module, we find that the TN learns to pick up shared signals across inputs (i.e., the black background) before converging to the red square as the source of shared (spurious) correlations among examples.
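To make the training procedure and the objective in Equations 4 and 5 concrete, here is a minimal PyTorch-style sketch. It is an illustrative reconstruction rather than the authors' code: the `tn.features` and `tn.mask_head` modules, the shapes, and the default hyperparameter values are assumptions introduced for the example.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (B, C, H, W) feature map phi(x) produced by the transformation network.
    # Flatten the spatial dimensions and compute a (C x C) Gram matrix per example.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (h * w)  # normalization is an added assumption

def cim_loss(tn, classifier, x, x_pos, x_neg, y, lam=1e-4, alpha=1.0):
    # Triplet loss on Gram matrices (Eq. 4): pull the anchor's Gram matrix toward
    # the positive's, and subtract max(alpha, distance to the negative's).
    g_a = gram_matrix(tn.features(x))
    g_p = gram_matrix(tn.features(x_pos))
    g_n = gram_matrix(tn.features(x_neg))
    d_pos = (g_a - g_p).flatten(1).norm(dim=1)
    d_neg = (g_a - g_n).flatten(1).norm(dim=1)
    l_con = (d_pos - torch.clamp(d_neg, min=alpha)).mean()

    # Soft mask m(x) from a 1x1 conv + sigmoid, applied to the raw input so the
    # classifier sees psi(x) = x * m(x); then the standard cross-entropy term (Eq. 5).
    mask = torch.sigmoid(tn.mask_head(tn.features(x)))  # (B, 1, H, W)
    logits = classifier(x * mask)
    l_sup = F.cross_entropy(logits, y)
    return lam * l_con + l_sup  # L(phi, theta) = lambda * L_con + L_sup
```

In the paper, the feature map, the mask head, and the classifier are trained jointly with lambda = 0.0001; the Gram-matrix normalization above is a convenience not specified in the text.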
Next, we evaluate whether we can remove this source of variation for digit classification by passing a lossy version of the input into the classifier (Figure 1). As shown in Figure 2 (bottom), the input transformation learned by the model (i.e., $\psi(x) = x \circ m(x)$) de-emphasizes the shared features while highlighting the task-relevant features. 4 EXPERIMENTAL RESULTS For our experiments, we are interested in empirically investigating the following questions: 1. Are CIM’s learned representations robust to spurious correlations in the input features? 2. Does the input transformation learned by CIM improve domain generalization? 3. How well can CIM preserve classification accuracy across subgroups? Datasets: We consider various datasets to test the effectiveness of our method. We first construct a colored variant of MNIST to demonstrate that CIM successfully ignores nuisance background information in a digit classification task, then further explore this finding on the Background Challenge (Xiao et al., 2020). Next, we evaluate CIM on the VLCS dataset (Torralba & Efros, 2011) to demonstrate that the input transformations help in learning representations that generalize to out-of-domain distributions. Then, we study two benchmark datasets, CelebA (Liu et al., 2015) and Waterbirds (Wah et al., 2011; Zhou et al., 2017; Sagawa et al., 2019), to show that CIM preserves subgroup accuracies. Models: We use different classifier architectures depending on the downstream task. While ResNet-50 is the default choice for most datasets, we also utilize Inception-ResNetV2 (Szegedy et al., 2016) to obtain better performance, ResNet-18 for a fair comparison with existing OOD generalization techniques, and PointNet (Qi et al., 2017) for 3D point cloud classification. We also experiment with Variational Information Bottleneck (VIB) (Alemi et al., 2016) as both a complementary and competing approach to CIM, and use ResNet-50 as the VIB encoder. We refer the reader to Appendix A.2 for additional details on model architectures and hyperparameters. We note that the transformed inputs and the feature maps $\phi$ are semantically meaningful, as shown in Figure 4. 4.1 CLASSIFICATION WITH NUISANCE BACKGROUND INFORMATION Colored MNIST: First, we assess whether CIM can distinguish between two MNIST digit classes (2 and 7) in the presence of a spurious input feature (background color). As outlined in Figure 3(a), we construct a dataset such that a classifier will achieve low accuracy by relying on background color. For a given proportion α, we color α% of all digits labeled “2” in the training set with blue backgrounds, and color the remaining (1−α)% labeled “7” with yellow backgrounds. We vary this proportion over α ∈ {0.5%, 1%, 2%}. At test time, we color all the digits labeled “2” in blue, while coloring the “7” digits in yellow. As shown in Figure 3 (b), CIM better utilizes relevant information for the downstream classification task, outperforming ERM by 13%, 10.5%, and 3% on models trained with α = {0.5%, 1%, 2%}, respectively. Perhaps more interestingly, a hybrid approach of VIB + CIM outperforms all other methods – this suggests that the input transformations learned by CIM are indeed preserving task-relevant information that can be better leveraged by InfoMax methods such as VIB. More experimental details can be found in Appendix A.2. The Background Challenge: Next, we evaluate whether the favorable results from MNIST translate to a more challenging setup, and test CIM on the Background Challenge (Xiao et al., 2020).
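Before turning to the Background Challenge results, here is a hedged sketch of the colored-MNIST construction described above. It is one possible instantiation, not the authors' generation code: the coloring helper, the exact RGB values, and the assumed training-time color pairing are assumptions (the paper's text and Figure 3(a) are not fully consistent on this point, as a reviewer notes below).

```python
import numpy as np

BLUE, YELLOW = (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)

def colorize(digit, rgb):
    # digit: (28, 28) grayscale array in [0, 1]; paint the dark background with `rgb`
    # while keeping the bright digit strokes visible.
    img = np.repeat(digit[..., None], 3, axis=-1)
    return img + (1.0 - img) * np.asarray(rgb)

def colored_mnist_split(images, labels, alpha=0.01, train=True, seed=0):
    # Assumed reading of the construction: during training each class is paired with
    # a background color for all but an alpha fraction of examples, and at test time
    # the pairing is reversed, so a classifier that relies on background color fails.
    rng = np.random.default_rng(seed)
    data = []
    for img, y in zip(images, labels):
        if y not in (2, 7):
            continue
        test_color = BLUE if y == 2 else YELLOW    # stated test-time coloring
        train_color = YELLOW if y == 2 else BLUE   # assumed majority train-time coloring
        color = (test_color if rng.random() < alpha else train_color) if train else test_color
        data.append((colorize(img, color), y))
    return data
```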
The Background Challenge is a public dataset consisting of ImageNet-9 (Deng et al., 2009) test sets with varying amounts of foreground and background signals, designed to measure the extent to which deep classifiers rely on spurious features for image classification. As shown in Table 1, CIM outperforms the original ResNet-50’s performance by 4.1% on Mixed-rand, 0.8% on Mixed-same, and 0.5% on the original test set. Mixed-rand refers to the setting where the foreground is overlaid onto a random background, while Mixed-same corresponds to the test set where the foreground is placed on a background from the same class. These results demonstrate that CIM indeed learns task-relevant representations without relying on nuisance background information. 4.2 CIM GENERALIZES OVER DIFFERENT DOMAINS In this experiment, we evaluate CIM on OOD generalization performance using the VLCS benchmark (Torralba & Efros, 2011). VLCS consists of images from five object categories shared by the PASCAL VOC 2007, LabelMe, Caltech, and Sun datasets, which are considered to be four separate domains. We follow the standard evaluation strategy used in (Carlucci et al., 2019), where we partition each domain into a train (70%) and test set (30%) by random selection from the overall dataset. As summarized in Table 2, CIM outperforms state-of-the-art methods based on ResNet-18 on each domain, bolstering our claim that using a lossy transformation of the input is helpful for learning task-relevant representations that generalize across domains. 4.3 CIM PRESERVES SUBGROUP PERFORMANCE In this experiment, we investigate whether representations learned by CIM perform well on all subgroups on the CelebA and Waterbirds datasets. Preserving good subgroup-level accuracy is challenging for naive ERM-based methods, given their tendency to latch onto spurious correlations (Kim et al., 2019; Arjovsky et al., 2019; Sagawa et al., 2020; Chen et al., 2020b). Most prior works leverage privileged information such as group labels to mitigate this effect (Ben-Tal et al., 2013; Vapnik & Izmailov, 2015; Sagawa et al., 2019; Goel et al., 2020; Xiao et al., 2020). As TN in CIM is trained to capture task-relevant features and minimize nuisance correlations between classes, we hypothesize that CIM should perform well at the subgroup level even without explicit group label information. For a fair comparison with the prior work, we use ResNet-50 as the backbone classifier for the CIM , but also train both ERM and CIM with an Inception-ResNetV2 (Szegedy et al., 2016) backbone to assess the impact of using a larger model (denoted by ERM* and CIM*, respectively). We also use ResNet-50 for VIB’s encoder and InfoMask’s discriminator (see Appendix A.2). Table 3 shows that CIM outperforms both supervised and unsupervised methods on CelebA in terms of worst-group accuracy (2.4% improvement over CAMEL, the top-performing supervised model), and outperforms unsupervised models while significantly improving over ERM on the Waterbirds dataset (16.7% increase). We emphasize that the favorable performance of CIM is obtained without using subgroup labels, in contrast with previous approaches. We refer the reader to Appendix B.3 for further details and ablation studies regarding the different components of our method. 5 RELATED WORK Our work bridges several lines of work in contrastive learning and learning representations that are robust to spurious correlations. Contrastive representation learning. 
There has been a flurry of recent work in contrastive methods for representation learning, which encourages an encoder network to map “positive” examples closer together in a latent embedding space while spreading the “negative” examples further apart (Oord et al., 2018; Hjelm et al., 2018; Wu et al., 2018; Tian et al., 2019; Arora et al., 2019; Chen et al., 2020a). Included are triplet-based losses (Schroff et al., 2015; Koch, 2015) and noise contrastive estimation losses (Gutmann & Hyvärinen, 2010). In particular, recent work (Tian et al., 2020; Wu et al., 2020) has shown that minimizing MI between views while maximizing predictive information of the representations with respect to the downstream task, leads to performance improvements, similar to IB (Chechik & Tishby, 2003). While most contrastive approaches are selfsupervised, (Khosla et al., 2020) utilizes class labels as part of their learning procedure, similar to our approach. We emphasize that CIM is not meant to be directly comparable to the aforementioned techniques, as our objective is to learn input transformations of the data that are task-relevant. Robustness of representations Several works have considered the problem of learning relevant features that do not rely on spurious correlations with the predictive task (Heinze-Deml & Meinshausen, 2017; Sagawa et al., 2020; Chen et al., 2020b). Though (Wang et al., 2019) is similar in spirit to CIM, they utilize gray-level co-occurrence matrices as the spurious (textural) information of the input images, then regress out this information from the trained classifier’s output layer. Our method does not solely rely on textural features and can learn any transformation of the input space that is relevant for the downstream task of interest. Although CIM also bears resemblance to InfoMask (Taghanaki et al., 2019), our method is not limited to attention maps. (Kim et al., 2019) uses an MI-based objective to minimize the effect of spurious features, while (Pensia et al., 2020) additionally incorporates regularization via Fisher information to enforce robustness of the features. On the other hand, CIM uses an orthogonal approach to learn robust representations via higher-order correlations in the features. Information in representations There is a rich body of work which focuses on quantifying the amount of information necessary to perform well on a downstream task (Achille & Soatto, 2018). CIM is reminiscent of InfoMax (Linsker, 1988) and IB-based approaches (Tishby et al., 2000; Alemi et al., 2016) which propose to maximize the MI in the learned representations with the predictive random variables. In particular, (Chechik & Tishby, 2003; Chechik et al., 2005; Goyal et al., 2020) is most similar to our setup where they consider additional (nuisance) predictive information. Rather than using MI, we draw inspiration from the style transfer literature (Gatys et al., 2015; Li et al., 2017b; Krichene et al., 2018; Sastry & Oore, 2019) to compare correlations between feature activations of relevant versus irrelevant examples during training. 6 CONCLUSION In summary, we considered the problem of extracting representations with task-relevant information from high-dimensional data. We introduced a new framework, CIM, which learns input-space transformations of the data via a triplet loss to mitigate the effect of irrelevant input features on downstream performance. 
Through experiments on (1) classification with nuisance background information; (2) OOD domain generalization; and (3) preservation of uniform subgroup accuracy, we showed that CIM achieves good performance despite the presence of spurious correlations in the data and outperforms most relevant baselines. Additionally, we demonstrated that CIM is complementary to other representation learning frameworks such as VIB. For future work, it would be interesting to test different types of distance metrics for the triplet loss, to explore whether CIM can be used as an effective way to learn views for unsupervised contrastive learning, and to investigate label-free approaches for learning the input transformations. A ADDITIONAL EXPERIMENTAL DETAILS A.1 ARCHITECTURES In Figure 5, we show the detailed TN architectures used for RGB and point-cloud data. A.2 HYPERPARAMETER CONFIGURATIONS AND TRAINING DETAILS Variational Information Bottleneck (VIB). We used ResNet-50 as the encoder in VIB because most methods we compare CIM with are based on ResNet-50. We tested three different settings for VIB after the encoder: (a) apply KL regularization on the encoder’s last layer Lf of size (1, 2048) and compute the cross-entropy loss on the regularized feature vector; (b) apply KL on the feature vector similar to (a), but add 3 fully connected layers of (1024, ReLU, batch normalization), (512, ReLU, batch normalization), and (256, ReLU), then calculate the cross-entropy loss; (c) add a fully connected layer of size 512 after Lf, then follow the steps as in (a). For colored MNIST we used architecture (c) and trained the model using the Adam optimizer with a learning rate set to 0.0001 and a batch size of 64. For CelebA and Waterbirds, we used architecture (b) with the Adam optimizer, a learning rate of 0.001, and a batch size of 64. For all the above experiments we set the weight of the KL regularization term to be 0.001 and the standard deviation of ε to be 0.1. InfoMask. We used the default architecture (Taghanaki et al., 2019) except for changing the encoding part to be ResNet-50. For CelebA experiments, we used the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. For Waterbirds, we trained the model using the SGD optimizer with a learning rate of 0.001 and a momentum of 0.9. Similar to VIB, we set the KL term weight to be 0.001 and the standard deviation of ε to be 0.1. We tested different threshold values for the masking function and obtained the best results with just soft masking, i.e., when the threshold is set to zero. Point Cloud Experiments. For PointNet, we used the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. We trained both the original and CIM-based models with rotated and jittered input data. Colored MNIST. We resized images to (64 × 64 × 3) and trained all the models using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. For VIB, we set the KL divergence contribution weight to 0.001. Domain Generalization. We use ResNet-18 as the backbone to make a fair comparison with the state-of-the-art. We train CIM using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. We use the same training and test splits as those used in (Carlucci et al., 2019). For CIM-based models, we set λ = 0.0001; other hyper-parameters are summarized in Table 4. To control the level of input re-weighting, we minimize negative entropy on m with a Lagrangian multiplier ζ = 0.00001.
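As a companion to the VIB baseline configurations above, here is a minimal PyTorch-style sketch of a VIB head on top of a ResNet-50 encoder. It illustrates the standard VIB formulation rather than the authors' exact implementation: the bottleneck dimension and layer choices are assumptions, while the KL weight of 0.001 follows the appendix.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIBHead(nn.Module):
    # Maps a (B, 2048) encoder feature to a stochastic bottleneck z, then classifies.
    def __init__(self, feat_dim=2048, z_dim=256, num_classes=2):
        super().__init__()
        self.to_mu = nn.Linear(feat_dim, z_dim)
        self.to_logvar = nn.Linear(feat_dim, z_dim)
        self.classifier = nn.Linear(z_dim, num_classes)

    def forward(self, feats):
        mu, logvar = self.to_mu(feats), self.to_logvar(feats)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)  # reparameterization trick
        return self.classifier(z), mu, logvar

def vib_loss(logits, y, mu, logvar, kl_weight=0.001):
    # Cross-entropy plus KL(q(z|x) || N(0, I)), the variational surrogate for the
    # compression term of the IB objective in Eq. (2), weighted as in the appendix.
    ce = F.cross_entropy(logits, y)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return ce + kl_weight * kl
```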
B ADDITIONAL EXPERIMENTAL RESULTS B.1 BACKGROUND CHALLENGE We include for completeness the entirety of the results from (Xiao et al., 2020). We note that our results are not directly comparable with those from other architectures (e.g., WRN-50x2), as we used ResNet-50 as our base classifier. B.2 3D POINT CLOUD CLASSIFICATION In Table 6, we report the classification results on normal and rotated objects. As the first row of the table summarizes, PointNet performs well on average on the 40 classes. However, when we increase spurious correlations by rotating the objects, class-wise accuracies significantly drop, resulting in a 16.1% performance degradation in the average accuracy of the model (second row). After applying CIM, the spurious correlation between different categories is reduced, and thus the class-wise accuracy of challenging objects is improved (third row). B.3 ABLATION STUDIES We conduct an ablation study on the CelebA dataset to study the effects of the Gramian-based contrastive loss. As shown in Table 7, we find that learning a simple attention-like weighting matrix without any regularization performs better than ERM. We also observed that having both positive and negative samples in the TN’s loss function performs better compared to having only positives or negatives. It is worth mentioning that the negative samples have a greater impact on performance than the positives.
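For readers unfamiliar with the worst-group accuracy metric reported in Section 4.3 and the CelebA ablations above, here is a hedged sketch of the standard way it is computed. The group definitions (e.g., (label, spurious attribute) pairs such as blond/not-blond crossed with male/female on CelebA) follow common practice in this literature rather than anything specified in the paper's text.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    # preds, labels, groups: 1-D integer arrays of the same length, where `groups`
    # assigns each test example to a subgroup (e.g., a (label, attribute) pair).
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[int(g)] = float((preds[mask] == labels[mask]).mean())
    # Report the accuracy of the worst-performing subgroup, plus per-group numbers.
    return min(accs.values()), accs
```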
1. What is the focus of the review, and what are the reviewer's main concerns?
2. What are the strengths and weaknesses of the proposed representation learning framework?
3. How does the reviewer assess the impact of hyperparameters, specifically lambda and alpha?
4. Why did the authors omit VIB in some tables, and how does the reviewer suggest improving the comparison between methods?
5. What are the differences in setups between the original GDRO paper and the current work, and how do these affect the results?
6. Why did the authors choose not to test their method on the MNLI dataset, and how does the reviewer view this decision?
7. How does the reviewer evaluate the tuning of hyperparameters for the baseline methods, particularly VIB?
8. What is the purpose of the random digit reconstruction task, and how does the reviewer view its effectiveness in evaluating the method's performance?
9. How does the reviewer suggest improving the experiment's design and metrics to better correlate with the spurious red box and background?
10. Are there any limitations or biases in the reviewer's comments, and how might they be addressed?
Review
Review

They propose a representation learning framework to mitigate task-irrelevant information.

Computational costs: It would be nice to add a table of results comparing the performance of the methods. In CIM, since the method needs to construct \Sigma_XX, the method can be slower than the baselines; could you comment on this?

Questions to the authors:
- Could the authors comment, in Table 3, on why larger models are obtaining worst-group accuracy?
- The method has two hyper-parameters, lambda and alpha; could the authors add a discussion of their impact?
- In Table 1, VIB seems to be a basic baseline that would be nice to have; could you add VIB to Table 1? I wonder why VIB is omitted in some of the tables.
- The numbers reported in the original GDRO paper (Sagawa et al., 2019) are higher than those reported in Table 3; could you specify which setup from their work you report and explain why the results differ from Sagawa et al., 2019? (Sagawa et al., 2019 had multiple setups in their work.)
- Sagawa et al., 2019, which is a strong baseline for comparison with your work, also had MNLI as one of their experiments; could you explain why you did not test on this dataset?
- In Table 2, JiGen is based on AlexNet, not ResNet-18, so a direct comparison is not meaningful; you need to use the same encoder to provide a fair comparison between methods. Again, could the authors comment on why VIB is omitted from Table 2?
- On page 13, A.2, what is \epsilon? Could you confirm which models you have used for the baselines? Do they all use the same encoder? Have you tuned \epsilon for your method? Overall, I think the authors have tuned several hyper-parameters like batch_size, lr, alpha, \epsilon, \lambda for their method, but this is not done for baselines like VIB.
- In Appendix A.2, page 14 above Table 4, what is m? What is re-weighting tuning? Is this fine-tuning also done for the baselines?
- In Figure 3, could the authors comment on why they have not used the same setup as Section 5.2 of the Arjovsky et al. paper, where they have multiple digits, and instead decided to use the simpler setup with two digits? Kim et al., 2019 also used multiple digits.

Reasons to reject: I have doubts about the way VIB is tuned in this work. Looking into the appendix, I do not see that the parameter \beta from the VIB paper, which specifies the weight between the cross-entropy loss and the compression loss, is tuned. Given that the proposed method has two hyper-parameters, for a fair comparison the baselines' hyper-parameters need to be tuned well. Also, looking at Table 4, it seems that the authors have tuned the learning rate, batch size, etc. for CIM but not for the other baseline methods. In Section 3.2, the task is to reconstruct a random digit from the input class, which I cannot make much sense of; could the authors comment on why a random digit is reconstructed from the input? Since the digit that needs to be reconstructed is random, how would you evaluate the performance of the method? The other shortcoming of this experiment is that the authors show only a handful of figures, which do not even match between the first and second rows, so one cannot compare them, and they do not report any evaluation metric. Also, to me the black background and the red square are both spurious, since the goal is to reconstruct the digit, and it is not clear why the authors think one is a relevant signal while the other is spurious in Section 3.1. Still, they need to report a proper evaluation metric on how the reconstructed images are correlated with the spurious red box and background.
ICLR
Title Learning Task-Relevant Features via Contrastive Input Morphing Abstract A fundamental challenge in artificial intelligence is learning useful representations of data that yield good performance on a downstream classification task, without overfitting to spurious input features. Extracting task-relevant predictive information becomes particularly challenging for high-dimensional, noisy, real-world data. We propose Contrastive Input Morphing (CIM), a representation learning framework that learns input-space transformations of the data to mitigate the effect of irrelevant input features on downstream performance via a triplet loss. Empirically, we demonstrate the efficacy of our approach on various tasks which typically suffer from the presence of spurious correlations, and show that CIM improves the performance of other representation learning methods such as variational information bottleneck (VIB) when used in conjunction. N/A A fundamental challenge in artificial intelligence is learning useful representations of data that yield good performance on a downstream classification task, without overfitting to spurious input features. Extracting task-relevant predictive information becomes particularly challenging for high-dimensional, noisy, real-world data. We propose Contrastive Input Morphing (CIM), a representation learning framework that learns input-space transformations of the data to mitigate the effect of irrelevant input features on downstream performance via a triplet loss. Empirically, we demonstrate the efficacy of our approach on various tasks which typically suffer from the presence of spurious correlations, and show that CIM improves the performance of other representation learning methods such as variational information bottleneck (VIB) when used in conjunction. 1 INTRODUCTION At the heart of modern machine learning is the problem of representation learning, or extracting features from raw data that enable predictions with high accuracy (Hinton & Salakhutdinov, 2006; Vincent et al., 2010; Chen et al., 2016; Van Den Oord et al., 2017; Oord et al., 2018). Despite the recent successes of deep neural networks (Dean et al., 2012; LeCun et al., 2015), their rapidly growing size and large-scale training procedures, coupled with high-dimensional data sources, pose significant challenges in learning models that perform well on a given task without overfitting to spurious input features (Zhang et al., 2016; Ilyas et al., 2019; Geirhos et al., 2020). As a result, trained networks have been shown to fail spectacularly on out-of-domain generalization tasks (Beery et al., 2018; Rosenfeld et al., 2018) and for rare subgroups present in data (Hashimoto et al., 2018; Goel et al., 2020), among others. A wide range of methods have been proposed to tackle this problem, including regularization, data augmentation, leveraging causal explanations, and self-training (Srivastava et al., 2014; Chen et al., 2020b; Sagawa et al., 2019; Chen et al., 2020b). In particular, prior art places a heavy emphasis on lossless access to the input data during training, then constructing a high-level representation which extracts the necessary information. Yet it is reasonable to assume that in some cases, we desire access to only a subset of the input which is relevant to the task – for example, the background color in an image of a “7” is unnecessary for identifying its digit class. 
The fundamental challenge, then, is discerning which parts of the input are relevant without requiring privileged information (e.g., the nature of the downstream task) at training time. Our approach, Contrastive Input Morphing (CIM), uses labeled supervision to learn input-space transformations of the data that mitigate the effect of irrelevant input features on predictive performance. Though the Data Processing Inequality (Cover, 1999) states that no amount of input processing can increase its mutual information (MI) with the predictive variable, we propose to transform the data in such a way that it makes it easier for the model to extract the relevant predictive information for the downstream task – that is, we attempt to increase the amount of usable information for our representations (Xu et al., 2020). We emphasize that our method does not assume access to the exact nature of the downstream task, such as attribute labels for rare subgroups. The key workhorse of CIM is an auxiliary network called the Transformation Network (TN). Leveraging ideas from neural style transfer (Gatys et al., 2015; Li et al., 2017b), the TN is trained via a triplet loss on feature correlation matrices (Schroff et al., 2015; Koch, 2015). Intuitively, this objective uses the shared information from competing classes (“negative examples”) as a proxy for spurious correlations, while leveraging the shared information within the same class (“positive examples”) as a heuristic for task-relevancy (Khosla et al., 2020). The framework for CIM is quite general: it is (1) complementary to MI-based representation learning techniques such as variational information bottleneck (VIB) (Alemi et al., 2016); and (2) can be used as a plug-in module for training any classifier. For the flowchart of the training procedure of the CIM refer to Figure 1. Empirically, we evaluate CIM on three settings that suffer from spurious correlations: classification with nuisance background information, out-of-domain (OOD) generalization, and improving accuracy uniformly across subgroups. In the first task, CIM outperforms ERM on colored MNIST and improves over the ResNet-50 baseline on the Background Challenge (Xiao et al., 2020). Similarly, CIM outperforms relevant baselines using ResNet-18 on the VLCS dataset (Torralba & Efros, 2011) for OOD generalization. For subgroup accuracies, CIM outperforms both supervised and unsupervised methods on CelebA (Liu et al., 2015) in terms of worst-group accuracy (by 1.7% and 41.4% respectively), while outperforming unsupervised methods by up to 12.9% on Waterbirds. In summary, our contributions in this work can be outlined as follows: 1. We propose CIM, a method demonstrating that lossy access to input data helps extract good task-relevant representations. 2. We show that CIM is complementary to existing methods, as the learned transformations can be leveraged by other MI-based representation learning techniques such as VIB. 3. We empirically verify the robustness of the learned representations to spurious correlations on a variety of tasks (Section 4). 2 PRELIMINARIES We consider the standard supervised learning setup where x ∈ X ⊆ Rd is the input variable, and y ∈ Y = {1, . . . , k} is the set of corresponding labels. We assume access to samples D = {(xi, yi)}ni=1 drawn from an underlying (unknown) joint distribution pdata(x, y), and use capital letters to denote random variables, e.g. X and Y . We use P (X,Y ) to denote their joint distribution as well as P (·) for the respective marginal (e.g. 
P (X) for the marginal distribution of X). 2.1 BACKGROUND AND PROBLEM SETUP Our goal is to learn a classifier fθ : X → Y , where fθ ∈ Θ achieves low error according to some loss function ` : Θ× (X × Y)→ R. Specifically, we minimize the empirical risk: Lsup(θ) = Ex,y∼pdata(x,y)[`(fθ(x), y)] ≈ n∑ i=1 `(fθ(xi), yi) (1) In addition to good classification performance, we aim to learn representations of the data, which: (a) are highly predictive of the downstream task; and (b) do not rely on spurious input features. That is, the learned representations should be task-relevant. Information bottleneck. A natural way to measure “task-relevance” in random variables is to consider the total amount of information that a compressed (stochastic) representation Z contains about the input variable X and the output variable Y . In particular, information bottleneck (IB) (Tishby et al., 2000; Chechik et al., 2005; Alemi et al., 2016) is a framework which utilizes mutual information (MI) to quantify this dependence via the following objective: min P (Z|X) I(X;Z)− βI(Z;Y ) (2) where β > 0 controls the importance of obtaining good performance on the downstream task. Given two random variables X and Y , I(X;Y ) is computed as DKL(P (X,Y )||P (X)P (Y )), where DKL denotes the Kullback-Leibler (KL) divergence between two probability distributions. The IB framework can be extended to account for additional sources of input data that is known to contain irrelevant information about the predictive task. This setting, known as IB with side information (Chechik & Tishby, 2003), adds a term in the IB objective, which simultaneously minimizes the MI between this nuisance variable and the learned representation. Concretely, given random variables (X,Y+, Y−) where Y+ denotes the task of interest and Y− denotes a spurious auxiliary variable, the objective becomes: min P (Z|X) I(X;Z)− β(I(Z;Y+)− γI(Z;Y−)) (3) where γ is another tunable hyperparameter for the nuisance task. We note that this framework bears resemblance to triplet-based losses such as (Schroff et al., 2015; Koch, 2015), as well as contrastive learning approaches that leverage MI maximization (Linsker, 1988; Hjelm et al., 2018; Oord et al., 2018; Tian et al., 2019; Khosla et al., 2020). It is also in line with the InfoMin principle suggested by (Tian et al., 2020), for learning good views in self-supervised contrastive learning. Although the MI framework is compelling, as it captures arbitrarily complex dependencies between random variables, there exist several challenges with their use in practice. The difficulty of computing MI in high dimensions, for example, is well-documented, demands the use of various neural estimators (Barber & Agakov, 2003; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2019). Additionally, approaches such as IB posit restrictive assumptions on the relationships between Y+ and Y−; namely, that they must be conditionally independent given X , which is difficult to enforce. 3 CONTRASTIVE INPUT MORPHING Motivated by the above challenges, we propose to approximate the information content between task-relevant and -irrelevant features via correlations in higher-dimensional feature spaces. This procedure helps our method learn the appropriate input-space transformations. 3.1 MEASURING RELEVANCE VIA CORRELATIONS Another way to measure “task-relevance” in random variables is to consider their conditional dependencies, as captured by their covariances. 
Specifically, consider a feature map φ : X → Rd that takes in an input and returns a representation φ(x) that is of the same dimensionality as the original input x. We can use this feature map to construct a covariance (Gram) matrix of φ(x), where ΣXX = φ(x)Tφ(x). Although the covariance only measures linear dependencies between the input, we can capture more complex relationships via an arbitrarily complex feature map φ. Training Procedure: For the Transformation Network (TN), we utilize a convolutional autoencoder to obtain a reconstructed image of the same dimensionality as the input, as shown in Figure 2. Our method operates over triplets: (x, x+, x−), where (x, x+) denote examples from the same class while x− is an example from a different class than x. We use a supervised contrastive loss to train the network, similar in spirit to (Khosla et al., 2020). Specifically, we learn an intermediate feature map φ : X → R(H×W )×C using the TN that takes in an input x and returns a representation φ(x) that is of the same dimensionality as the original input, where (H ×W ) denotes the height and width of the image, and C denotes the number of channels. We use this feature map to construct a Gram matrix of the input features, where ΣXX = φ(x)Tφ(x). Then, the triplet loss encourages the positive examples’ Gram matrix representations to move closer together in embedding space to those of the input, while ensuring that the negative examples’ representations are further apart: Lcon(φ) = min φ ||ΣXX ,ΣX+X+ ||2 −max(α, ||ΣXX ,ΣX−X− ||2) (4) for some margin α > 0. The output of φ(·) is then passed through a 1 × 1 2-D convolution layer with a sigmoid activation to produce a (single channel) soft “mask” m(x), which is then multiplied with the original input image x to obtain the final representation ψ(x) = x ◦ m(x). Finally, the classifier fθ(·)is trained on the transformed input image ψ(x). Learning Objective: The overall loss function can be written as: L(φ, θ) = λLcon(φ) + Lsup(θ) (5) where λ is a multiplier which controls the contribution of the TN loss from Lcon(φ) from Equation 4 and Lsup(θ) is the standard cross entropy loss for multi-class classification. The parameters for the transformation network (φ) and the classifier (θ) are trained jointly. In our experiments, we found that values of λ = 0.0001 worked well. It is well known that Lcon can be interpreted as minimizing a specific form of Maximum Mean Discrepancy (MMD) (Gatys et al., 2015; Li et al., 2017b). For the identity map φ(·), Equation 5 is equivalent to minimizing MMD between two kernelized inputs where the specific kernel is the second-order polynomial kernel. In this way, CIM’s Transformation Network can also be seen as minimizing the distance between the mean embeddings of the underlying distributions for X and X+ while simultaneously maximizing the distance for those of X and X−. 3.2 A MOTIVATING EXAMPLE We present a concrete example for the intuition behind our approach using the MNIST dataset (LeCun, 1998). We first construct a challenging input reconstruction task, in which a red square is placed on the bottom right of all samples and the model is trained to reconstruct a random digit that is different from the input’s class. As the TN is trained as an independent autoencoding module, we find that the TN learns to pick up shared signals across inputs (i.e., the black background) before converging to the red square as the source of shared (spurious) correlations among examples. 
Next, we evaluate whether we can remove this source of variation for digit classification by passing a lossy version of the input into the classifier (Figure 1). As shown in Figure 2 (bottom), the input transformation learned by the model (i.e. ψ(x) = m ◦ x) de-emphasizes the shared features while highlighting the task-relevant features. 4 EXPERIMENTAL RESULTS For our experiments, we are interested in empirically investigating the following questions: 1. Are CIM’s learned representations robust to spurious correlations in the input features? 2. Does the input transformation learned by CIM improve domain generalization? 3. How well can CIM preserve classification accuracy across subgroups? Datasets: We consider various datasets to test the effectiveness of our method. We first construct a colored variant of MNIST to demonstrate that CIM successfully ignores nuisance background information in a digit classification task, then further explore this finding on the Background Challenge (Xiao et al., 2020). Next, we evaluate CIM on the VLCS dataset (Torralba & Efros, 2011) to demonstrate that the input transformations help in learning representations that generalize to outof-domain distributions. Then, we study two benchmark datasets, CelebA (Liu et al., 2015) and Waterbirds (Wah et al., 2011; Zhou et al., 2017; Sagawa et al., 2019), to show that CIM preserves subgroup accuracies. Models: We use different classifier architectures depending on the downstream task. While ResNet50 is the default choice for most datasets, we also utilize Inception-ResNetV2 (Szegedy et al., 2016) to obtain better performance, ResNet-18 for a fair comparison with existing OOD generalization techniques, and PointNet (Qi et al., 2017) for 3D point cloud classification. We also experiment with Variational Information Bottleneck (VIB) (Alemi et al., 2016) as both a complementary and competing approach to CIM, and use ResNet-50 as the VIB encoder. We refer the reader to Appendix A.2 for additional details on model architectures and hyperparameters. We note that the transformed inputs and the feature maps Φ are semantically meaningful as shown in Figure 4. 4.1 CLASSIFICATION WITH NUISANCE BACKGROUND INFORMATION Colored MNIST: First, we assess whether CIM can distinguish between two MNIST digit classes (2 and 7) in the presence of a spurious input feature (background color). As outlined in Figure 3(a), we construct a dataset such that a classifier will achieve low accuracy by relying on background color. For a given proportion α, we color α% of all digits labeled “2” in the training set with blue backgrounds, and color the remaining (1−α)% labeled “7” with yellow backgrounds. We vary this proportion by α = {0.5%, 1%, 2%}. At test time, we color all the digits labeled “2” in blue, while coloring the “7” digits in yellow. As shown in Figure 3 (b), CIM is better able to utilize relevant information for the downstream classification task in comparison to ERM by 13%, 10.5%, and 3% on models trained with α = {0.5%, 1%, 2%} respectively. Perhaps more interestingly, a hybrid approach of VIB + CIM outperforms all other methods – this suggests that the input transformations learned by CIM are indeed preserving task-relevant information that can be better leveraged by InfoMax methods such as VIB. More experimental details can be found in Appendix A.2. The Background Challenge: Next, we evaluate whether the favorable results from MNIST translate to a more challenging setup, and test CIM on the Background Challenge (Xiao et al., 2020). 
The Background Challenge is a public dataset consisting of ImageNet-9 (Deng et al., 2009) test sets with varying amounts of foreground and background signals, designed to measure the extent to which deep classifiers rely on spurious features for image classification. As shown in Table 1, CIM outperforms the original ResNet-50’s performance by 4.1% on Mixed-rand, 0.8% on Mixed-same, and 0.5% on the original test set. Mixed-rand refers to the setting where the foreground is overlaid onto a random background, while Mixed-same corresponds to the test set where the foreground is placed on a background from the same class. These results demonstrate that CIM indeed learns task-relevant representations without relying on nuisance background information. 4.2 CIM GENERALIZES OVER DIFFERENT DOMAINS In this experiment, we evaluate CIM on OOD generalization performance using the VLCS benchmark (Torralba & Efros, 2011). VLCS consists of images from five object categories shared by the PASCAL VOC 2007, LabelMe, Caltech, and Sun datasets, which are considered to be four separate domains. We follow the standard evaluation strategy used in (Carlucci et al., 2019), where we partition each domain into a train (70%) and test set (30%) by random selection from the overall dataset. As summarized in Table 2, CIM outperforms state-of-the-art methods based on ResNet-18 on each domain, bolstering our claim that using a lossy transformation of the input is helpful for learning task-relevant representations that generalize across domains. 4.3 CIM PRESERVES SUBGROUP PERFORMANCE In this experiment, we investigate whether representations learned by CIM perform well on all subgroups on the CelebA and Waterbirds datasets. Preserving good subgroup-level accuracy is challenging for naive ERM-based methods, given their tendency to latch onto spurious correlations (Kim et al., 2019; Arjovsky et al., 2019; Sagawa et al., 2020; Chen et al., 2020b). Most prior works leverage privileged information such as group labels to mitigate this effect (Ben-Tal et al., 2013; Vapnik & Izmailov, 2015; Sagawa et al., 2019; Goel et al., 2020; Xiao et al., 2020). As TN in CIM is trained to capture task-relevant features and minimize nuisance correlations between classes, we hypothesize that CIM should perform well at the subgroup level even without explicit group label information. For a fair comparison with the prior work, we use ResNet-50 as the backbone classifier for the CIM , but also train both ERM and CIM with an Inception-ResNetV2 (Szegedy et al., 2016) backbone to assess the impact of using a larger model (denoted by ERM* and CIM*, respectively). We also use ResNet-50 for VIB’s encoder and InfoMask’s discriminator (see Appendix A.2). Table 3 shows that CIM outperforms both supervised and unsupervised methods on CelebA in terms of worst-group accuracy (2.4% improvement over CAMEL, the top-performing supervised model), and outperforms unsupervised models while significantly improving over ERM on the Waterbirds dataset (16.7% increase). We emphasize that the favorable performance of CIM is obtained without using subgroup labels, in contrast with previous approaches. We refer the reader to Appendix B.3 for further details and ablation studies regarding the different components of our method. 5 RELATED WORK Our work bridges several lines of work in contrastive learning and learning representations that are robust to spurious correlations. Contrastive representation learning. 
There has been a flurry of recent work in contrastive methods for representation learning, which encourages an encoder network to map “positive” examples closer together in a latent embedding space while spreading the “negative” examples further apart (Oord et al., 2018; Hjelm et al., 2018; Wu et al., 2018; Tian et al., 2019; Arora et al., 2019; Chen et al., 2020a). Included are triplet-based losses (Schroff et al., 2015; Koch, 2015) and noise contrastive estimation losses (Gutmann & Hyvärinen, 2010). In particular, recent work (Tian et al., 2020; Wu et al., 2020) has shown that minimizing MI between views while maximizing predictive information of the representations with respect to the downstream task, leads to performance improvements, similar to IB (Chechik & Tishby, 2003). While most contrastive approaches are selfsupervised, (Khosla et al., 2020) utilizes class labels as part of their learning procedure, similar to our approach. We emphasize that CIM is not meant to be directly comparable to the aforementioned techniques, as our objective is to learn input transformations of the data that are task-relevant. Robustness of representations Several works have considered the problem of learning relevant features that do not rely on spurious correlations with the predictive task (Heinze-Deml & Meinshausen, 2017; Sagawa et al., 2020; Chen et al., 2020b). Though (Wang et al., 2019) is similar in spirit to CIM, they utilize gray-level co-occurrence matrices as the spurious (textural) information of the input images, then regress out this information from the trained classifier’s output layer. Our method does not solely rely on textural features and can learn any transformation of the input space that is relevant for the downstream task of interest. Although CIM also bears resemblance to InfoMask (Taghanaki et al., 2019), our method is not limited to attention maps. (Kim et al., 2019) uses an MI-based objective to minimize the effect of spurious features, while (Pensia et al., 2020) additionally incorporates regularization via Fisher information to enforce robustness of the features. On the other hand, CIM uses an orthogonal approach to learn robust representations via higher-order correlations in the features. Information in representations There is a rich body of work which focuses on quantifying the amount of information necessary to perform well on a downstream task (Achille & Soatto, 2018). CIM is reminiscent of InfoMax (Linsker, 1988) and IB-based approaches (Tishby et al., 2000; Alemi et al., 2016) which propose to maximize the MI in the learned representations with the predictive random variables. In particular, (Chechik & Tishby, 2003; Chechik et al., 2005; Goyal et al., 2020) is most similar to our setup where they consider additional (nuisance) predictive information. Rather than using MI, we draw inspiration from the style transfer literature (Gatys et al., 2015; Li et al., 2017b; Krichene et al., 2018; Sastry & Oore, 2019) to compare correlations between feature activations of relevant versus irrelevant examples during training. 6 CONCLUSION In summary, we considered the problem of extracting representations with task-relevant information from high-dimensional data. We introduced a new framework, CIM, which learns input-space transformations of the data via a triplet loss to mitigate the effect of irrelevant input features on downstream performance. 
Through experiments on (1) classification with nuisance background information; (2) OOD domain generalization; and (3) preservation of uniform subgroup accuracy, we showed that CIM achieves good performance despite the presence of spurious correlations in the data and outperforms most relevant baselines. Additionally, we demonstrated that CIM is complementary to other representation learning frameworks such as VIB. For future work, it would be interesting to test different types of distance metrics for the triplet loss, to explore whether CIM can be used as an effective way to learn views for unsupervised contrastive learning, and to investigate label-free approaches for learning the input transformations. A ADDITIONAL EXPERIMENTAL DETAILS A.1 ARCHITECTURES In Figure 5, we show the detailed TN architectures used for RGB and point-cloud data. A.2 HYPERPARAMETER CONFIGURATIONS AND TRAINING DETAILS Variational Information Bottleneck (VIB). We used ResNet-50 as the encoder in VIB because most methods we compare CIM with are based on ResNet-50. We tested two different settings for VIB after the encoder: (a) apply KL regularization on encoder’s last layer Lf of size (1, 2048) and compute the cross-entropy loss on the regularized feature vector; (b) apply KL on the feature vector similar to (a), but add 3 fully connected layers of (1024, ReLu, batch normalization), (512, ReLu, batch normalization), and (256, ReLu), then calculate the cross-entropy loss; (c) add a fully connected layer of size 512 after Lf , then follow the steps as in (a). For colored MNIST we used architecture (c) and trained the model using Adam optimizer with a learning rate set to 0.0001 and batch size of 64. For celebA and Waterbirds, we used architecture (b) with Adam optimizer and learning rate of 0.001 and batch size of 64. For all the above experiments we set the weight for KL regularization term to be 0.001 and the standard deviation of to be 0.1. InfoMask. We used the default architecture (Taghanaki et al., 2019) except for changing the encoding part to be ResNet-50. For celebA experiments, we used Adam optimizer with a learning rate of 0.0001 and a batch size of 32. For Waterbirds, we trained the model using SGD optimizer with a learning rate of 0.001 and a momentum of 0.9. Similar to VIB, we set the KL term weight to be 0.001 and the standard deviation of to be 0.1. We tested different threshold values for the masking function and obtained the best results with just soft masking i.e. when the threshold is set to zero. Point Cloud Experiments. For PointNet, we used Adam optimizer with a learning rate of 0.0001 and a batch size of 32. We trained both the original and CIM based model with rotated and jittered input data. Colored MNIST. We resized images to (64 × 64 × 3) and trained all the models using Adam optimizer with a learning rate of 0.0001 and batch size of 64. For VIB, we set the KL divergence contribution weight to 0.001. Domain Generalization. We use ResNet-18 as the backbone to make a fair comparison with stateof-the-art. We train CIM using Adam optimizer with learning rate of 0.0001 and batch size of 64. We use the same training and test splits as those used in the work with (Carlucci et al., 2019). For CIM-based models, we set λ = 0.0001 and other hyper-parameters are summarized in Table 4. To control the level of input re-weighting, we minimize negative entropy on m with a Lagrangian multiplier ζ = 0.00001. 
B ADDITIONAL EXPERIMENTAL RESULTS B.1 BACKGROUND CHALLENGE We include for completeness the entirety of the results from (Xiao et al., 2020). We note that our results are not directly comparable with those from other architectures (e.g., WRN-50x2), as we used ResNet-50 as our base classifier. B.2 3D POINT CLOUD CLASSIFICATION In Table 6, we report the classification results on normal and rotated objects. As the first row of the table summarizes, PointNet performs well on average on the 40 classes. However, when we increase spurious correlations by rotating the objects, class-wise accuracies drop significantly, resulting in a 16.1% degradation in the average accuracy of the model (second row). After applying CIM, the spurious correlation between different categories is reduced, and thus the class-wise accuracy of challenging objects is improved (third row). B.3 ABLATION STUDIES We conduct an ablation study on the CelebA dataset to study the effects of the Gramian-based contrastive loss. As shown in Table 7, we find that learning a simple attention-like weighting matrix without any regularization performs better than ERM. We also observed that having both positive and negative samples in the TN's loss function performs better than having only positives or negatives. It is worth mentioning that the negative samples have a greater impact on performance than the positives.
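For readers who want to reproduce the B.3 ablation, a rough sketch of the full, positives-only, and negatives-only variants of the TN loss in Eq. (4) follows. The helper names (gram, gram_dist) are hypothetical, and the norm in Eq. (4) is read here as the norm of the difference between Gram matrices.

```python
import torch

def gram(feats):
    # feats: (B, HW, C) feature maps produced by the transformation network
    return feats.transpose(1, 2) @ feats               # (B, C, C) Gram matrices

def gram_dist(a, b):
    # Per-example Frobenius distance between Gram matrices.
    return torch.norm(gram(a) - gram(b), dim=(1, 2))

def loss_full(f, f_pos, f_neg, alpha):
    # Both terms, as written in Eq. (4) of the paper.
    return (gram_dist(f, f_pos) - torch.clamp(gram_dist(f, f_neg), min=alpha)).mean()

def loss_pos_only(f, f_pos):
    # Ablation: pull positives together only.
    return gram_dist(f, f_pos).mean()

def loss_neg_only(f, f_neg, alpha):
    # Ablation: push negatives apart only.
    return (-torch.clamp(gram_dist(f, f_neg), min=alpha)).mean()
```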
1. What is the main contribution of the paper regarding contrastive learning? 2. What are the strengths of the proposed approach, particularly in its technical soundness and improvements in downstream classification, OOD generalization, and subgroup performances? 3. Do you have any concerns or comments regarding the sampling of positive and negative examples, the assumption of access to information, and the difference between supervised and self-supervised contrastive methods? 4. How does the triplet objective enforce similarity in textures of positive examples, and what would be the result of applying the triplet objective directly on perceptual features? 5. What is the role of α in the negative examples' distance in Eq. 4, and how does it determine the distance for negative hard-mining? 6. Can you explain the experimental setup and provide more details on selecting positive and negative samples for each experiment, especially in Sec. 4.1? 7. Are there any other questions or suggestions you have regarding the paper's content or presentation?
Review
Review Summary The paper proposes a contrastive approach to reduce the effects of spurious correlations on a downstream prediction task. The method employs a triplet loss on the Gram matrices of image features. This objective minimizes the distance between the Gram matrices of the positive samples and maximizes this distance for negative samples. Consequently, this contrastive setting helps to extract features that are invariant across the positive examples. The paper further investigates the robustness of learning such invariant features. Strong Points I find the approach interesting and technically sound. The method also shows improvements in downstream classification, OOD generalization, and subgroup performance. Concerns and Comments: The paper does not provide enough details on how positive and negative examples are sampled. This is very important for the relevant downstream classification task, as shown in the table in Appendix B.3. I would be keen to know the details of selecting positive and negative samples for each experiment. Also, the assumption of having access to the information regarding positive and negative examples is too strong. I understand why it is helpful for the method, but can the authors provide motivation or some realistic setting where this assumption may hold? I would like the authors to comment on how this method differs from other invariant representation learning methods. Self-supervised contrastive methods also learn certain invariances; does the supervised contrastive setting provide some benefit? I am probably confused about this as I didn't find an explicit explanation of how positive and negative samples are selected. In addition to the proposed supervised-contrastive regularization, I am curious whether the authors tried a self-supervised contrastive scheme. The triplet objective is defined on the Gram matrices, which correspond to capturing textures of images (in the style transfer literature). Would it be right to say that the proposed method enforces similarity in textures of the positive examples? This would imply that the classifier is forced to be more biased towards texture information [1]. I am curious whether the authors tried the triplet objective directly on the perceptual features instead of their Gram matrices. It would be nice to see the difference. What is the role of α in the negative examples' distance in Eq. 4? Is it to determine the distance for negative hard-mining? I think the experiment in Fig. 2 (bottom) shows that the model is creating some variance in the background region. To me, this resultant image (m ◦ x) seems more difficult to classify than the black input image. Are the red squares at the bottom of the MNIST digits correlated with the digits, i.e., do squares of different colors (or at different positions) appear with different digits? If not, then how does the presence of the red square at the bottom of MNIST images make the task challenging? I think it would make more sense to show the behavior of a model trained on red squares (spuriously correlated with digits) and tested with blue or some other color of squares. The experimental description in Sec. 4.1 is hard to understand. Also, the details do not seem consistent with Fig. 3(a): for example, the text says that at test time the digits “2” are colored blue, but in the figure they are colored yellow. Other experiments lack details on the datasets and the experimental setup, such as how positive and negative samples are selected. For example, it is unclear how the subgroup tasks are defined.
Some of the text is misplaced and focuses on details that are not relevant to the method. For example: a. The objective function L_con is referenced in the caption of Figure 1 without being defined or referenced in Figure 1 itself. b. α is used in Eq. 4 as a margin for the distance to negative examples; however, the term α in Fig. 4, which refers to the experiment in Sec. 4.1, confused me. [1] Geirhos et al. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
ICLR
Title Learning Task-Relevant Features via Contrastive Input Morphing Abstract A fundamental challenge in artificial intelligence is learning useful representations of data that yield good performance on a downstream classification task, without overfitting to spurious input features. Extracting task-relevant predictive information becomes particularly challenging for high-dimensional, noisy, real-world data. We propose Contrastive Input Morphing (CIM), a representation learning framework that learns input-space transformations of the data to mitigate the effect of irrelevant input features on downstream performance via a triplet loss. Empirically, we demonstrate the efficacy of our approach on various tasks which typically suffer from the presence of spurious correlations, and show that CIM improves the performance of other representation learning methods such as variational information bottleneck (VIB) when used in conjunction. 1 INTRODUCTION At the heart of modern machine learning is the problem of representation learning, or extracting features from raw data that enable predictions with high accuracy (Hinton & Salakhutdinov, 2006; Vincent et al., 2010; Chen et al., 2016; Van Den Oord et al., 2017; Oord et al., 2018). Despite the recent successes of deep neural networks (Dean et al., 2012; LeCun et al., 2015), their rapidly growing size and large-scale training procedures, coupled with high-dimensional data sources, pose significant challenges in learning models that perform well on a given task without overfitting to spurious input features (Zhang et al., 2016; Ilyas et al., 2019; Geirhos et al., 2020). As a result, trained networks have been shown to fail spectacularly on out-of-domain generalization tasks (Beery et al., 2018; Rosenfeld et al., 2018) and for rare subgroups present in data (Hashimoto et al., 2018; Goel et al., 2020), among others. A wide range of methods have been proposed to tackle this problem, including regularization, data augmentation, leveraging causal explanations, and self-training (Srivastava et al., 2014; Chen et al., 2020b; Sagawa et al., 2019; Chen et al., 2020b). In particular, prior art places a heavy emphasis on lossless access to the input data during training, then constructing a high-level representation which extracts the necessary information. Yet it is reasonable to assume that in some cases, we desire access to only a subset of the input which is relevant to the task – for example, the background color in an image of a “7” is unnecessary for identifying its digit class.
The fundamental challenge, then, is discerning which parts of the input are relevant without requiring privileged information (e.g., the nature of the downstream task) at training time. Our approach, Contrastive Input Morphing (CIM), uses labeled supervision to learn input-space transformations of the data that mitigate the effect of irrelevant input features on predictive performance. Though the Data Processing Inequality (Cover, 1999) states that no amount of input processing can increase its mutual information (MI) with the predictive variable, we propose to transform the data in such a way that it makes it easier for the model to extract the relevant predictive information for the downstream task – that is, we attempt to increase the amount of usable information for our representations (Xu et al., 2020). We emphasize that our method does not assume access to the exact nature of the downstream task, such as attribute labels for rare subgroups. The key workhorse of CIM is an auxiliary network called the Transformation Network (TN). Leveraging ideas from neural style transfer (Gatys et al., 2015; Li et al., 2017b), the TN is trained via a triplet loss on feature correlation matrices (Schroff et al., 2015; Koch, 2015). Intuitively, this objective uses the shared information from competing classes (“negative examples”) as a proxy for spurious correlations, while leveraging the shared information within the same class (“positive examples”) as a heuristic for task-relevancy (Khosla et al., 2020). The framework for CIM is quite general: it is (1) complementary to MI-based representation learning techniques such as variational information bottleneck (VIB) (Alemi et al., 2016); and (2) can be used as a plug-in module for training any classifier. For the flowchart of the training procedure of the CIM refer to Figure 1. Empirically, we evaluate CIM on three settings that suffer from spurious correlations: classification with nuisance background information, out-of-domain (OOD) generalization, and improving accuracy uniformly across subgroups. In the first task, CIM outperforms ERM on colored MNIST and improves over the ResNet-50 baseline on the Background Challenge (Xiao et al., 2020). Similarly, CIM outperforms relevant baselines using ResNet-18 on the VLCS dataset (Torralba & Efros, 2011) for OOD generalization. For subgroup accuracies, CIM outperforms both supervised and unsupervised methods on CelebA (Liu et al., 2015) in terms of worst-group accuracy (by 1.7% and 41.4% respectively), while outperforming unsupervised methods by up to 12.9% on Waterbirds. In summary, our contributions in this work can be outlined as follows: 1. We propose CIM, a method demonstrating that lossy access to input data helps extract good task-relevant representations. 2. We show that CIM is complementary to existing methods, as the learned transformations can be leveraged by other MI-based representation learning techniques such as VIB. 3. We empirically verify the robustness of the learned representations to spurious correlations on a variety of tasks (Section 4). 2 PRELIMINARIES We consider the standard supervised learning setup where x ∈ X ⊆ Rd is the input variable, and y ∈ Y = {1, . . . , k} is the set of corresponding labels. We assume access to samples D = {(xi, yi)}ni=1 drawn from an underlying (unknown) joint distribution pdata(x, y), and use capital letters to denote random variables, e.g. X and Y . We use P (X,Y ) to denote their joint distribution as well as P (·) for the respective marginal (e.g. 
$P(X)$ for the marginal distribution of $X$). 2.1 BACKGROUND AND PROBLEM SETUP Our goal is to learn a classifier $f_\theta: \mathcal{X} \to \mathcal{Y}$, where $f_\theta \in \Theta$ achieves low error according to some loss function $\ell: \Theta \times (\mathcal{X} \times \mathcal{Y}) \to \mathbb{R}$. Specifically, we minimize the empirical risk: $\mathcal{L}_{sup}(\theta) = \mathbb{E}_{x,y \sim p_{data}(x,y)}[\ell(f_\theta(x), y)] \approx \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i)$ (1) In addition to good classification performance, we aim to learn representations of the data which: (a) are highly predictive of the downstream task; and (b) do not rely on spurious input features. That is, the learned representations should be task-relevant. Information bottleneck. A natural way to measure “task-relevance” in random variables is to consider the total amount of information that a compressed (stochastic) representation $Z$ contains about the input variable $X$ and the output variable $Y$. In particular, information bottleneck (IB) (Tishby et al., 2000; Chechik et al., 2005; Alemi et al., 2016) is a framework which utilizes mutual information (MI) to quantify this dependence via the following objective: $\min_{P(Z|X)} I(X;Z) - \beta I(Z;Y)$ (2) where $\beta > 0$ controls the importance of obtaining good performance on the downstream task. Given two random variables $X$ and $Y$, $I(X;Y)$ is computed as $D_{KL}(P(X,Y) \,\|\, P(X)P(Y))$, where $D_{KL}$ denotes the Kullback-Leibler (KL) divergence between two probability distributions. The IB framework can be extended to account for additional sources of input data that are known to contain irrelevant information about the predictive task. This setting, known as IB with side information (Chechik & Tishby, 2003), adds a term to the IB objective which simultaneously minimizes the MI between this nuisance variable and the learned representation. Concretely, given random variables $(X, Y_+, Y_-)$ where $Y_+$ denotes the task of interest and $Y_-$ denotes a spurious auxiliary variable, the objective becomes: $\min_{P(Z|X)} I(X;Z) - \beta\left(I(Z;Y_+) - \gamma I(Z;Y_-)\right)$ (3) where $\gamma$ is another tunable hyperparameter for the nuisance task. We note that this framework bears resemblance to triplet-based losses such as (Schroff et al., 2015; Koch, 2015), as well as contrastive learning approaches that leverage MI maximization (Linsker, 1988; Hjelm et al., 2018; Oord et al., 2018; Tian et al., 2019; Khosla et al., 2020). It is also in line with the InfoMin principle suggested by (Tian et al., 2020) for learning good views in self-supervised contrastive learning. Although the MI framework is compelling, as it captures arbitrarily complex dependencies between random variables, there exist several challenges with its use in practice. The difficulty of computing MI in high dimensions, for example, is well documented and demands the use of various neural estimators (Barber & Agakov, 2003; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2019). Additionally, approaches such as IB posit restrictive assumptions on the relationship between $Y_+$ and $Y_-$; namely, that they must be conditionally independent given $X$, which is difficult to enforce. 3 CONTRASTIVE INPUT MORPHING Motivated by the above challenges, we propose to approximate the information content between task-relevant and -irrelevant features via correlations in higher-dimensional feature spaces. This procedure helps our method learn the appropriate input-space transformations. 3.1 MEASURING RELEVANCE VIA CORRELATIONS Another way to measure “task-relevance” in random variables is to consider their conditional dependencies, as captured by their covariances.
Specifically, consider a feature map $\phi: \mathcal{X} \to \mathbb{R}^d$ that takes in an input and returns a representation $\phi(x)$ of the same dimensionality as the original input $x$. We can use this feature map to construct a covariance (Gram) matrix of $\phi(x)$, where $\Sigma_{XX} = \phi(x)^T\phi(x)$. Although the covariance only measures linear dependencies between the input features, we can capture more complex relationships via an arbitrarily complex feature map $\phi$. Training Procedure: For the Transformation Network (TN), we utilize a convolutional autoencoder to obtain a reconstructed image of the same dimensionality as the input, as shown in Figure 2. Our method operates over triplets $(x, x^+, x^-)$, where $(x, x^+)$ denote examples from the same class while $x^-$ is an example from a different class than $x$. We use a supervised contrastive loss to train the network, similar in spirit to (Khosla et al., 2020). Specifically, we learn an intermediate feature map $\phi: \mathcal{X} \to \mathbb{R}^{(H \times W) \times C}$ using the TN that takes in an input $x$ and returns a representation $\phi(x)$ of the same dimensionality as the original input, where $(H \times W)$ denotes the height and width of the image, and $C$ denotes the number of channels. We use this feature map to construct a Gram matrix of the input features, where $\Sigma_{XX} = \phi(x)^T\phi(x)$. Then, the triplet loss encourages the positive examples' Gram matrix representations to move closer in embedding space to that of the input, while ensuring that the negative examples' representations are pushed further apart: $\mathcal{L}_{con}(\phi) = \min_{\phi} \left\lVert \Sigma_{XX}, \Sigma_{X^+X^+} \right\rVert_2 - \max\!\left(\alpha, \left\lVert \Sigma_{XX}, \Sigma_{X^-X^-} \right\rVert_2\right)$ (4) for some margin $\alpha > 0$. The output of $\phi(\cdot)$ is then passed through a $1 \times 1$ 2-D convolution layer with a sigmoid activation to produce a (single-channel) soft “mask” $m(x)$, which is then multiplied with the original input image $x$ to obtain the final representation $\psi(x) = x \circ m(x)$. Finally, the classifier $f_\theta(\cdot)$ is trained on the transformed input image $\psi(x)$. Learning Objective: The overall loss function can be written as: $\mathcal{L}(\phi, \theta) = \lambda \mathcal{L}_{con}(\phi) + \mathcal{L}_{sup}(\theta)$ (5) where $\lambda$ is a multiplier which controls the contribution of the TN loss $\mathcal{L}_{con}(\phi)$ from Equation 4 and $\mathcal{L}_{sup}(\theta)$ is the standard cross-entropy loss for multi-class classification. The parameters of the transformation network ($\phi$) and the classifier ($\theta$) are trained jointly. In our experiments, we found that $\lambda = 0.0001$ worked well. It is well known that $\mathcal{L}_{con}$ can be interpreted as minimizing a specific form of Maximum Mean Discrepancy (MMD) (Gatys et al., 2015; Li et al., 2017b). For the identity map $\phi(\cdot)$, Equation 5 is equivalent to minimizing MMD between two kernelized inputs where the specific kernel is the second-order polynomial kernel. In this way, CIM's Transformation Network can also be seen as minimizing the distance between the mean embeddings of the underlying distributions for $X$ and $X^+$ while simultaneously maximizing the distance for those of $X$ and $X^-$. 3.2 A MOTIVATING EXAMPLE We present a concrete example of the intuition behind our approach using the MNIST dataset (LeCun, 1998). We first construct a challenging input reconstruction task, in which a red square is placed on the bottom right of all samples and the model is trained to reconstruct a random digit that is different from the input's class. As the TN is trained as an independent autoencoding module, we find that the TN learns to pick up shared signals across inputs (i.e., the black background) before converging to the red square as the source of shared (spurious) correlations among examples.
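Before continuing with the motivating example, here is a minimal, hedged PyTorch-style sketch of the training procedure described in Section 3.1 above: the TN feature map φ, the Gram-matrix triplet loss of Eq. (4), the 1 × 1 convolution soft mask, and the joint objective of Eq. (5). The class and function names (TransformationNetwork, cim_loss) are our own, the tiny autoencoder is only a stand-in for the architecture in the paper's Figure 5, and the norm in Eq. (4) is read here as a distance between Gram matrices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformationNetwork(nn.Module):
    """Stand-in for the TN: any conv autoencoder whose output phi(x) matches the input size."""
    def __init__(self, channels=3):
        super().__init__()
        self.autoencoder = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1),
        )
        self.mask_head = nn.Conv2d(channels, 1, kernel_size=1)   # 1x1 conv -> single-channel mask

    def features(self, x):
        return self.autoencoder(x)                                # phi(x), same shape as x

    def mask(self, x):
        return torch.sigmoid(self.mask_head(self.features(x)))   # m(x) in [0, 1]

def gram(phi_x):
    b, c, h, w = phi_x.shape
    f = phi_x.reshape(b, c, h * w)            # (B, C, HW)
    return f @ f.transpose(1, 2)              # Sigma_XX = phi(x)^T phi(x), shape (B, C, C)

def cim_loss(tn, clf, x, x_pos, x_neg, y, alpha=1.0, lam=1e-4):
    g, g_pos, g_neg = gram(tn.features(x)), gram(tn.features(x_pos)), gram(tn.features(x_neg))
    d_pos = torch.norm(g - g_pos, dim=(1, 2))
    d_neg = torch.norm(g - g_neg, dim=(1, 2))
    l_con = (d_pos - torch.clamp(d_neg, min=alpha)).mean()       # Eq. (4), as written
    psi_x = x * tn.mask(x)                                        # psi(x) = x o m(x)
    l_sup = F.cross_entropy(clf(psi_x), y)                        # classifier sees the morphed input
    return lam * l_con + l_sup                                    # Eq. (5)
```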
Next, we evaluate whether we can remove this source of variation for digit classification by passing a lossy version of the input into the classifier (Figure 1). As shown in Figure 2 (bottom), the input transformation learned by the model (i.e. ψ(x) = m ◦ x) de-emphasizes the shared features while highlighting the task-relevant features. 4 EXPERIMENTAL RESULTS For our experiments, we are interested in empirically investigating the following questions: 1. Are CIM’s learned representations robust to spurious correlations in the input features? 2. Does the input transformation learned by CIM improve domain generalization? 3. How well can CIM preserve classification accuracy across subgroups? Datasets: We consider various datasets to test the effectiveness of our method. We first construct a colored variant of MNIST to demonstrate that CIM successfully ignores nuisance background information in a digit classification task, then further explore this finding on the Background Challenge (Xiao et al., 2020). Next, we evaluate CIM on the VLCS dataset (Torralba & Efros, 2011) to demonstrate that the input transformations help in learning representations that generalize to outof-domain distributions. Then, we study two benchmark datasets, CelebA (Liu et al., 2015) and Waterbirds (Wah et al., 2011; Zhou et al., 2017; Sagawa et al., 2019), to show that CIM preserves subgroup accuracies. Models: We use different classifier architectures depending on the downstream task. While ResNet50 is the default choice for most datasets, we also utilize Inception-ResNetV2 (Szegedy et al., 2016) to obtain better performance, ResNet-18 for a fair comparison with existing OOD generalization techniques, and PointNet (Qi et al., 2017) for 3D point cloud classification. We also experiment with Variational Information Bottleneck (VIB) (Alemi et al., 2016) as both a complementary and competing approach to CIM, and use ResNet-50 as the VIB encoder. We refer the reader to Appendix A.2 for additional details on model architectures and hyperparameters. We note that the transformed inputs and the feature maps Φ are semantically meaningful as shown in Figure 4. 4.1 CLASSIFICATION WITH NUISANCE BACKGROUND INFORMATION Colored MNIST: First, we assess whether CIM can distinguish between two MNIST digit classes (2 and 7) in the presence of a spurious input feature (background color). As outlined in Figure 3(a), we construct a dataset such that a classifier will achieve low accuracy by relying on background color. For a given proportion α, we color α% of all digits labeled “2” in the training set with blue backgrounds, and color the remaining (1−α)% labeled “7” with yellow backgrounds. We vary this proportion by α = {0.5%, 1%, 2%}. At test time, we color all the digits labeled “2” in blue, while coloring the “7” digits in yellow. As shown in Figure 3 (b), CIM is better able to utilize relevant information for the downstream classification task in comparison to ERM by 13%, 10.5%, and 3% on models trained with α = {0.5%, 1%, 2%} respectively. Perhaps more interestingly, a hybrid approach of VIB + CIM outperforms all other methods – this suggests that the input transformations learned by CIM are indeed preserving task-relevant information that can be better leveraged by InfoMax methods such as VIB. More experimental details can be found in Appendix A.2. The Background Challenge: Next, we evaluate whether the favorable results from MNIST translate to a more challenging setup, and test CIM on the Background Challenge (Xiao et al., 2020). 
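Before turning to the Background Challenge details below, a short sketch of the background-coloring operation used to build the colored-MNIST variant described above. The foreground/background threshold and the exact RGB values are our own assumptions; the proportions of colored digits follow the description in the text.

```python
import numpy as np

def colorize_background(digit, bg_rgb):
    """digit: (28, 28) grayscale array in [0, 1]; bg_rgb: (r, g, b) values in [0, 1].
    Foreground pixels keep the digit intensity; background pixels receive the color."""
    rgb = np.stack([digit, digit, digit], axis=-1)     # (28, 28, 3)
    background = digit < 0.1                           # heuristic foreground/background split
    rgb[background] = np.asarray(bg_rgb, dtype=rgb.dtype)
    return rgb

BLUE, YELLOW = (0.0, 0.0, 1.0), (1.0, 1.0, 0.0)
# e.g., a test-time "2" with a blue background and a "7" with a yellow background:
# img2 = colorize_background(x2, BLUE); img7 = colorize_background(x7, YELLOW)
```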
The Background Challenge is a public dataset consisting of ImageNet-9 (Deng et al., 2009) test sets with varying amounts of foreground and background signals, designed to measure the extent to which deep classifiers rely on spurious features for image classification. As shown in Table 1, CIM outperforms the original ResNet-50’s performance by 4.1% on Mixed-rand, 0.8% on Mixed-same, and 0.5% on the original test set. Mixed-rand refers to the setting where the foreground is overlaid onto a random background, while Mixed-same corresponds to the test set where the foreground is placed on a background from the same class. These results demonstrate that CIM indeed learns task-relevant representations without relying on nuisance background information. 4.2 CIM GENERALIZES OVER DIFFERENT DOMAINS In this experiment, we evaluate CIM on OOD generalization performance using the VLCS benchmark (Torralba & Efros, 2011). VLCS consists of images from five object categories shared by the PASCAL VOC 2007, LabelMe, Caltech, and Sun datasets, which are considered to be four separate domains. We follow the standard evaluation strategy used in (Carlucci et al., 2019), where we partition each domain into a train (70%) and test set (30%) by random selection from the overall dataset. As summarized in Table 2, CIM outperforms state-of-the-art methods based on ResNet-18 on each domain, bolstering our claim that using a lossy transformation of the input is helpful for learning task-relevant representations that generalize across domains. 4.3 CIM PRESERVES SUBGROUP PERFORMANCE In this experiment, we investigate whether representations learned by CIM perform well on all subgroups on the CelebA and Waterbirds datasets. Preserving good subgroup-level accuracy is challenging for naive ERM-based methods, given their tendency to latch onto spurious correlations (Kim et al., 2019; Arjovsky et al., 2019; Sagawa et al., 2020; Chen et al., 2020b). Most prior works leverage privileged information such as group labels to mitigate this effect (Ben-Tal et al., 2013; Vapnik & Izmailov, 2015; Sagawa et al., 2019; Goel et al., 2020; Xiao et al., 2020). As TN in CIM is trained to capture task-relevant features and minimize nuisance correlations between classes, we hypothesize that CIM should perform well at the subgroup level even without explicit group label information. For a fair comparison with the prior work, we use ResNet-50 as the backbone classifier for the CIM , but also train both ERM and CIM with an Inception-ResNetV2 (Szegedy et al., 2016) backbone to assess the impact of using a larger model (denoted by ERM* and CIM*, respectively). We also use ResNet-50 for VIB’s encoder and InfoMask’s discriminator (see Appendix A.2). Table 3 shows that CIM outperforms both supervised and unsupervised methods on CelebA in terms of worst-group accuracy (2.4% improvement over CAMEL, the top-performing supervised model), and outperforms unsupervised models while significantly improving over ERM on the Waterbirds dataset (16.7% increase). We emphasize that the favorable performance of CIM is obtained without using subgroup labels, in contrast with previous approaches. We refer the reader to Appendix B.3 for further details and ablation studies regarding the different components of our method. 5 RELATED WORK Our work bridges several lines of work in contrastive learning and learning representations that are robust to spurious correlations. Contrastive representation learning. 
There has been a flurry of recent work in contrastive methods for representation learning, which encourages an encoder network to map “positive” examples closer together in a latent embedding space while spreading the “negative” examples further apart (Oord et al., 2018; Hjelm et al., 2018; Wu et al., 2018; Tian et al., 2019; Arora et al., 2019; Chen et al., 2020a). Included are triplet-based losses (Schroff et al., 2015; Koch, 2015) and noise contrastive estimation losses (Gutmann & Hyvärinen, 2010). In particular, recent work (Tian et al., 2020; Wu et al., 2020) has shown that minimizing MI between views while maximizing predictive information of the representations with respect to the downstream task, leads to performance improvements, similar to IB (Chechik & Tishby, 2003). While most contrastive approaches are selfsupervised, (Khosla et al., 2020) utilizes class labels as part of their learning procedure, similar to our approach. We emphasize that CIM is not meant to be directly comparable to the aforementioned techniques, as our objective is to learn input transformations of the data that are task-relevant. Robustness of representations Several works have considered the problem of learning relevant features that do not rely on spurious correlations with the predictive task (Heinze-Deml & Meinshausen, 2017; Sagawa et al., 2020; Chen et al., 2020b). Though (Wang et al., 2019) is similar in spirit to CIM, they utilize gray-level co-occurrence matrices as the spurious (textural) information of the input images, then regress out this information from the trained classifier’s output layer. Our method does not solely rely on textural features and can learn any transformation of the input space that is relevant for the downstream task of interest. Although CIM also bears resemblance to InfoMask (Taghanaki et al., 2019), our method is not limited to attention maps. (Kim et al., 2019) uses an MI-based objective to minimize the effect of spurious features, while (Pensia et al., 2020) additionally incorporates regularization via Fisher information to enforce robustness of the features. On the other hand, CIM uses an orthogonal approach to learn robust representations via higher-order correlations in the features. Information in representations There is a rich body of work which focuses on quantifying the amount of information necessary to perform well on a downstream task (Achille & Soatto, 2018). CIM is reminiscent of InfoMax (Linsker, 1988) and IB-based approaches (Tishby et al., 2000; Alemi et al., 2016) which propose to maximize the MI in the learned representations with the predictive random variables. In particular, (Chechik & Tishby, 2003; Chechik et al., 2005; Goyal et al., 2020) is most similar to our setup where they consider additional (nuisance) predictive information. Rather than using MI, we draw inspiration from the style transfer literature (Gatys et al., 2015; Li et al., 2017b; Krichene et al., 2018; Sastry & Oore, 2019) to compare correlations between feature activations of relevant versus irrelevant examples during training. 6 CONCLUSION In summary, we considered the problem of extracting representations with task-relevant information from high-dimensional data. We introduced a new framework, CIM, which learns input-space transformations of the data via a triplet loss to mitigate the effect of irrelevant input features on downstream performance. 
Through experiments on (1) classification with nuisance background information; (2) OOD domain generalization; and (3) preservation of uniform subgroup accuracy, we showed that CIM achieves good performance despite the presence of spurious correlations in the data and outperforms most relevant baselines. Additionally, we demonstrated that CIM is complementary to other representation learning frameworks such as VIB. For future work, it would be interesting to test different types of distance metrics for the triplet loss, to explore whether CIM can be used as an effective way to learn views for unsupervised contrastive learning, and to investigate label-free approaches for learning the input transformations. A ADDITIONAL EXPERIMENTAL DETAILS A.1 ARCHITECTURES In Figure 5, we show the detailed TN architectures used for RGB and point-cloud data. A.2 HYPERPARAMETER CONFIGURATIONS AND TRAINING DETAILS Variational Information Bottleneck (VIB). We used ResNet-50 as the encoder in VIB because most methods we compare CIM with are based on ResNet-50. We tested three different settings for VIB after the encoder: (a) apply KL regularization on the encoder's last layer L_f of size (1, 2048) and compute the cross-entropy loss on the regularized feature vector; (b) apply KL on the feature vector as in (a), but add 3 fully connected layers of (1024, ReLU, batch normalization), (512, ReLU, batch normalization), and (256, ReLU), then calculate the cross-entropy loss; (c) add a fully connected layer of size 512 after L_f, then follow the steps as in (a). For colored MNIST we used architecture (c) and trained the model using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. For CelebA and Waterbirds, we used architecture (b) with the Adam optimizer, a learning rate of 0.001, and a batch size of 64. For all the above experiments we set the weight of the KL regularization term to 0.001 and the standard deviation of the noise to 0.1. InfoMask. We used the default architecture (Taghanaki et al., 2019) except for changing the encoding part to ResNet-50. For CelebA experiments, we used the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. For Waterbirds, we trained the model using the SGD optimizer with a learning rate of 0.001 and a momentum of 0.9. As with VIB, we set the KL term weight to 0.001 and the standard deviation of the noise to 0.1. We tested different threshold values for the masking function and obtained the best results with just soft masking, i.e., when the threshold is set to zero. Point Cloud Experiments. For PointNet, we used the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. We trained both the original and the CIM-based model with rotated and jittered input data. Colored MNIST. We resized images to (64 × 64 × 3) and trained all the models using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. For VIB, we set the KL divergence contribution weight to 0.001. Domain Generalization. We use ResNet-18 as the backbone to make a fair comparison with the state-of-the-art. We train CIM using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. We use the same training and test splits as those used in (Carlucci et al., 2019). For CIM-based models, we set λ = 0.0001; other hyper-parameters are summarized in Table 4. To control the level of input re-weighting, we minimize negative entropy on m with a Lagrangian multiplier ζ = 0.00001.
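The negative-entropy term on the mask m mentioned at the end of A.2 can be sketched as follows; this is our reading of the description, and the helper name is ours.

```python
import torch

def negative_entropy(m, eps=1e-8):
    """m: soft mask values in (0, 1). Returns the negative Bernoulli entropy, averaged
    over pixels; adding zeta * this term to the loss minimizes negative entropy."""
    return (m * torch.log(m + eps) + (1 - m) * torch.log(1 - m + eps)).mean()

# total = lam * l_con + l_sup + 1e-5 * negative_entropy(mask)   # zeta = 0.00001, as in the text
```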
B ADDITIONAL EXPERIMENTAL RESULTS B.1 BACKGROUND CHALLENGE We include for completeness the entirety of the results from (Xiao et al., 2020). We note that our results are not directly comparable with those from other architectures (e.g., WRN-50x2), as we used ResNet-50 as our base classifier. B.2 3D POINT CLOUD CLASSIFICATION In Table 6, we report the classification results on normal and rotated objects. As the first row of the table summarizes, PointNet performs well on average on the 40 classes. However, when we increase spurious correlations by rotating the objects, class-wise accuracies drop significantly, resulting in a 16.1% degradation in the average accuracy of the model (second row). After applying CIM, the spurious correlation between different categories is reduced, and thus the class-wise accuracy of challenging objects is improved (third row). B.3 ABLATION STUDIES We conduct an ablation study on the CelebA dataset to study the effects of the Gramian-based contrastive loss. As shown in Table 7, we find that learning a simple attention-like weighting matrix without any regularization performs better than ERM. We also observed that having both positive and negative samples in the TN's loss function performs better than having only positives or negatives. It is worth mentioning that the negative samples have a greater impact on performance than the positives.
1. What is the main contribution of the paper, and how does it address the problem of disentangling spurious features? 2. What are the strengths of the proposed approach, particularly in its ability to force task-relevant features to arise? 3. What are the weaknesses of the paper regarding its motivating example, writing, and experiments? 4. How does the reviewer suggest improving the paper's clarity and experimental design? 5. Is there a concern or question about the connection between the transformation network and autoencoder?
Review
Review This paper presents a contrastive-based approach to try to disentangle spurious ('task irrelevant') features from task-relevant ones. This is an important problem to tackle. The key idea of the paper is that using a contrastive approach to pull together positives and push apart negatives in a transformed (Gram matrix) feature space will have the effect of forcing task-relevant features to arise. A number of empirical results are provided showing the benefit of the new loss function. However, the paper's motivating example and overall writing are somewhat confusing. Firstly, I really did not understand the motivating example in Figure 2 at all. It is unclear. A more intuitive explanation of why the contrastive loss along with a Gram matrix should lead to the desired task-relevance property is missing. Secondly, the loss "CIM + VIB" is used in a number of places, but it is unclear what form this loss takes. Since this loss seems to achieve the best results, a more thorough discussion of it is warranted. Finally, I also did not quite follow the connection between the transformation network and the autoencoder. To decouple the effect of the transformation from the contrastive loss, a baseline would have been to run the contrastive loss on the output space, similar to Khosla et al., 2020. This seems like an important experiment to run. I think the paper needs more clarity in its motivation, writing, and experiments to really drive home the message, which is interesting but still a bit incomplete.
ICLR
Title Learning Task-Relevant Features via Contrastive Input Morphing Abstract A fundamental challenge in artificial intelligence is learning useful representations of data that yield good performance on a downstream classification task, without overfitting to spurious input features. Extracting task-relevant predictive information becomes particularly challenging for high-dimensional, noisy, real-world data. We propose Contrastive Input Morphing (CIM), a representation learning framework that learns input-space transformations of the data to mitigate the effect of irrelevant input features on downstream performance via a triplet loss. Empirically, we demonstrate the efficacy of our approach on various tasks which typically suffer from the presence of spurious correlations, and show that CIM improves the performance of other representation learning methods such as variational information bottleneck (VIB) when used in conjunction. 1 INTRODUCTION At the heart of modern machine learning is the problem of representation learning, or extracting features from raw data that enable predictions with high accuracy (Hinton & Salakhutdinov, 2006; Vincent et al., 2010; Chen et al., 2016; Van Den Oord et al., 2017; Oord et al., 2018). Despite the recent successes of deep neural networks (Dean et al., 2012; LeCun et al., 2015), their rapidly growing size and large-scale training procedures, coupled with high-dimensional data sources, pose significant challenges in learning models that perform well on a given task without overfitting to spurious input features (Zhang et al., 2016; Ilyas et al., 2019; Geirhos et al., 2020). As a result, trained networks have been shown to fail spectacularly on out-of-domain generalization tasks (Beery et al., 2018; Rosenfeld et al., 2018) and for rare subgroups present in data (Hashimoto et al., 2018; Goel et al., 2020), among others. A wide range of methods have been proposed to tackle this problem, including regularization, data augmentation, leveraging causal explanations, and self-training (Srivastava et al., 2014; Chen et al., 2020b; Sagawa et al., 2019; Chen et al., 2020b). In particular, prior art places a heavy emphasis on lossless access to the input data during training, then constructing a high-level representation which extracts the necessary information. Yet it is reasonable to assume that in some cases, we desire access to only a subset of the input which is relevant to the task – for example, the background color in an image of a “7” is unnecessary for identifying its digit class.
The fundamental challenge, then, is discerning which parts of the input are relevant without requiring privileged information (e.g., the nature of the downstream task) at training time. Our approach, Contrastive Input Morphing (CIM), uses labeled supervision to learn input-space transformations of the data that mitigate the effect of irrelevant input features on predictive performance. Though the Data Processing Inequality (Cover, 1999) states that no amount of input processing can increase its mutual information (MI) with the predictive variable, we propose to transform the data in such a way that it makes it easier for the model to extract the relevant predictive information for the downstream task – that is, we attempt to increase the amount of usable information for our representations (Xu et al., 2020). We emphasize that our method does not assume access to the exact nature of the downstream task, such as attribute labels for rare subgroups. The key workhorse of CIM is an auxiliary network called the Transformation Network (TN). Leveraging ideas from neural style transfer (Gatys et al., 2015; Li et al., 2017b), the TN is trained via a triplet loss on feature correlation matrices (Schroff et al., 2015; Koch, 2015). Intuitively, this objective uses the shared information from competing classes (“negative examples”) as a proxy for spurious correlations, while leveraging the shared information within the same class (“positive examples”) as a heuristic for task-relevancy (Khosla et al., 2020). The framework for CIM is quite general: it is (1) complementary to MI-based representation learning techniques such as variational information bottleneck (VIB) (Alemi et al., 2016); and (2) can be used as a plug-in module for training any classifier. For the flowchart of the training procedure of the CIM refer to Figure 1. Empirically, we evaluate CIM on three settings that suffer from spurious correlations: classification with nuisance background information, out-of-domain (OOD) generalization, and improving accuracy uniformly across subgroups. In the first task, CIM outperforms ERM on colored MNIST and improves over the ResNet-50 baseline on the Background Challenge (Xiao et al., 2020). Similarly, CIM outperforms relevant baselines using ResNet-18 on the VLCS dataset (Torralba & Efros, 2011) for OOD generalization. For subgroup accuracies, CIM outperforms both supervised and unsupervised methods on CelebA (Liu et al., 2015) in terms of worst-group accuracy (by 1.7% and 41.4% respectively), while outperforming unsupervised methods by up to 12.9% on Waterbirds. In summary, our contributions in this work can be outlined as follows: 1. We propose CIM, a method demonstrating that lossy access to input data helps extract good task-relevant representations. 2. We show that CIM is complementary to existing methods, as the learned transformations can be leveraged by other MI-based representation learning techniques such as VIB. 3. We empirically verify the robustness of the learned representations to spurious correlations on a variety of tasks (Section 4). 2 PRELIMINARIES We consider the standard supervised learning setup where x ∈ X ⊆ Rd is the input variable, and y ∈ Y = {1, . . . , k} is the set of corresponding labels. We assume access to samples D = {(xi, yi)}ni=1 drawn from an underlying (unknown) joint distribution pdata(x, y), and use capital letters to denote random variables, e.g. X and Y . We use P (X,Y ) to denote their joint distribution as well as P (·) for the respective marginal (e.g. 
$P(X)$ for the marginal distribution of $X$). 2.1 BACKGROUND AND PROBLEM SETUP Our goal is to learn a classifier $f_\theta: \mathcal{X} \to \mathcal{Y}$, where $f_\theta \in \Theta$ achieves low error according to some loss function $\ell: \Theta \times (\mathcal{X} \times \mathcal{Y}) \to \mathbb{R}$. Specifically, we minimize the empirical risk: $\mathcal{L}_{sup}(\theta) = \mathbb{E}_{x,y \sim p_{data}(x,y)}[\ell(f_\theta(x), y)] \approx \sum_{i=1}^{n} \ell(f_\theta(x_i), y_i)$ (1) In addition to good classification performance, we aim to learn representations of the data which: (a) are highly predictive of the downstream task; and (b) do not rely on spurious input features. That is, the learned representations should be task-relevant. Information bottleneck. A natural way to measure “task-relevance” in random variables is to consider the total amount of information that a compressed (stochastic) representation $Z$ contains about the input variable $X$ and the output variable $Y$. In particular, information bottleneck (IB) (Tishby et al., 2000; Chechik et al., 2005; Alemi et al., 2016) is a framework which utilizes mutual information (MI) to quantify this dependence via the following objective: $\min_{P(Z|X)} I(X;Z) - \beta I(Z;Y)$ (2) where $\beta > 0$ controls the importance of obtaining good performance on the downstream task. Given two random variables $X$ and $Y$, $I(X;Y)$ is computed as $D_{KL}(P(X,Y) \,\|\, P(X)P(Y))$, where $D_{KL}$ denotes the Kullback-Leibler (KL) divergence between two probability distributions. The IB framework can be extended to account for additional sources of input data that are known to contain irrelevant information about the predictive task. This setting, known as IB with side information (Chechik & Tishby, 2003), adds a term to the IB objective which simultaneously minimizes the MI between this nuisance variable and the learned representation. Concretely, given random variables $(X, Y_+, Y_-)$ where $Y_+$ denotes the task of interest and $Y_-$ denotes a spurious auxiliary variable, the objective becomes: $\min_{P(Z|X)} I(X;Z) - \beta\left(I(Z;Y_+) - \gamma I(Z;Y_-)\right)$ (3) where $\gamma$ is another tunable hyperparameter for the nuisance task. We note that this framework bears resemblance to triplet-based losses such as (Schroff et al., 2015; Koch, 2015), as well as contrastive learning approaches that leverage MI maximization (Linsker, 1988; Hjelm et al., 2018; Oord et al., 2018; Tian et al., 2019; Khosla et al., 2020). It is also in line with the InfoMin principle suggested by (Tian et al., 2020) for learning good views in self-supervised contrastive learning. Although the MI framework is compelling, as it captures arbitrarily complex dependencies between random variables, there exist several challenges with its use in practice. The difficulty of computing MI in high dimensions, for example, is well documented and demands the use of various neural estimators (Barber & Agakov, 2003; Gutmann & Hyvärinen, 2010; Oord et al., 2018; Belghazi et al., 2018; Poole et al., 2019; Song & Ermon, 2019). Additionally, approaches such as IB posit restrictive assumptions on the relationship between $Y_+$ and $Y_-$; namely, that they must be conditionally independent given $X$, which is difficult to enforce. 3 CONTRASTIVE INPUT MORPHING Motivated by the above challenges, we propose to approximate the information content between task-relevant and -irrelevant features via correlations in higher-dimensional feature spaces. This procedure helps our method learn the appropriate input-space transformations. 3.1 MEASURING RELEVANCE VIA CORRELATIONS Another way to measure “task-relevance” in random variables is to consider their conditional dependencies, as captured by their covariances.
Specifically, consider a feature map $\phi: \mathcal{X} \to \mathbb{R}^d$ that takes in an input and returns a representation $\phi(x)$ of the same dimensionality as the original input $x$. We can use this feature map to construct a covariance (Gram) matrix of $\phi(x)$, where $\Sigma_{XX} = \phi(x)^T\phi(x)$. Although the covariance only measures linear dependencies between the input features, we can capture more complex relationships via an arbitrarily complex feature map $\phi$. Training Procedure: For the Transformation Network (TN), we utilize a convolutional autoencoder to obtain a reconstructed image of the same dimensionality as the input, as shown in Figure 2. Our method operates over triplets $(x, x^+, x^-)$, where $(x, x^+)$ denote examples from the same class while $x^-$ is an example from a different class than $x$. We use a supervised contrastive loss to train the network, similar in spirit to (Khosla et al., 2020). Specifically, we learn an intermediate feature map $\phi: \mathcal{X} \to \mathbb{R}^{(H \times W) \times C}$ using the TN that takes in an input $x$ and returns a representation $\phi(x)$ of the same dimensionality as the original input, where $(H \times W)$ denotes the height and width of the image, and $C$ denotes the number of channels. We use this feature map to construct a Gram matrix of the input features, where $\Sigma_{XX} = \phi(x)^T\phi(x)$. Then, the triplet loss encourages the positive examples' Gram matrix representations to move closer in embedding space to that of the input, while ensuring that the negative examples' representations are pushed further apart: $\mathcal{L}_{con}(\phi) = \min_{\phi} \left\lVert \Sigma_{XX}, \Sigma_{X^+X^+} \right\rVert_2 - \max\!\left(\alpha, \left\lVert \Sigma_{XX}, \Sigma_{X^-X^-} \right\rVert_2\right)$ (4) for some margin $\alpha > 0$. The output of $\phi(\cdot)$ is then passed through a $1 \times 1$ 2-D convolution layer with a sigmoid activation to produce a (single-channel) soft “mask” $m(x)$, which is then multiplied with the original input image $x$ to obtain the final representation $\psi(x) = x \circ m(x)$. Finally, the classifier $f_\theta(\cdot)$ is trained on the transformed input image $\psi(x)$. Learning Objective: The overall loss function can be written as: $\mathcal{L}(\phi, \theta) = \lambda \mathcal{L}_{con}(\phi) + \mathcal{L}_{sup}(\theta)$ (5) where $\lambda$ is a multiplier which controls the contribution of the TN loss $\mathcal{L}_{con}(\phi)$ from Equation 4 and $\mathcal{L}_{sup}(\theta)$ is the standard cross-entropy loss for multi-class classification. The parameters of the transformation network ($\phi$) and the classifier ($\theta$) are trained jointly. In our experiments, we found that $\lambda = 0.0001$ worked well. It is well known that $\mathcal{L}_{con}$ can be interpreted as minimizing a specific form of Maximum Mean Discrepancy (MMD) (Gatys et al., 2015; Li et al., 2017b). For the identity map $\phi(\cdot)$, Equation 5 is equivalent to minimizing MMD between two kernelized inputs where the specific kernel is the second-order polynomial kernel. In this way, CIM's Transformation Network can also be seen as minimizing the distance between the mean embeddings of the underlying distributions for $X$ and $X^+$ while simultaneously maximizing the distance for those of $X$ and $X^-$. 3.2 A MOTIVATING EXAMPLE We present a concrete example of the intuition behind our approach using the MNIST dataset (LeCun, 1998). We first construct a challenging input reconstruction task, in which a red square is placed on the bottom right of all samples and the model is trained to reconstruct a random digit that is different from the input's class. As the TN is trained as an independent autoencoding module, we find that the TN learns to pick up shared signals across inputs (i.e., the black background) before converging to the red square as the source of shared (spurious) correlations among examples.
Next, we evaluate whether we can remove this source of variation for digit classification by passing a lossy version of the input into the classifier (Figure 1). As shown in Figure 2 (bottom), the input transformation learned by the model (i.e. ψ(x) = m ◦ x) de-emphasizes the shared features while highlighting the task-relevant features. 4 EXPERIMENTAL RESULTS For our experiments, we are interested in empirically investigating the following questions: 1. Are CIM’s learned representations robust to spurious correlations in the input features? 2. Does the input transformation learned by CIM improve domain generalization? 3. How well can CIM preserve classification accuracy across subgroups? Datasets: We consider various datasets to test the effectiveness of our method. We first construct a colored variant of MNIST to demonstrate that CIM successfully ignores nuisance background information in a digit classification task, then further explore this finding on the Background Challenge (Xiao et al., 2020). Next, we evaluate CIM on the VLCS dataset (Torralba & Efros, 2011) to demonstrate that the input transformations help in learning representations that generalize to outof-domain distributions. Then, we study two benchmark datasets, CelebA (Liu et al., 2015) and Waterbirds (Wah et al., 2011; Zhou et al., 2017; Sagawa et al., 2019), to show that CIM preserves subgroup accuracies. Models: We use different classifier architectures depending on the downstream task. While ResNet50 is the default choice for most datasets, we also utilize Inception-ResNetV2 (Szegedy et al., 2016) to obtain better performance, ResNet-18 for a fair comparison with existing OOD generalization techniques, and PointNet (Qi et al., 2017) for 3D point cloud classification. We also experiment with Variational Information Bottleneck (VIB) (Alemi et al., 2016) as both a complementary and competing approach to CIM, and use ResNet-50 as the VIB encoder. We refer the reader to Appendix A.2 for additional details on model architectures and hyperparameters. We note that the transformed inputs and the feature maps Φ are semantically meaningful as shown in Figure 4. 4.1 CLASSIFICATION WITH NUISANCE BACKGROUND INFORMATION Colored MNIST: First, we assess whether CIM can distinguish between two MNIST digit classes (2 and 7) in the presence of a spurious input feature (background color). As outlined in Figure 3(a), we construct a dataset such that a classifier will achieve low accuracy by relying on background color. For a given proportion α, we color α% of all digits labeled “2” in the training set with blue backgrounds, and color the remaining (1−α)% labeled “7” with yellow backgrounds. We vary this proportion by α = {0.5%, 1%, 2%}. At test time, we color all the digits labeled “2” in blue, while coloring the “7” digits in yellow. As shown in Figure 3 (b), CIM is better able to utilize relevant information for the downstream classification task in comparison to ERM by 13%, 10.5%, and 3% on models trained with α = {0.5%, 1%, 2%} respectively. Perhaps more interestingly, a hybrid approach of VIB + CIM outperforms all other methods – this suggests that the input transformations learned by CIM are indeed preserving task-relevant information that can be better leveraged by InfoMax methods such as VIB. More experimental details can be found in Appendix A.2. The Background Challenge: Next, we evaluate whether the favorable results from MNIST translate to a more challenging setup, and test CIM on the Background Challenge (Xiao et al., 2020). 
The Background Challenge is a public dataset consisting of ImageNet-9 (Deng et al., 2009) test sets with varying amounts of foreground and background signals, designed to measure the extent to which deep classifiers rely on spurious features for image classification. As shown in Table 1, CIM outperforms the original ResNet-50’s performance by 4.1% on Mixed-rand, 0.8% on Mixed-same, and 0.5% on the original test set. Mixed-rand refers to the setting where the foreground is overlaid onto a random background, while Mixed-same corresponds to the test set where the foreground is placed on a background from the same class. These results demonstrate that CIM indeed learns task-relevant representations without relying on nuisance background information. 4.2 CIM GENERALIZES OVER DIFFERENT DOMAINS In this experiment, we evaluate CIM on OOD generalization performance using the VLCS benchmark (Torralba & Efros, 2011). VLCS consists of images from five object categories shared by the PASCAL VOC 2007, LabelMe, Caltech, and Sun datasets, which are considered to be four separate domains. We follow the standard evaluation strategy used in (Carlucci et al., 2019), where we partition each domain into a train (70%) and test set (30%) by random selection from the overall dataset. As summarized in Table 2, CIM outperforms state-of-the-art methods based on ResNet-18 on each domain, bolstering our claim that using a lossy transformation of the input is helpful for learning task-relevant representations that generalize across domains. 4.3 CIM PRESERVES SUBGROUP PERFORMANCE In this experiment, we investigate whether representations learned by CIM perform well on all subgroups on the CelebA and Waterbirds datasets. Preserving good subgroup-level accuracy is challenging for naive ERM-based methods, given their tendency to latch onto spurious correlations (Kim et al., 2019; Arjovsky et al., 2019; Sagawa et al., 2020; Chen et al., 2020b). Most prior works leverage privileged information such as group labels to mitigate this effect (Ben-Tal et al., 2013; Vapnik & Izmailov, 2015; Sagawa et al., 2019; Goel et al., 2020; Xiao et al., 2020). As TN in CIM is trained to capture task-relevant features and minimize nuisance correlations between classes, we hypothesize that CIM should perform well at the subgroup level even without explicit group label information. For a fair comparison with the prior work, we use ResNet-50 as the backbone classifier for the CIM , but also train both ERM and CIM with an Inception-ResNetV2 (Szegedy et al., 2016) backbone to assess the impact of using a larger model (denoted by ERM* and CIM*, respectively). We also use ResNet-50 for VIB’s encoder and InfoMask’s discriminator (see Appendix A.2). Table 3 shows that CIM outperforms both supervised and unsupervised methods on CelebA in terms of worst-group accuracy (2.4% improvement over CAMEL, the top-performing supervised model), and outperforms unsupervised models while significantly improving over ERM on the Waterbirds dataset (16.7% increase). We emphasize that the favorable performance of CIM is obtained without using subgroup labels, in contrast with previous approaches. We refer the reader to Appendix B.3 for further details and ablation studies regarding the different components of our method. 5 RELATED WORK Our work bridges several lines of work in contrastive learning and learning representations that are robust to spurious correlations. Contrastive representation learning. 
There has been a flurry of recent work in contrastive methods for representation learning, which encourages an encoder network to map “positive” examples closer together in a latent embedding space while spreading the “negative” examples further apart (Oord et al., 2018; Hjelm et al., 2018; Wu et al., 2018; Tian et al., 2019; Arora et al., 2019; Chen et al., 2020a). Included are triplet-based losses (Schroff et al., 2015; Koch, 2015) and noise contrastive estimation losses (Gutmann & Hyvärinen, 2010). In particular, recent work (Tian et al., 2020; Wu et al., 2020) has shown that minimizing MI between views while maximizing predictive information of the representations with respect to the downstream task leads to performance improvements, similar to IB (Chechik & Tishby, 2003). While most contrastive approaches are self-supervised, (Khosla et al., 2020) utilizes class labels as part of their learning procedure, similar to our approach. We emphasize that CIM is not meant to be directly comparable to the aforementioned techniques, as our objective is to learn input transformations of the data that are task-relevant. Robustness of representations. Several works have considered the problem of learning relevant features that do not rely on spurious correlations with the predictive task (Heinze-Deml & Meinshausen, 2017; Sagawa et al., 2020; Chen et al., 2020b). Though (Wang et al., 2019) is similar in spirit to CIM, they utilize gray-level co-occurrence matrices as the spurious (textural) information of the input images, then regress out this information from the trained classifier’s output layer. Our method does not solely rely on textural features and can learn any transformation of the input space that is relevant for the downstream task of interest. Although CIM also bears resemblance to InfoMask (Taghanaki et al., 2019), our method is not limited to attention maps. (Kim et al., 2019) uses an MI-based objective to minimize the effect of spurious features, while (Pensia et al., 2020) additionally incorporates regularization via Fisher information to enforce robustness of the features. On the other hand, CIM uses an orthogonal approach to learn robust representations via higher-order correlations in the features. Information in representations. There is a rich body of work which focuses on quantifying the amount of information necessary to perform well on a downstream task (Achille & Soatto, 2018). CIM is reminiscent of InfoMax (Linsker, 1988) and IB-based approaches (Tishby et al., 2000; Alemi et al., 2016) which propose to maximize the MI in the learned representations with the predictive random variables. In particular, (Chechik & Tishby, 2003; Chechik et al., 2005; Goyal et al., 2020) is most similar to our setup where they consider additional (nuisance) predictive information. Rather than using MI, we draw inspiration from the style transfer literature (Gatys et al., 2015; Li et al., 2017b; Krichene et al., 2018; Sastry & Oore, 2019) to compare correlations between feature activations of relevant versus irrelevant examples during training. 6 CONCLUSION In summary, we considered the problem of extracting representations with task-relevant information from high-dimensional data. We introduced a new framework, CIM, which learns input-space transformations of the data via a triplet loss to mitigate the effect of irrelevant input features on downstream performance.
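The exact objective used to train the transformation network is given in the method section; as a hedged illustration of the style-transfer-inspired idea of comparing second-order feature correlations between relevant and irrelevant examples, a Gram-matrix triplet loss could look like the following sketch (the margin and the squared-error distance are assumptions, not the CIM objective itself).

```python
import numpy as np

def gram_matrix(feats):
    """Normalized channel-wise Gram matrix of a (C, H, W) feature map, a
    second-order summary of correlations between feature activations."""
    c, h, w = feats.shape
    f = feats.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def gram_triplet_loss(anchor, positive, negative, margin=1.0):
    """Illustrative triplet objective on Gram matrices: pull the anchor's
    feature correlations towards a relevant (same-class) example and push them
    away from an irrelevant one. This is a sketch, not the exact CIM loss."""
    d_pos = np.mean((gram_matrix(anchor) - gram_matrix(positive)) ** 2)
    d_neg = np.mean((gram_matrix(anchor) - gram_matrix(negative)) ** 2)
    return max(0.0, d_pos - d_neg + margin)
```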
Through experiments on (1) classification with nuisance background information; (2) OOD domain generalization; and (3) preservation of uniform subgroup accuracy, we showed that CIM achieves good performance despite the presence of spurious correlations in the data and outperforms most relevant baselines. Additionally, we demonstrated that CIM is complementary to other representation learning frameworks such as VIB. For future work, it would be interesting to test different types of distance metrics for the triplet loss, to explore whether CIM can be used as an effective way to learn views for unsupervised contrastive learning, and to investigate label-free approaches for learning the input transformations. A ADDITIONAL EXPERIMENTAL DETAILS A.1 ARCHITECTURES In Figure 5, we show the detailed TN architectures used for RGB and point-cloud data. A.2 HYPERPARAMETER CONFIGURATIONS AND TRAINING DETAILS Variational Information Bottleneck (VIB). We used ResNet-50 as the encoder in VIB because most methods we compare CIM with are based on ResNet-50. We tested three different settings for VIB after the encoder: (a) apply KL regularization on the encoder’s last layer Lf of size (1, 2048) and compute the cross-entropy loss on the regularized feature vector; (b) apply KL on the feature vector similar to (a), but add 3 fully connected layers of (1024, ReLU, batch normalization), (512, ReLU, batch normalization), and (256, ReLU), then calculate the cross-entropy loss; (c) add a fully connected layer of size 512 after Lf, then follow the same steps as in (a). For colored MNIST we used architecture (c) and trained the model using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. For CelebA and Waterbirds, we used architecture (b) with the Adam optimizer, a learning rate of 0.001, and a batch size of 64. For all the above experiments we set the weight of the KL regularization term to 0.001 and the standard deviation to 0.1. InfoMask. We used the default architecture (Taghanaki et al., 2019) except for changing the encoding part to be ResNet-50. For CelebA experiments, we used the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. For Waterbirds, we trained the model using the SGD optimizer with a learning rate of 0.001 and a momentum of 0.9. Similar to VIB, we set the KL term weight to 0.001 and the standard deviation to 0.1. We tested different threshold values for the masking function and obtained the best results with just soft masking, i.e., when the threshold is set to zero. Point Cloud Experiments. For PointNet, we used the Adam optimizer with a learning rate of 0.0001 and a batch size of 32. We trained both the original and CIM-based model with rotated and jittered input data. Colored MNIST. We resized images to (64 × 64 × 3) and trained all the models using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. For VIB, we set the KL divergence contribution weight to 0.001. Domain Generalization. We use ResNet-18 as the backbone to make a fair comparison with the state-of-the-art. We train CIM using the Adam optimizer with a learning rate of 0.0001 and a batch size of 64. We use the same training and test splits as those used in (Carlucci et al., 2019). For CIM-based models, we set λ = 0.0001; other hyper-parameters are summarized in Table 4. To control the level of input re-weighting, we minimize negative entropy on m with a Lagrangian multiplier ζ = 0.00001.
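For reference, a minimal sketch of the VIB objective under the settings above (KL weight 0.001, fixed standard deviation 0.1); the encoder architecture and where the noise is injected follow the variants (a)-(c) described in the text, and the exact implementation may differ.

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean cross-entropy for integer class labels."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def vib_objective(logits, labels, mu, kl_weight=1e-3, sigma=0.1):
    """Sketch of the VIB objective: task cross-entropy plus a KL term pushing
    the encoder distribution q(z|x) = N(mu, sigma^2 I) towards a standard
    normal prior. With a fixed sigma the KL has the closed form below."""
    kl = 0.5 * np.sum(mu ** 2 + sigma ** 2 - 1.0 - 2.0 * np.log(sigma), axis=-1)
    return softmax_cross_entropy(logits, labels) + kl_weight * kl.mean()
```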
B ADDITIONAL EXPERIMENTAL RESULTS B.1 BACKGROUND CHALLENGE We include for completeness the entirety of the results from (Xiao et al., 2020). We note that our results are not directly comparable with those from other architectures (e.g. WRN-50x2), as we used ResNet-50 as our base classifier. B.2 3D POINT CLOUD CLASSIFICATION In Table 6, we report the classification results on normal and rotated objects. As the first row of the table summarizes, PointNet performs well on average on the 40 classes. However, when we increase spurious correlations by rotating the objects, class-wise accuracies significantly drop, resulting in a 16.1% performance degradation in the average accuracy of the model (second row). After applying CIM, the spurious correlation between different categories is reduced, and thus the class-wise accuracy of challenging objects improves (third row). B.3 ABLATION STUDIES We conduct an ablation study on the CelebA dataset to study the effect of the Gramian-based contrastive loss. As shown in Table 7, we find that learning a simple attention-like weighting matrix without any regularization performs better than ERM. We also observed that having both positive and negative samples in the TN’s loss function performs better compared to having only positives or negatives. It is worth mentioning that the negative samples have a greater impact on performance than the positives.
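For reference, the worst-group accuracy metric reported in the CelebA and Waterbirds comparisons (Sec. 4.3 and the ablations above) can be computed as in the following sketch; group identifiers are assumed to be available only for evaluation, since CIM does not use them during training.

```python
import numpy as np

def worst_group_accuracy(preds, labels, groups):
    """Worst-group accuracy: the minimum classification accuracy over the
    subgroups (e.g. combinations of target label and spurious attribute).
    `groups` holds an integer subgroup id per example."""
    accs = []
    for g in np.unique(groups):
        idx = groups == g
        accs.append(float((preds[idx] == labels[idx]).mean()))
    return min(accs)
```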
1. What is the main contribution of the paper, and how does it differ from other attention-based mechanisms in computer vision? 2. How effective is the contrastive loss adopted in the paper, and how does it improve the performance of the model? 3. Are there any limitations or areas for improvement in the experimental results presented in the paper? 4. How does the number of channels in the representation \Phi affect the performance of the method? 5. Can you provide more clarity on the training procedure and the notation used in Equation 4? 6. Is there a connection between the first line in Figure 2 and the procedure described in Equation 4? 7. How does the proposed method improve the generalization when learning on a domain and testing on other domains? 8. What are the implications of the improvements in the worst group accuracy, and how do they relate to the best-group accuracies?
Review
Review The paper proposes an approach to mask parts of the input samples, so as to train NN models that are more robust to spurious background information and/or domain changes. The proposed architecture is quite similar to the ones implementing the attention mechanisms in many fields of computer vision (e.g. segmentation, or re-ID). The architecture presented in Figure 1 indeed involves a side NN branch to predict a single channel (soft) mask, in charge of weighting the input sample features. Here, the input channels are directly weighted by the mask (while weighting might affect arbitrary intermediate features in the more general attention-based mechanisms). The main originality of the paper lies in the contrastive loss adopted to constrain the representation \Phi used to derive the mask (see Fig. 1). The experimental results are convincing. However, it is not clear that the benefit demonstrated in these experiments is due to the contrastive input morphing, as claimed in the paper title. This is because no result is presented to discuss how the model behaves when \lambda is set to zero, i.e. when the contrastive loss is not exploited and only the attention mechanism is implemented. Hence, it is not possible to conclude that the benefit observed in the experimental results is related to the contrastive nature of the transformation learnt by the side branch. Those additional results are definitely needed to support the claim made about contrastive learning, and would change the score from 'marginally below' to 'marginally above' (or even to 'Good paper, accept' if some of the clarity issues pointed out below are solved). A number of additional issues, whilst less critical, probably also deserve a deeper investigation: • Why does the representation \Phi consider the same number of channels as the inputs? Does the performance of the method change when changing the number of channels C in \Phi? • The training procedure description lacks clarity. Defining the dimensions of the involved variables would certainly help. Moreover, it is not clear how the terms in Eq(4) should be interpreted: what does || M , M* ||^2 denote when M and M* are two matrices? • Is there a link between the first line in Figure 2 and the procedure described in Eq(4)? Or does this line just correspond to the training of an autoencoder without being connected to the contrastive loss? • In Section 4.2, the experiment demonstrates that training a model with the proposed attention-based mechanism improves performance compared to other models in the particular case of a dataset merging multiple domains. However, the methodology adopted in this section does not demonstrate that the proposed method helps in improving the generalization when learning on one domain and testing on other domains. This second question would be more relevant to address in the experiments. • In Section 4.3, end of first paragraph, should ‘group’ be replaced by ‘subgroup’ in ‘…well the subgroup level even without explicit group label information’?
ICLR
Title Early-Stopping for Meta-Learning: Estimating Generalization from the Activation Dynamics Abstract Early-stopping, a fundamental element of machine learning practice, aims to halt the training of a model when it reaches optimal generalization to unseen examples, right before the overfitting regime on the training data. Meta-Learning algorithms for few-shot learning aim to train neural networks capable of adapting to novel tasks using only a few labelled examples, in order to achieve good generalization. However, current early-stopping practices in meta-learning are problematic since there may be an arbitrarily large distributional shift between the meta-validation set coming from the training data, and the meta-test set. This is even more critical in few-shot transfer learning where the meta-test set comes from a different target dataset. To this end, we empirically show that as meta-training progresses, a model’s generalization behaviour on a target distribution of novel tasks can be estimated by analysing the dynamics of its neural activations. We propose a method for estimating the optimal early-stopping time from the neural activation dynamics of just a few unlabelled support examples from the target distribution, and we demonstrate its performance with various meta-learning algorithms, few-shot datasets and transfer regimes. 1 INTRODUCTION Deep Learning research has been successful at producing algorithms and models that, when optimized on a distribution of training examples, generalize well to previously unseen examples drawn from that same distribution. Meta-Learning is, in a way, a natural extension of this aim, where the model has to generalize to not only new data points, but entirely new tasks. Important practical progress has been made in this direction over the past few years. Yet the phenomena underlying the transition of a neural network’s generalization to novel tasks, from the underfitting to the overfitting regime, with optimal generalization happening in between, remain poorly understood. Early-stopping, a fundamental element of machine learning practice, maximizes generalization by aiming to halt the training at the frontier between those two regimes, when generalization is optimal. It is computed on a validation set, made of held out examples from the training data, which serves as a proxy for the test data. As a regularizer, “Early-stopping should almost be used universally. [...] It is probably the most commonly used form of regularization in deep learning. [...] a very unobtrusive form of regularization, in that it requires almost no change in the underlying training procedure” (Goodfellow et al., 2016). However, in meta-learning, implementing early-stopping is problematic since there may be an arbitrarily large distributional shift between the meta-validation tasks (drawn from the training data) and the meta-test tasks. Moreover, meta-learning typically involves learning a new task from very few labelled examples, too few to allow constituting a validation set from it. In this work, we study the relation between generalization in Meta-Learning and neural activation dynamics: given a neural network and a set of input examples, the network’s responses measured at all of its hidden layers are what we define as the neural activations, and the evolution of those responses during the learning time (meta-training) is what we define as the neural activation dynamics. The main contributions of our work can be summarized as follows: 1.
We empirically show that in Meta-Learning, a simple function of the neural activation dynamics, for just a few unlabelled target examples, can reveal the variation of generalization to a distribution of novel target tasks (Sec.2.2), and how this function can be learned (Sec.3). 2. We propose a novel method for early-stopping in Meta-Learning, applied in many settings of Few-Shot Learning and Few-Shot Transfer Learning (Sec.5). 2 META-LEARNING AND FEW-SHOT CLASSIFICATION Meta-Learning algorithms generally aim to train a model f(x; θ) on a set of source problems, often presented as a distribution over tasks p(Ttrain), in such a way that the model is capable of generalizing to new, previously unseen tasks from a target distribution p(Ttarget). When applied to classification, meta-learning has often been formulated in the past by defining a task T that involves the m-way classification of input examples x among m distinct classes. The tasks from p(Ttrain) and p(Ttarget) are made of classes drawn from two disjoint sets Ctrain and Ctarget. A novel task thus involves new classes not seen during training. In few-shot learning, the inputs x of the training and target tasks come from the same input distribution p(x) (e.g., an image dataset) but conditioned on their respective classes, i.e. p(xtrain) = p(x|y ∈ Ctrain) and p(xtarget) = p(x|y ∈ Ctarget). The few-shot aspect means that for a given novel task Ttarget, only very few labelled examples are available, typically k examples per class, and the model uses this support set of examples $S = \{(x, y)\}_{1..k}$ to adapt its parameters θ to the task; its accuracy is then evaluated on new query examples from Ttarget. The meta-learning generalization Acctarget, for a model f(x; θt) at time t (after t training iterations) to a distribution p(Ttarget), is thus the query accuracy averaged over multiple target tasks: $\mathrm{Acc}_{target} \doteq \mathbb{E}_{T_i \sim p(T_{target})}\big[\,\mathbb{E}_{(x,y) \sim T_i \setminus S_i}\big[\,\mathbb{1}\{\arg\max(f(x; \theta^i_t)) = y\}\,\big]\big]$ (1) where for each new task $T_i$ the adapted solution $\theta^i_t$ is often obtained by performing T steps of (full-batch) gradient descent on the cross-entropy loss $L(f, S_i)$ with respect to $\theta_t$. In few-shot transfer learning, not only are the class sets Ctrain and Ctarget disjoint, but the marginal p(xtarget) can be arbitrarily different from p(xtrain) (e.g. from a different image dataset). 2.1 EARLY-STOPPING BASED ON VALIDATION SET PERFORMANCE CAN LEAD TO SUB-OPTIMAL GENERALIZATION IN META-LEARNING In a standard supervised learning setup, a subset of examples is held out from the training data to constitute a validation set. Since the validation accuracy is a good proxy for the test accuracy, early-stopping is performed by halting training when the validation accuracy reaches its maximum. In Meta-Learning for few-shot classification, the validation set is made of held out classes from the training data to constitute the validation task distribution p(Tvalid), and early-stopping happens at $t^*_{valid} = \arg\max_t \mathrm{Acc}_{valid}$. But this can lead to a sub-optimal generalization (see Fig.2) because of the potential distributional shift between p(Ttarget) and p(Tvalid), especially in few-shot transfer learning where it can be arbitrarily large. Estimating the out-of-distribution generalization Acctarget in Meta-Learning thus requires some minimal amount of information about p(Ttarget). However, the few-shot paradigm severely restricts the availability of data from p(Ttarget).
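For concreteness, the generalization measure of Eq. (1) is typically estimated by Monte-Carlo sampling of target tasks, as in the following sketch; the `sample_task` and `adapt` helpers are hypothetical placeholders for the episode sampler and the task-adaptation procedure.

```python
import numpy as np

def estimate_target_accuracy(model, sample_task, adapt, n_tasks=600):
    """Monte-Carlo estimate of Eq. (1): query accuracy averaged over tasks drawn
    from p(T_target). `sample_task` returns a (support, query) pair of labelled
    sets; `adapt` returns a copy of the model fine-tuned on the support set
    (e.g. a few full-batch gradient steps) that maps inputs to class logits."""
    accs = []
    for _ in range(n_tasks):
        (xs, ys), (xq, yq) = sample_task()
        adapted = adapt(model, xs, ys)            # task-specific solution theta_t^i
        preds = np.argmax(adapted(xq), axis=1)    # predictions on the query set
        accs.append(np.mean(preds == yq))
    return float(np.mean(accs))
```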
The support examples from target tasks are accessible, but the model doesn’t control how many new tasks will actually be presented; there could be several thousands or very few. However, if there is a need to early-stop and generalize to some target task distribution p(Ttarget), then the model will need to solve, at the very least, a single task from p(Ttarget), and thus has access to at least a single support set S. We thus propose to only use a few examples, typically the support set of a single new task (e.g. 5 images). This also implies that any algorithm estimating the optimal early-stopping time t∗ should have a very low sample-wise (and task-wise) variance for its estimate of t∗. 2.2 CAN NEURAL ACTIVATION DYNAMICS FOR A FEW TARGET INPUTS ALLOW US TO MAKE INFERENCES ABOUT GENERALIZATION? In this work we search for an observable property of deep neural networks that can help us make inferences about meta-learning generalization to a given target problem as training time t progresses. We thus hypothesize the existence of a function ψ of f(x; θt) and p(Ttarget) such that ψ(f, p(Ttarget), t) ∝ Acctarget(t), and set out to find ψ. More specifically, we want to estimate $t^* = \arg\max_t \mathrm{Acc}_{target}(t)$ using only a few target examples (a single support set) when approximating ψ. To support a general statement on generalization in Meta-Learning and the nature of ψ, we conducted experiments across a wide range of meta-learning settings. We used different meta-learning algorithms, three of the most pivotal ones in the field: MAML (Finn et al., 2017), Prototypical Networks (Snell et al., 2017), and Matching Networks (Vinyals et al., 2016). We considered both the few-shot learning and few-shot transfer learning regimes, with 1-shot and 5-shot experiments, and various few-shot datasets for p(Ttrain) and p(Ttarget), such as MiniImagenet and Omniglot, but also many others included in Meta-Dataset (Triantafillou et al., 2020). We also used different architectures: the standard 4-layer CNN proposed by (Vinyals et al., 2016), as well as a ResNet as used in (Triantafillou et al., 2020). For full experimental details, refer to Appendix A. Here we present the experimental results that progressively suggest that the variation of generalization can be efficiently estimated from simple metrics on the neural activation dynamics: Observation 1: For a deep neural network, the variation of target generalization, as a function of training time, frequently correlates with simple statistics characterizing how its feature extractor responds to the target input distribution: In many meta-learning settings we observed that Acctarget is proportional to relatively simple metrics (denoted as ψ1). One such metric is the expected inner product between representations: $\psi_1(\phi(X)) \doteq \mathbb{E}_{x_i, x_j \sim p(x)}[\phi(x_i)^T \phi(x_j)]$, (2) where ψ1 is measured at the output of the feature extractor ϕ, where f(x) = g(ϕ(x)), and captures both the similarity among representation vectors and their norm. Moreover, we measure ψ1 on the representations of the target inputs Xtarget, before adapting the model to new tasks. This relation seems approximately independent of the target class identities Ytarget, and predominantly depends on how the feature extractor represents the marginal distribution over the input Xtarget of the target problem, i.e. ψ1(ϕ(Xtarget), t) ∝ Acctarget(t). Example in Fig.4, complete results in App.B.2.1. Figure 4: Average target task accuracy as a function of training iteration: Acctarget(t).
Observation 1: The variation of generalization (Acctarget), for a deep neural network, frequently correlates with simple statistics characterizing how its feature extractor ϕ responds to the target input distribution p(xtarget). For example, here we show ψ1, which is simply the expected inner product between individual representation vectors, which follows the same trend as Acctarget(t) and peaks roughly at the same time. Here ψ1 is vertically rescaled to match the range of the target accuracy. See App.B.2.1 for full experiments across multiple settings. Observation 2: A simple statistic on the neural activations, if computed at the right layer of a network, can often strongly correlate with generalization, but this layer may change depending on the setting. In a deep neural network a feature extractor is composed of L hidden layers: $\phi(x) = (\phi_L \circ \phi_{L-1} \circ \dots \circ \phi_1)(x)$. In many settings, ψ1 measured at the last layer ϕL isn’t proportional to Acctarget, but the relation instead occurs at a lower layer ϕl. Thus, rather than just examining the last layer representation dynamics, we often need to consider the neural activations of the whole feature extractor (see Fig. 3 and Eq. 3). More precisely, we shall consider the evolution throughout time, or the neural activation dynamics, of a network, i.e.: ψ(Φ(Xtarget, t)) ∝ Acctarget(t), or expressed in the form of Eq. 4. See an example in Fig.5, or App.B.2.2 for full experiments across multiple settings. $\Phi(X) \doteq \{\phi_l(X) \mid l \in [1..L]\}$ (3) $\Phi(X, t) \doteq \Phi(X \mid \theta_t)$ (4) Observation 3: Simple statistics of the activations correlate with generalization, but they may change. One needs to find the right statistic depending on the setting. In our experiments we observe that for many settings Acctarget(t) doesn’t consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l. From this we conjectured that perhaps ψ1 is a special case in a more general function space Ψ, a hypothesis space or set of functions that predict generalization, or more formally: $\Psi \doteq \{\psi \mid \psi(\Phi(X_{target}, t)) \propto \mathrm{Acc}_{target}(t)\}$ where $|\Psi| > 1$. From this perspective we ultimately care about finding the function ψ in Ψ that, given the meta-learning setting involved, minimizes the true objective d defined below: $d = \max_t(\mathrm{Acc}_{target}(t)) - \mathrm{Acc}_{target}(\arg\max_t \psi(\Phi(X_{target}, t)))$ (5) The natural question that follows is: what may be the characteristics of such a function space Ψ? We address this by formulating a few inductive biases and assumptions which then inform our subsequent experiments. We first note that the complexity of Ψ must be large enough so that, in most meta-learning settings in the few-shot regime, Ψ contains a good solution function ψ∗ such that d is low. The complexity shouldn’t be too large either, since to find ψ∗ we will optimize an indirect empirical objective d̂ (Sec.3). This is especially important in few-shot transfer learning. Furthermore, since Φ(Xtarget) itself has a probability distribution, our hypothesis space Ψ should be a set of functions ψ that are sample estimators of some population statistics of the distribution of Φ(Xtarget). However, since we only have access to very few samples, those statistics should be relatively simple so as to keep down the standard error of their estimators. We propose to use descriptive statistics based on moments, and limit them up to the second order (higher-order moments are harder to estimate accurately).
Finally, since we ultimately need to find a one-dimensional curve ψ∗(t) to compare to Acctarget(t), our hypothesis space Ψ should contain scalar-valued functions, which we get by computing moments on norms of the activation vectors. In our experiments we observe that when Acctarget(t) doesn’t consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l, it does typically correlate with one of the following alternative metrics: the norm of activations ψ2; the dispersion of activations ψ3; or the feature-wise variance of activations ψ4. We have observed that generalization sometimes actually correlates with the negative (i.e. −ψ) of either of ψ1 to ψ4. $\psi_2 \doteq \mathbb{E}_x[\|\phi_l(x)\|_2^2]$ (6) $\psi_3 \doteq \mathbb{E}_x[\|\phi_l(x_i) - \phi_l(x_j)\|_2^2]$ (7) $\psi_4 \doteq \mathbb{E}_x[\mathrm{Var}_k(\phi_l(x_i)_k)]$ (8) All the metrics ψ1 to ψ4 and their negatives can actually be expressed by a linear combination of the following moments (assuming ReLU activation functions): $m_1 = \frac{1}{n}\sum_{i=1}^{n}\|\phi_l(x_i)\|_1^2$, $m_2 = \frac{1}{n}\sum_{i=1}^{n}\|\phi_l(x_i)\|_2^2$, $m_3 = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n}\|\phi_l(x_i) - \phi_l(x_j)\|_2^2$ (9) such that those moments define the function space $\Psi = \{\psi(\phi_l(X); w) \mid w \in \mathbb{R}^3, l \in [1..L]\}$ where $\psi(\phi_l(X); w) = w_1 m_1 + w_2 m_2 + w_3 m_3$ and $w = [w_1, w_2, w_3] \in \mathbb{R}^3$. This parametric function space Ψ, while being relatively simple, can express a variety of properties of activations, such as their norm, dispersion, feature-wise variance, inner product, positively or negatively, or even a combination of properties. In Tab.3 of App. B.3 we have experimentally verified that Ψ has enough complexity to contain a good solution function ψ∗, for many meta-learning settings, both in few-shot learning and few-shot transfer learning. Observation 4: The variation of generalization can be estimated by using just a few target input examples: Given a function ψ(ϕl(Xtarget)) which correlates with generalization Acctarget(t), when ψ is measured on the activation dynamics of just a single unlabelled support set S, the estimated early-stopping time $t^*_\psi = \arg\max_t \psi(t)$ typically shows very low variance with respect to which task is used for the estimation (Fig. 7). We conjecture that this might be due to the lack of dependency of ψ on Ytarget, where ψ captures a more general property of the activations for p(xtarget). This makes early-stopping from such a function ψ practical. 3 INFERRING WHICH FUNCTION OF THE NEURAL ACTIVATION DYNAMICS CORRELATES TO GENERALIZATION, AND AT WHICH LAYER TO MEASURE IT Our results in Sec.2.2 suggest that in Meta-Learning there exists a function ψ that, when measured on the neural activation dynamics Φ(Xtarget, t), closely relates to the target generalization Acctarget(t). However, since this function is not unique and depends on the meta-learning setting involved (meta-learning algorithm, neural architecture, training and target distributions, etc.), we propose to cast the discovery of ψ as a machine learning problem. See Fig. 8, which schematizes our framework. At this point we know that, given a meta-learning setting, our function space Ψ should contain a good solution ψ∗ such that the true objective d is low. Now we need a way to actually find ψ∗. We can do so by optimizing an indirect, empirical objective d̂, defined below. 3.1 Few-Shot Learning (FSL): Inferring ψ∗ and l∗ from the validation dynamics and accuracy In few-shot learning, novel tasks from p(Ttarget) involve previously unseen classes but the input domain of Xtarget can be assumed to be similar to that of Xtrain, and therefore to that of Xvalid.
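As a concrete illustration of this moment-based family, the statistics of Eqs. (6)-(9) and the parametric combination ψ(ϕl(X); w) can be computed as in the following sketch; the array layout is an assumption (activations for the few unlabelled inputs flattened into an (n, d) matrix per layer).

```python
import numpy as np

def activation_moments(acts):
    """Moments of Eq. (9) for an (n, d) matrix of activations at one hidden
    layer: mean squared L1 norm, mean squared L2 norm, and mean squared
    pairwise L2 dispersion."""
    m1 = np.mean(np.abs(acts).sum(axis=1) ** 2)
    m2 = np.mean((acts ** 2).sum(axis=1))
    diffs = acts[:, None, :] - acts[None, :, :]
    m3 = np.mean((diffs ** 2).sum(axis=-1))
    return np.array([m1, m2, m3])

def psi(acts, w):
    """Parametric statistic psi(phi_l(X); w) = w1*m1 + w2*m2 + w3*m3. For
    instance, w = [0, 1, -0.5] recovers the expected inner product of Eq. (2)
    up to how the i = j terms are handled; other weightings give the norm,
    dispersion and variance style metrics psi_2 to psi_4 (or their negatives)."""
    return float(np.dot(w, activation_moments(acts)))
```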
We thus use the dynamics Φ(Xvalid, t) and the validation accuracy Accvalid (as a proxy for Acctarget) in order to learn the optimal function ψ∗ and the layer l∗ where it should be measured, and we do so by minimizing the empirical objective $\hat{d}_{FSL}$ (Eq.10). We then compute our actual early-stopping time estimate $\hat{t}^*_{FSL}$ when $\psi^*(\phi_{l^*}(X_{target}, t))$, measured on the few support input examples of a single target task, reaches its peak (Eq.11). $\hat{d}_{FSL} = \max_t \mathrm{Acc}_{valid}(t) - \mathrm{Acc}_{valid}(\arg\max_t \psi(\phi_l(X_{valid}, t); w))$ (10) $\hat{t}^*_{FSL} = \arg\max_t \psi(\phi_{l^*}(X_{target}, t); w^*)$ where $w^*, l^* = \arg\min_{w,l} \hat{d}_{FSL}$ (11) 3.2 Few-shot transfer learning (FSTL): Meta-overfitting often happens when the target dynamics diverge from those of the source input domain When the target problem is from an entirely new dataset, we can’t use Accvalid as a proxy for Acctarget, and we need another objective function to learn ψ∗. However, we can learn ψ∗ by analyzing Φ(Xtarget, t), the neural activation dynamics of the target domain, and comparing them with Φ(Xvalid, t). Assume that for a given target problem, optimal generalization doesn’t happen at the same time as for the source domain, i.e., $t^* \neq t^*_{valid}$, and more precisely, assume $t^* < t^*_{valid}$. A generalization curve is generally increasing between $t_0$ and its maximum, and generally decreasing after the maximum. This implies that the curves of Acctarget(t) and Accvalid(t) are positively correlated between $t_0$ and $t^*$, as they are both increasing, whereas they are negatively correlated between $t^*$ and $t^*_{valid}$, since Acctarget(t) is decreasing while Accvalid(t) is still increasing. In a sense, the two generalization behaviors “diverge” at $t^*$, since at that moment their correlation goes from positive to negative (see Fig.9a). Since here we assume the neural activation dynamics can characterize the generalization behavior of a model, we conjecture that Φ(Xtarget, t) and Φ(Xvalid, t) might also “diverge” at $t^*$, under some function ψ(ϕl∗(X, t);w∗), such that the sample Pearson correlation r of ψ(Φ(Xtarget, t),w∗) and ψ(Φ(Xvalid, t),w∗) also goes from positive to negative near $t^*$ (see Fig.9b). Our experiments indeed suggest that functions ψ exhibiting more divergence are more likely to capture generalization to the target problem. This analysis can be found in App.***. We thus search for the weights w∗ and hidden-layer l∗ so as to observe the most negative correlation between ψ(ϕl(Xtarget, t);w) and ψ(ϕl(Xvalid, t);w) in the time interval $[t_0, t^*_{valid}]$ (Eq.12). We then estimate $t^*$ by finding the time $\hat{t}^*_{FSTL}$ when $\psi^*_{target}(t)$ and $\psi^*_{valid}(t)$ diverge (Eq.13). See Fig.10a,10b for a demonstration. $\hat{d}_{FSTL} = r(\psi_{target}(t), \psi_{valid}(t)) = \frac{\sum_t(\psi_{target}(t) - \bar{\psi}_{target})(\psi_{valid}(t) - \bar{\psi}_{valid})}{\sqrt{\sum_t(\psi_{target}(t) - \bar{\psi}_{target})^2}\sqrt{\sum_t(\psi_{valid}(t) - \bar{\psi}_{valid})^2}}$ (12) $\hat{t}^*_{FSTL} = \arg\max_t \big(t \times r\big(\psi^*_{target}, \psi^*_{valid}, [t_0, t < t^*_{valid}]\big)\big)$ (13) with the shorthand notations $\psi_{target}(t) \doteq \psi(\phi_l(X_{target}, t); w)$ and $\psi_{valid}(t) \doteq \psi(\phi_l(X_{valid}, t); w)$, where $\bar{\psi}$ denotes an average over t, and $\psi^* \doteq \psi(\phi_{l^*}(\cdot); w^*)$. Here again we minimize an empirical objective, and $w^*, l^* = \arg\min_{w,l} \hat{d}_{FSTL}$. 4 RELATED WORK In recent years, some works have started to analyze theoretical aspects of gradient-based meta-learning. (Finn et al., 2019) examine the online Meta-Learning setting, where in online learning the agent faces a sequence of tasks, and they provide a theoretical upper bound for the regret of MAML.
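Putting Eqs. (10)-(13) together, the selection of (w∗, l∗) and the resulting early-stopping estimate can be sketched as follows; the data layouts, the candidate grid over w, and the statistic helper (e.g. the moment-based ψ sketched earlier) are assumptions, and the exact search used in the experiments may differ.

```python
import numpy as np
from itertools import product

def select_and_stop_fsl(valid_dynamics, valid_acc, target_dynamics, w_grid, statistic):
    """Sketch of Eqs. (10)-(11). Assumed layouts: *_dynamics[l] is a (T, n, d_l)
    array of layer-l activations for a few unlabelled inputs at each of T
    checkpoints, valid_acc is the length-T validation accuracy curve, and
    statistic(acts, w) computes psi(phi_l(X); w). We grid-search (w, l) to
    minimize the empirical gap d_FSL on validation data, then stop where the
    chosen statistic peaks on the target dynamics."""
    best_key, best_gap = None, np.inf
    for l, w in product(valid_dynamics.keys(), w_grid):
        curve = np.array([statistic(acts, w) for acts in valid_dynamics[l]])
        gap = valid_acc.max() - valid_acc[np.argmax(curve)]   # empirical d_FSL
        if gap < best_gap:
            best_key, best_gap = (l, w), gap
    l_star, w_star = best_key
    target_curve = np.array([statistic(acts, w_star) for acts in target_dynamics[l_star]])
    return int(np.argmax(target_curve))                       # estimated t*

def fstl_divergence_score(valid_curve, target_curve):
    """FSTL selection criterion of Eq. (12): the sample Pearson correlation of
    the validation and target statistic curves over [t_0, t*_valid]; the most
    negative score ("strongest divergence") selects (w*, l*)."""
    return float(np.corrcoef(valid_curve, target_curve)[0, 1])
```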
(Denevi et al., 2019) study meta-learning through the perspective of biased regularization, where the model adapts to new tasks by starting from a biased parameter vector, which we refer to in this work as the meta-training solution. For simple tasks such as linear regression and binary classification, they prove the advantage of starting from the meta-training solution, when learning new tasks via SGD. They use an assumption on the task similarity where the weight vectors parameterizing the tasks are assumed to be close to each other. Working in the framework of Online Convex Optimization, where the model learns from a stream of tasks, (Khodak et al., 2019) make an assumption that the optimal solution for each task lies in a small subset of the parameter space and use this assumption to design an algorithm such that the “Task-averaged-regret (TAR)” scales with the diameter of this small subset of the parameter space, when using Reptile (Nichol et al., 2018), a first-order meta-learning algorithm. Bearing a stronger relation to our approach, (Guiroy et al., 2019) empirically study the objective landscapes of gradient-based meta-learning, with a focus on few-shot classification. They notably observed that average generalization to new tasks appears correlated with the average inner product between their gradient vectors. In other words, as gradients appear more similar in inner product, the model will, on average, better generalize to new tasks, after following a step of gradient descent. More recently, a few works have studied the properties of the feature extractor ϕ in the context of Meta-Learning. Notably, the authors of (Raghu et al., 2019) showed empirically that when neural networks adapt to novel tasks, in the few-shot setting with MAML and MiniImagenet, the feature extractor network is approximately invariant, while the final linear classifier undergoes significant functional changes. They then performed experiments where ϕ is frozen at meta-test time, while only the classifier g is fine-tuned, and observed very similar generalization performance to the regular fine-tuning procedure. Intuitively, these results suggest that the variation of generalization along meta-training time t might be predominantly driven by some evolving but unknown property of the feature extractor. The authors of (Goldblum et al., 2020) observed that generalization in few-shot learning was related to how tightly embeddings from new tasks were clustered around their respective classes. However, the authors of (Dhillon et al., 2019) observed that the embeddings at the output of ϕL were poorly clustered around their classes, but that clustering was important when measuring the logit outputs of g. This is similar to what the authors of (Frosst et al., 2019) observed when dealing with new Out-of-Distribution examples. This suggests that if generalization is related to a property of the feature extractor, this property might be class agnostic. This is also something that we observed in our very early experiments (the expected inner product between representation vectors strongly correlated with generalization, irrespective of taking the class identities into account). But in our work we observed that this property might not only depend on the output of the feature extractor. Earlier works demonstrated that in transfer learning, intermediate layers of ϕ might be critical to the ability of the model to transfer knowledge (Yosinski et al., 2014).
5 EARLY-STOPPING FOR META-LEARNING BY ANALYZING THE NEURAL ACTIVATION DYNAMICS OF A FEW TARGET INPUT EXAMPLES Here we present experimental results on the performance of our early-stopping method. For each experiment, we only use the unlabelled input examples from the support set of a single target task to evaluate the neural activation dynamics. At the beginning of an experiment, we thus randomly sample a task Ti from p(Ttarget) and only keep its set of support input examples. We repeat the experiment for multiple (50) independently and identically distributed support sets from p(Ttarget), and take the average performance. Each such experiment is then repeated for 5 independent training runs. As a baseline for comparison, we use the validation early-stopping approach. Since ψ1 to ψ4 work in practice, we will use them as our function space Ψ, but the method that we develop applies as well to the continuous function space defined above; we present experimental results in App.B.5 where we apply our early-stopping method with the continuous function space. We begin by demonstrating our proposed early-stopping method in few-shot transfer learning, across various target datasets, and present the results in Tab.1. We use the standard 4-layer CNN architecture, with MAML, trained on MiniImagenet 5-way 1-shot. When the target dataset is Omniglot, the performance of the validation baseline (51%) is significantly lower than the optimal generalization (76%), presumably because of the distributional shift between MiniImagenet and Omniglot. In such a scenario, our method appears to offer a significant advantage over the baseline, since we obtain 75% in target accuracy, quite close to the optimal generalization. In scenarios where the target domain is arguably more similar to that of the source domain, e.g. transfer from MiniImagenet to Imagenet, early-stopping from the validation accuracy yields a performance (35.0%) closer to the optimal generalization (35.6%), and in such a case our method performs only slightly worse (34.8%) than the validation baseline. We observe a similar trend when the model is trained on the Quickdraw dataset: when transferring to Omniglot, the validation baseline leads to sub-optimal generalization, but estimating the target accuracy from the neural activation dynamics allows us to halt the training close to the optimal time. When transferring to Traffic Sign, the baseline yields reasonable performance, and our method is roughly on par with it. From this point, we will focus on settings where there is a significant gap in performance between the validation baseline and optimal generalization, for example the transfer from Birds to Quickdraw, as presented in our illustration of Sec.2.1. Next we present similar experiments with two other meta-learning algorithms: Prototypical Networks and Matching Networks, which are shown in Tab.2. 6 CONCLUSION In this work we have presented empirical evidence that the overfitting point of Meta-Learning for deep neural networks for few-shot classification can often be estimated from simple statistics of neural activations and how they evolve throughout meta-training time. Our results suggest that key properties, or statistics, of how feature extractors respond to the target input distribution can be found which are simple enough to be estimated from just a few unlabelled target input examples. However, the specific function of the activations, and the layer at which to measure them, need to be inferred.
We demonstrate that these functions and layers of interest can be inferred and used to guide early stopping – leading to a new, and effective, method for early stopping which represents a significant departure from the de facto standard practice of using a validation set. In few-shot learning these ingredients can be inferred from how the neural activation dynamics of the validation data relate to the validation accuracy. In few-shot transfer learning, they are inferred by searching for the function (in a given function space) and the layer at which the activation dynamics of the target input domain “diverge” the most from those of the source domain. Finally, we have demonstrated how this approach can be used to optimize for target generalization in practice to perform early-stopping and thus improve overall generalization to distributions of novel few-shot classification tasks, while only using unlabelled support examples from a single target task. A EXPERIMENTAL DETAILS CNN: We use the architecture proposed by Vinyals et al. (2016), also used by Finn et al. (2017), consisting of 4 modules stacked on each other, each composed of 64 filters of 3 × 3 convolution, followed by a batch normalization layer, a ReLU activation layer, and a 2 × 2 max-pooling layer. With Omniglot, strided convolution is used instead of max-pooling, and images are downsampled to 28 × 28. With MiniImagenet, we used fewer filters to reduce overfitting: we used 48, whereas MAML used 32. As a loss function to minimize, we use cross-entropy between the predicted classes and the target classes. ResNet-18: We use the same implementation of the Residual Network as in (Triantafillou et al., 2020). For most of the hyperparameters, we follow the setup of (Triantafillou et al., 2020), but we set the main few-shot learning hyperparameters so as to follow the original MAML setting more closely, and in each setting, we consider a single target dataset at a time, with a fixed number of shots and classification ways. We use 5 steps of gradient descent for the task adaptations, and 15 query shots to evaluate the test accuracy of tasks. We don’t use any learning rate decay during meta-training, and use a step-size of 0.01 when fine-tuning the models on new tasks. Datasets: We use the MiniImagenet and Omniglot datasets, as well as the many datasets included in the Meta-Dataset benchmark (Triantafillou et al., 2020). B COMPLETE EXPERIMENTAL RESULTS B.1 THE ISSUE OF USING A VALIDATION SET FOR EARLY-STOPPING IN META-LEARNING B.2 THE RELATION BETWEEN THE NEURAL ACTIVATION DYNAMICS AND GENERALIZATION TO NOVEL TASKS B.2.1 RELATION BETWEEN THE REPRESENTATION SPACE OF THE FEATURE EXTRACTOR AND TARGET GENERALIZATION Here we present experimental results to support Observation 1 from Sec. 2.2, showing that the variation of generalization along meta-training time can be captured by a function of the neural activation dynamics that is independent of class labels. B.2.2 NEURAL ACTIVATION DYNAMICS: DIFFERENT LEVELS OF THE FEATURE EXTRACTOR CAN REVEAL THE VARIATION OF GENERALIZATION B.2.3 DIFFERENT FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS CAN REVEAL THE VARIATION OF GENERALIZATION By expanding the experimental setup further, we observed instances where a given metric had a strong correlation with generalization but in a negative sense, i.e. it was actually its argmin that coincided with the optimal early-stopping time t∗. See Fig. 14 for examples of this phenomenon.
We later observed that other statistical estimators can correlate with generalization. [Figure 15, four panels: (a) exclusive correlation between a specific metric and generalization; (b) expected l2 norm; (c) expected l2 dispersion; (d) expected feature-wise variance, each plotted against Acctarget(t) over training iterations.] Figure 15: Different metrics of the representation space may have strong correlation with generalization, other than the expected inner product of Eq. 2. (a) Prototypical Network, VGG Flower, 5-way 1-shot: out of three metrics which in other cases may be related with generalization (as in (b), (c), (d) and Sec. ??), here only the expected l2 dispersion has a strong relation with generalization. (b) Expected l2 norm (Eq. 6); (c) Expected square l2 dispersion (Eq. 7), Prototypical Network, VGG Flower; (d) Expected feature-wise variance (Eq. 8), Prototypical Network, Omniglot to Quickdraw. These results motivate our approach of considering a family of functions Ψ in which we must find the optimal function ψ∗ given the setting, rather than trying to discover a single universal metric that would correlate with generalization in all scenarios. Even if such a metric exists, it may not be estimated with enough efficiency to satisfy the requirement of using only a single support set to estimate t∗. B.2.4 FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS: TASK-WISE VARIANCE OF THE ESTIMATE $t^*_\psi$ Here we present empirical results on the task-wise variance as discussed in Observation 4 of Sec. 2.2. We begin by showing the task-wise variance for few-shot accuracy when evaluated with a single target task and assuming access to the query examples (15 shots). Few-shot accuracy exhibits a high variance as different tasks will peak at very different times, making it unfit to estimate t∗. On the other hand, for the metrics from Sec. 2.2, which are based on low-order statistics (mean and variance), the estimated early-stopping time exhibits drastically lower variance. See Fig. 16 for an example, where we use MAML in few-shot learning (5-way 1-shot) with the Aircraft dataset, and where we use the expected square l2 norm for the metric. As we can see in Fig. 16, measuring the metric on different tasks merely offsets the response curve but causes almost no change in the trend of the curve itself. This also relates to our assumption that the variation of target generalization in Meta-Learning might be linked to a function of the neural activation dynamics that is class agnostic. B.3 CAPACITY OF THE CONTINUOUS FUNCTION SPACE Ψ (DEFINED BY THE THREE MOMENTS OF EQ.9) TO CONTAIN GOOD SOLUTIONS ψ∗ The moments $m_1$, $m_2$ and $m_3$ of Eq.9 define the parametric function space $\Psi = \{\psi(\phi_l(X); w) \mid w \in \mathbb{R}^3, l \in [1..L]\}$ where $\psi(\phi_l(X); w) = w_1 m_1 + w_2 m_2 + w_3 m_3$ and $w = [w_1, w_2, w_3] \in \mathbb{R}^3$. We have experimentally observed that this parametric function space Ψ has enough complexity to contain a good solution function ψ∗, for different meta-learning settings, both in few-shot learning and few-shot transfer learning, as shown in Tab.3.
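A sketch of the task-wise variance analysis of App. B.2.4, assuming per-support-set activation dynamics have been recorded at the chosen layer; the statistic helper is the same kind of moment-based function as sketched above.

```python
import numpy as np

def stopping_time_spread(dynamics_per_task, w, statistic):
    """Task-wise variability of the estimated early-stopping time: for each
    candidate support set, trace the statistic over the T recorded checkpoints
    and take its argmax; a small spread across support sets means a single
    task suffices for the estimate. dynamics_per_task[k] is assumed to hold
    the (T, n, d) activations recorded for support set k."""
    t_hats = [int(np.argmax([statistic(acts, w) for acts in dyn]))
              for dyn in dynamics_per_task]
    return float(np.mean(t_hats)), float(np.std(t_hats))
```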
B.4 LEARNING ψ∗ IN FEW-SHOT TRANSFER LEARNING B.4.1 FINDING w∗ AND l∗ WHERE ψtarget(t) AND ψvalid(t) “DIVERGE” THE MOST As we added more experimental settings for Few-Shot Transfer Learning, we observed instances where, for a given metric measured at the representation space ϕL, there was no strong link with generalization, but when measuring the metric at lower hidden layers than ϕL, we observed a strong correlation with generalization. We illustrate this in Fig. 17. These results motivated our approach of considering the whole neural activation dynamics (all layers), rather than the final layer of the feature extractor alone, in our search for functions linked to generalization. Then, in Fig.18, we conducted a more systematic analysis concerned with identifying the right functions ψ∗, with results suggesting that functions ψ showing stronger “divergence” (negative correlation) between the target and validation dynamics are more likely to lead to a higher target accuracy if we stop at their peak time. (a) Transfer: Omniglot to MiniImagenet, MAML. The critical depth, i.e. the one where measuring the expected inner product (here marked as RIP) predicts generalization on the target domain, is at layer 1, even though the critical depth for the source domain was at layer 4. (b) Transfer: MiniImagenet to Omniglot, MAML. The critical depth for the target domain is at layer 4, the same as for the source domain. Figure 17: Generalization can correlate with a metric at different levels of neural activations. Here the critical layer l∗ (squared in red) is identified by searching for the highest divergence between the validation and target neural activation dynamics. Table 4: Correlation between D(ψtarget, ψvalid) and generalization, for different few-shot transfer learning settings. MAML (Quickdraw → Omniglot): 0.82; Prototypical Network (Omniglot → Quickdraw): 0.75; Matching Network (MiniImagenet → Omniglot): 0.81. The correlation is computed as in the analysis of Fig. 18c. The results show that functions exhibiting high divergence between the validation and target neural activation dynamics are likely to lead to good generalization performance on the target distribution. B.5 EVALUATING THE PERFORMANCE OF OUR EARLY-STOPPING METHOD WHEN USING THE CONTINUOUS FUNCTION SPACE Ψ DEFINED BY THE THREE MOMENTS OF EQ.9 Here we present a few experimental results where we apply our early-stopping method in the continuous function space Ψ. Since there are only three weights to tune, namely w1, w2 and w3, we don’t suffer from the curse of dimensionality, which is the classic motivation for using gradient-based optimization of neural networks with many parameters. This allows for a search-based optimization of w. [Figure, four panels: (a) functions with high divergence between valid and target dynamics are more likely to achieve higher generalization (corr. coeff. R = 0.422, p-value ≈ 0); (b) average divergence vs. average performance (10 bins);
(c) strong correlation between average divergence and average performance (corr. coeff. R = 0.8255, p-value = 0.0061, 10 bins); (d) solution function measured on the valid and target examples, plotted against Acctarget.] Table 6: Performance of our method - Few-Shot Transfer Learning, Prototypical Network. Algorithm: Matching Network; Source dataset: MiniImagenet; Target dataset: Omniglot; Baseline: 73.75%; Our method: 75%. Table 7: Performance of our method - Few-Shot Transfer Learning, Matching Network.
1. What is the focus of the paper regarding preventing meta-level overfitting? 2. What are the strengths of the proposed approach, particularly in its motivation and performance improvement? 3. What are the weaknesses of the paper, especially in the experimental section? 4. Do you have any concerns regarding the effectiveness of the method in specific scenarios, such as few-task meta-learning?
Summary Of The Paper Review
Summary Of The Paper This paper proposes a learnable early-stopping method for preventing meta-level overfitting. This is especially useful when there exists severe discrepancy between the meta-train / meta-val / meta-test sets. Specifically, they focus on empirical observations showing that simple statistics, such as the expected inner product between pairs of samples, are highly correlated with the meta-test performance. Based on such statistics, they propose learning to predict the optimal stopping time by minimizing a proxy objective with the meta-validation set. The experimental results demonstrate the effectiveness of their method, especially for the meta transfer learning setting, which exhibits severe discrepancy between the meta-val and meta-test sets. Review Pros: The methodology is well motivated by empirical observations. It would have been better if more intuitions were provided on why such correlations take place, but anyway, it is always good to support one's claim based on such meaningful observations. The performance improvement against the "validation baseline" is large, although it is unclear whether it is statistically significant. Cons: [Major] The main weakness of this paper is the experimental section. First of all, there is no meaningful baseline. While I'm not super familiar with this topic, I believe there exist tons of meta-level regularizers readily comparable to this baseline. I understand the authors may argue that it is too much to compare with all the existing meta-level regularizers, but it should be compared with a few of them, at minimum. If the proposed learnable early stopping method cannot outperform them, why do we have to use this method? This concern is very significant in my view, and I cannot give acceptance before it is resolved. [Minor] It would be interesting to see if the proposed method is effective in solving the few-task meta-learning problem, where we are given only a few meta-training tasks such that it is easy to meta-overfit. (See https://arxiv.org/abs/2106.02695)
ICLR
Title Early-Stopping for Meta-Learning: Estimating Generalization from the Activation Dynamics Abstract Early-stopping, a fundamental element of machine learning practice, aims to halt the training of a model when it reaches optimal generalization to unseen examples, right before the overfitting regime on the training data. Meta-Learning algorithms for few-shot learning aim to train neural networks capable of adapting to novel tasks using only a few labelled examples, in order to achieve good generalization. However, current early-stopping practices in meta-learning are problematic since there may be an arbitrarily large distributional shift between the meta-validation set coming from the training data, and the meta-test set. This is even more critical in few-shot transfer learning where the meta-test set comes from a different target dataset. To this end, we empirically show that as meta-training progresses, a model’s generalization behaviour on a target distribution of novel tasks can be estimated by analysing the dynamics of its neural activations. We propose a method for estimating the optimal early-stopping time from the neural activation dynamics of just a few unlabelled support examples from the target distribution, and we demonstrate its performance with various meta-learning algorithms, few-shot datasets and transfer regimes. 1 INTRODUCTION Deep Learning research has been successful at producing algorithms and models that, when optimized on a distribution of training examples, generalize well to previously unseen examples drawn from that same distribution. Meta-Learning is, in a way, a natural extension of this aim, where the model has to generalize to not only new data points, but entirely new tasks. Important practical progress has been made in this direction over the past few years. Yet the phenomena underlying the transition of a neural network’s generalization to novel tasks, from the underfitting to the overfitting regime, with optimal generalization happening in between, remain poorly understood. Early-stopping, a fundamental element of machine learning practice, maximizes generalization by aiming to halt the training at the frontier between those two regimes, when generalization is optimal. It is computed on a validation set, made of held out examples from the training data, which serves as a proxy for the test data. As a regularizer, “Early-stopping should almost be used universally. [...] It is probably the most commonly used form of regularization in deep learning. [...] a very unobtrusive form of regularization, in that it requires almost no change in the underlying training procedure” (Goodfellow et al., 2016). However, in meta-learning, implementing early-stopping is problematic since there may be an arbitrarily large distributional shift between the meta-validation tasks (drawn from the training data) and the meta-test tasks. Moreover, meta-learning typically involves learning a new task from very few labelled examples, too few to allow constituting a validation set from it. In this work, we study the relation between generalization in Meta-Learning and neural activation dynamics: given a neural network and a set of input examples, the network’s responses measured at all of its hidden layers are what we define as the neural activations, and the evolution of those responses during the learning time (meta-training) is what we define as the neural activation dynamics. The main contributions of our work can be summarized as follows: 1.
We empirically show that in Meta-Learning, a simple function of the neural activation dynamics, for just a few unlabelled target examples, can reveal the variation of generalization to a distribution of novel target tasks (Sec.2.2), and how this function can be learned (Sec.3). 2. We propose a novel method for early-stopping in Meta-Learning, applied in many settings of Few-Shot Learning and Few-Shot Transfer Learning (Sec.5). 2 META-LEARNING AND FEW-SHOT CLASSIFICATION Meta-Learning algorithms generally aim to train a model f(x; θ) on a set of source problems, often presented as a distribution over tasks p(Ttrain), in such a way that the model is capable of generalizing to new, previously unseen tasks from a target distribution p(Ttarget). When applied to classification, meta-learning has often been formulated in the past by defining a task T that involves the m-way classification of input examples x among m distinct classes. The tasks from p(Ttrain) and p(Ttarget) are made of classes drawn from two disjoint sets Ctrain and Ctarget. A novel task thus involves new classes not seen during training. In few-shot learning, the inputs x of the training and target tasks come from the same input distribution p(x) (e.g., an image dataset) but conditioned on their respective classes, i.e. p(xtrain) = p(x|y ∈ Ctrain) and p(xtarget) = p(x|y ∈ Ctarget). The few-shot aspect means that for a given novel task Ttarget, only a very few labelled examples are available, typically k examples per class, and the model uses this support set of examples S = {(x, y)}_{1..k} to adapt its parameters θ to the task; its accuracy is then evaluated on new query examples from Ttarget. The meta-learning generalization Acctarget, for a model f(x; θt) at time t (after t training iterations) to a distribution p(Ttarget), is thus the query accuracy averaged over multiple target tasks: $\mathrm{Acc}_{\mathrm{target}} \doteq \mathbb{E}_{\mathcal{T}_i \sim p(\mathcal{T}_{\mathrm{target}})}\big[\, \mathbb{E}_{(\mathbf{x}, y) \sim \mathcal{T}_i \setminus \mathcal{S}_i}\big[\, \mathbf{1}\{\arg\max(f(\mathbf{x}; \theta^i_t)) = y\} \,\big] \big]$ (1), where for each new task Ti the adapted solution θit is often obtained by performing T steps of gradient descent (full-batch) on the cross-entropy loss L(f, Si) with respect to θt. In few-shot transfer learning, not only are the class sets Ctrain and Ctarget disjoint, but the marginal p(xtarget) can be arbitrarily different from p(xtrain) (e.g. from a different image dataset). 2.1 EARLY-STOPPING BASED ON VALIDATION SET PERFORMANCE CAN LEAD TO SUB-OPTIMAL GENERALIZATION IN META-LEARNING In a standard supervised learning setup, a subset of examples is held out from the training data to constitute a validation set. Since the validation accuracy is a good proxy for the test accuracy, early-stopping is performed by halting training when the validation accuracy reaches its maximum. In Meta-Learning for few-shot classification, the validation set is made of held-out classes from the training data to constitute the validation task distribution p(Tvalid), and early-stopping happens at $t^*_{\mathrm{valid}} = \arg\max_t \mathrm{Acc}_{\mathrm{valid}}(t)$. But this can lead to sub-optimal generalization (see Fig.2) because of the potential distributional shift between p(Ttarget) and p(Tvalid), especially in few-shot transfer learning where it can be arbitrarily large. Estimating the out-of-distribution generalization Acctarget in Meta-Learning thus requires some minimal amount of information about p(Ttarget). However, the few-shot paradigm severely restricts the availability of data from p(Ttarget).
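To make the evaluation protocol behind Eq. (1) concrete, here is a minimal sketch (our own illustration, not the authors' code) of estimating Acctarget by Monte-Carlo averaging of query accuracy over sampled tasks; the nearest-centroid adaptation and the synthetic task generator are hypothetical stand-ins for the actual meta-learner and the real target distribution.

```python
import numpy as np

def adapt_nearest_centroid(support_x, support_y, n_way):
    # Hypothetical adaptation step: one centroid per class, computed from the support set.
    return np.stack([support_x[support_y == c].mean(axis=0) for c in range(n_way)])

def query_accuracy(centroids, query_x, query_y):
    # Classify each query point as its nearest centroid (Euclidean distance).
    dists = ((query_x[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return float((dists.argmin(axis=1) == query_y).mean())

def estimate_acc_target(sample_task, n_tasks=100, n_way=5):
    # Monte-Carlo estimate of Eq. (1): mean query accuracy over sampled target tasks.
    accs = []
    for _ in range(n_tasks):
        sx, sy, qx, qy = sample_task()
        accs.append(query_accuracy(adapt_nearest_centroid(sx, sy, n_way), qx, qy))
    return float(np.mean(accs))

def toy_task(n_way=5, k_shot=1, n_query=15, dim=16, rng=np.random.default_rng(0)):
    # Synthetic stand-in for a target task: one Gaussian blob per class.
    means = rng.normal(size=(n_way, dim))
    sx = np.concatenate([means[c] + 0.3 * rng.normal(size=(k_shot, dim)) for c in range(n_way)])
    qx = np.concatenate([means[c] + 0.3 * rng.normal(size=(n_query, dim)) for c in range(n_way)])
    return sx, np.repeat(np.arange(n_way), k_shot), qx, np.repeat(np.arange(n_way), n_query)

print(estimate_acc_target(toy_task))
```

In the paper, the adaptation step would instead be T steps of gradient descent on the support loss, and tasks would be drawn from a real target dataset.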
The support examples from target tasks are accessible, but the model doesn't control how many new tasks will actually be presented; there could be several thousand or very few. However, if there is a need to early-stop and generalize to some target task distribution p(Ttarget), then the model will need to solve, at the very least, a single task from p(Ttarget), and thus has access to at least a single support set S. We thus propose to only use a few examples, typically the support set of a single new task (e.g. 5 images). This also implies that any algorithm estimating the optimal early-stopping time t∗ should have a very low sample-wise (and task-wise) variance for its estimate of t∗. 2.2 CAN NEURAL ACTIVATION DYNAMICS FOR A FEW TARGET INPUTS ALLOW US TO MAKE INFERENCES ABOUT GENERALIZATION? In this work we search for an observable property of deep neural networks that can help us make inferences about meta-learning generalization to a given target problem as training time t progresses. We thus hypothesize the existence of a function ψ of f(x; θt) and p(Ttarget) such that ψ(f, p(Ttarget), t) ∝ Acctarget(t), and set out to find ψ. More specifically, we want to estimate $t^* = \arg\max_t \mathrm{Acc}_{\mathrm{target}}(t)$ using only a few target examples (a single support set) when approximating ψ. To support a general statement on generalization in Meta-Learning and the nature of ψ, we conducted experiments across a wide range of meta-learning settings. We used different meta-learning algorithms, three of the most pivotal ones in the field: MAML (Finn et al., 2017), Prototypical Networks (Snell et al., 2017), and Matching Networks (Vinyals et al., 2016). We considered both the few-shot learning and few-shot transfer learning regimes, with 1-shot and 5-shot experiments, and various few-shot datasets for p(Ttrain) and p(Ttarget), such as MiniImagenet and Omniglot, but also many others included in Meta-Dataset (Triantafillou et al., 2020). We also used different architectures: the standard 4-layer CNN proposed by (Vinyals et al., 2016), as well as a ResNet as used in (Triantafillou et al., 2020). For full experimental details, refer to Appendix A. Here we present the experimental results that progressively suggest that the variation of generalization can be efficiently estimated from simple metrics on the neural activation dynamics: Observation 1: For a deep neural network, the variation of target generalization, as a function of training time, frequently correlates with simple statistics characterizing how its feature extractor responds to the target input distribution: In many meta-learning settings we observed that Acctarget is proportional to relatively simple metrics (denoted as ψ1). One such metric is the expected inner product between representations: $\psi_1(\phi(X)) \doteq \mathbb{E}_{x_i, x_j \sim p(x)}\big[\phi(x_i)^{\top}\phi(x_j)\big]$ (2), where ψ1 is measured at the output of the feature extractor ϕ, where f(x) = g(ϕ(x)), and captures both the similarity among representation vectors and their norm. Moreover, we measure ψ1 on the representations of the target inputs Xtarget, before adapting the model to new tasks. This relation seems approximately independent of the target class identities Ytarget, and predominantly depends on how the feature extractor represents the marginal distribution over the input Xtarget of the target problem, i.e. ψ1(ϕ(Xtarget), t) ∝ Acctarget(t). Example in Fig.4, complete results in App.B.2.1. Figure 4: Average target task accuracy as a function of training iteration: Acctarget(t).
Observation 1: The variation of generalization (Acctarget), for a deep neural network, frequently correlates with simple statistics characterizing how its feature extractor ϕ responds to the target input distribution p(xtarget). For example, here we show ψ1, which is simply the expected inner product between individual representation vectors, which follows the same trend as Acctarget(t) and peaks roughly at the same time. Here ψ1 is vertically rescaled to match the range of the target accuracy. See App.B.2.1 for full experiments across multiple settings. Observation 2: A simple statistic on the neural activations, if computed at the right layer of a network, can often strongly correlate with generalization, but this layer may change depending on the setting. In a deep neural network a feature extractor is composed of L hidden layers: $\phi(x) = (\phi_L \circ \phi_{L-1} \circ \dots \circ \phi_1)(x)$. In many settings, ψ1 measured at the last layer ϕL isn't proportional to Acctarget, but the relation instead occurs at a lower layer ϕl. Thus, rather than just examining the last layer representation dynamics, we often need to consider the neural activations of the whole feature extractor (see Fig. 3 and Eq. 3). More precisely, we shall consider the evolution throughout time, or the neural activation dynamics, of a network, i.e.: ψ(Φ(Xtarget, t)) ∝ Acctarget(t), or expressed in the form of Eq. 4. See an example at Fig.5, or App.B.2.2 for full experiments across multiple settings. $\Phi(X) \doteq \{\phi_l(X) \mid l \in [1..L]\}$ (3); $\Phi(X, t) \doteq \Phi(X \mid \theta_t)$ (4). Observation 3: Simple statistics of the activations correlate with generalization, but they may change. One needs to find the right statistic depending on the setting. In our experiments we observe that for many settings Acctarget(t) doesn't consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l. From this we conjectured that perhaps ψ1 is a special case in a more general function space Ψ, a hypothesis space or set of functions that predict generalization, or more formally: $\Psi \doteq \{\psi \mid \psi(\Phi(X_{\mathrm{target}}, t)) \propto \mathrm{Acc}_{\mathrm{target}}(t)\}$ where $|\Psi| > 1$. From this perspective we ultimately care about finding the function ψ in Ψ that, given the meta-learning setting involved, minimizes the true objective d defined below: $d = \max_t \mathrm{Acc}_{\mathrm{target}}(t) - \mathrm{Acc}_{\mathrm{target}}\big(\arg\max_t \psi(\Phi(X_{\mathrm{target}}, t))\big)$ (5). The natural question that follows is, what may be the characteristics of such a function space Ψ? We address this by formulating a few inductive biases and assumptions which then inform our subsequent experiments. We first note that the complexity of Ψ must be large enough so that, in most meta-learning settings in the few-shot regime, Ψ contains a good solution function ψ∗ such that d is low. The complexity shouldn't be too large either, since to find ψ∗ we will optimize an indirect empirical objective d̂ (Sec.3). This is especially important in few-shot transfer learning. Furthermore, since Φ(Xtarget) itself has a probability distribution, our hypothesis space Ψ should be a set of functions ψ that are sample estimators of some population statistics of the distribution of Φ(Xtarget). However, since we only have access to very few samples, those statistics should be relatively simple so as to keep down the standard error of their estimators. We propose to use descriptive statistics based on moments, and limit them to the second order (higher-order moments are harder to estimate accurately).
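As a concrete illustration of the expected inner product of Eq. (2) and of the oracle gap d of Eq. (5), the following sketch (our own, on synthetic curves rather than real training runs) shows how both quantities would be computed; d is a diagnostic only, since it requires the full target-accuracy curve.

```python
import numpy as np

def psi_1(feats):
    # Eq. (2): expected inner product over pairs of representation vectors phi(x_i)^T phi(x_j).
    # feats: representations of a few unlabelled target inputs, shape (n, d). For simplicity
    # this small-sample estimate includes the i = j pairs.
    return float((feats @ feats.T).mean())

def oracle_gap(acc_target, psi_curve):
    # Eq. (5): accuracy lost by stopping at the peak of psi instead of at the true peak of
    # Acc_target(t). It needs the whole accuracy curve, so it is a diagnostic, not the estimator.
    return float(acc_target.max() - acc_target[int(np.argmax(psi_curve))])

# Synthetic illustration: a psi curve that peaks near the accuracy peak yields a small gap d.
t = np.arange(100)
acc_target = 0.6 * np.exp(-((t - 40) / 30.0) ** 2)
psi_curve = np.exp(-((t - 43) / 25.0) ** 2)
print(oracle_gap(acc_target, psi_curve))
print(psi_1(np.maximum(np.random.default_rng(0).normal(size=(5, 32)), 0.0)))
```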
Finally, since we ultimately need to find a one-dimensional curve ψ∗(t) to compare to Acctarget(t), our hypothesis space Ψ should contain scalar-valued functions, which we get by computing moments on norms of the activation vectors. In our experiments we observe that when Acctarget(t) doesn't consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l, it does typically correlate with one of the following alternative metrics: the norm of activations ψ2; the dispersion of activations ψ3; or the feature-wise variance of activations ψ4. We have observed that generalization sometimes actually correlates with the negative (i.e. −ψ) of either of ψ1 to ψ4. $\psi_2 \doteq \mathbb{E}_x\big[\|\phi_l(x)\|_2^2\big]$ (6); $\psi_3 \doteq \mathbb{E}_{x_i, x_j}\big[\|\phi_l(x_i) - \phi_l(x_j)\|_2^2\big]$ (7); $\psi_4 \doteq \mathbb{E}_{x_i}\big[\mathrm{Var}_k(\phi_l(x_i)_k)\big]$ (8). All the metrics ψ1 to ψ4 and their negatives can actually be expressed by a linear combination of the following moments (assuming ReLU activation functions): $m_1 = \frac{1}{n}\sum_{i=1}^{n} \|\phi_l(x_i)\|_1^2, \qquad m_2 = \frac{1}{n}\sum_{i=1}^{n} \|\phi_l(x_i)\|_2^2, \qquad m_3 = \frac{1}{n^2}\sum_{i=1}^{n}\sum_{j=1}^{n} \|\phi_l(x_i) - \phi_l(x_j)\|_2^2$ (9), such that those moments define the function space $\Psi = \{\psi(\phi_l(X); \mathbf{w}) \mid \mathbf{w} \in \mathbb{R}^3,\ l \in [1..L]\}$ where $\psi(\phi_l(X); \mathbf{w}) = w_1 m_1 + w_2 m_2 + w_3 m_3$ and $\mathbf{w} = [w_1, w_2, w_3] \in \mathbb{R}^3$. This parametric function space Ψ, while being relatively simple, can express a variety of properties of activations, such as their norm, dispersion, feature-wise variance, inner product, positively or negatively, or even a combination of properties. In Tab.3 of App. B.3 we have experimentally verified that Ψ has enough complexity to contain a good solution function ψ∗, for many meta-learning settings, both in few-shot learning and few-shot transfer learning. Observation 4: The variation of generalization can be estimated by using just a few target input examples: Given a function ψ(ϕl(Xtarget)) which correlates with generalization Acctarget(t), when ψ is measured on the activation dynamics for just a single unlabelled support set S, the estimated early-stopping time $t^*_\psi = \arg\max_t \psi(t)$ typically shows very low variance with respect to which task is used for the estimation (Fig. 7). We conjecture that this might be due to the lack of dependency of ψ on Ytarget, since ψ captures a more general property of the activations for p(xtarget). This makes early-stopping from such a function ψ practical. 3 INFERRING WHICH FUNCTION OF THE NEURAL ACTIVATION DYNAMICS CORRELATES TO GENERALIZATION, AND AT WHICH LAYER TO MEASURE IT Our results in Sec.2.2 suggest that in Meta-Learning there exists a function ψ that, when measured on the neural activation dynamics Φ(Xtarget, t), closely relates to the target generalization Acctarget(t). However, since this function is not unique and depends on the meta-learning setting involved (meta-learning algorithm, neural architecture, training and target distributions, etc.), we propose to cast the discovery of ψ as a machine learning problem. See Fig. 8, which schematizes our framework. At this point we know that, given a meta-learning setting, our function space Ψ should contain a good solution ψ∗ such that the true objective d is low. Now we need a way to actually find ψ∗. We can do so by optimizing an indirect, empirical objective d̂, defined below. 3.1 Few-Shot Learning (FSL): Inferring ψ∗ and l∗ from the validation dynamics and accuracy In few-shot learning, novel tasks from p(Ttarget) involve previously unseen classes but the input domain of Xtarget can be assumed to be similar to that of Xtrain, and therefore to that of Xvalid.
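As a concrete reference for the moment statistics of Eq. (9) and the parametric family ψ( · ; w) introduced above, here is a minimal numerical sketch (our own illustration, not the authors' code); the comments note which weight settings recover the hand-picked metrics ψ1 to ψ4 for ReLU features.

```python
import numpy as np

def activation_moments(feats):
    # The three moments of Eq. (9) for activations phi_l(X) of shape (n, d).
    m1 = float((np.abs(feats).sum(axis=1) ** 2).mean())        # mean squared L1 norm
    m2 = float((feats ** 2).sum(axis=1).mean())                # mean squared L2 norm
    diffs = feats[:, None, :] - feats[None, :, :]
    m3 = float((diffs ** 2).sum(axis=-1).mean())               # mean squared pairwise L2 dispersion
    return m1, m2, m3

def psi_w(feats, w):
    # The parametric family psi(phi_l(X); w) = w1*m1 + w2*m2 + w3*m3.
    m1, m2, m3 = activation_moments(feats)
    return w[0] * m1 + w[1] * m2 + w[2] * m3

# Weight settings recovering the hand-picked metrics (up to the inclusion of i = j pairs):
#   psi_2 (squared L2 norm)        -> w = [0, 1, 0]
#   psi_3 (squared L2 dispersion)  -> w = [0, 0, 1]
#   psi_1 (mean inner product)     -> w = [0, 1, -0.5], since E[x_i^T x_j] = m2 - m3 / 2
#   psi_4 (feature-wise variance)  -> w = [-1/d**2, 1/d, 0] for non-negative (ReLU) features of width d
feats = np.maximum(np.random.default_rng(0).normal(size=(20, 64)), 0.0)   # fake ReLU activations
print(psi_w(feats, [0.0, 1.0, -0.5]))
```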
We thus use the dynamics Φ(Xvalid, t) and the validation accuracy Accvalid (as a proxy for Acctarget) in order to learn the optimal function ψ∗ and the layer l∗ where it should be measured, and we do so by minimizing the empirical objective d̂FSL (Eq.10). We then compute our actual early-stopping time estimate t̂∗FSL when ψ∗(ϕl∗(Xtarget, t)), measured on the few support input examples of a single target task, reaches its peak (Eq.11). $\hat{d}_{\mathrm{FSL}} = \max_t \mathrm{Acc}_{\mathrm{valid}}(t) - \mathrm{Acc}_{\mathrm{valid}}\big(\arg\max_t \psi(\phi_l(X_{\mathrm{valid}}, t); \mathbf{w})\big)$ (10); $\hat{t}^*_{\mathrm{FSL}} = \arg\max_t \psi(\phi_{l^*}(X_{\mathrm{target}}, t); \mathbf{w}^*)$ where $\mathbf{w}^*, l^* = \arg\min_{\mathbf{w}, l} \hat{d}_{\mathrm{FSL}}$ (11). 3.2 Few-shot transfer learning (FSTL): Meta-overfitting often happens when the target dynamics diverge from those of the source input domain When the target problem is from an entirely new dataset, we can't use Accvalid as a proxy for Acctarget, and we need another objective function to learn ψ∗. However, we can learn ψ∗ by analyzing Φ(Xtarget, t), the neural activation dynamics of the target domain, and comparing them with Φ(Xvalid, t). Assume that for a given target problem, optimal generalization doesn't happen at the same time as for the source domain, i.e., t∗ ≠ t∗valid, and more precisely, assume t∗ < t∗valid. Typically, a generalization curve is increasing between t0 and its maximum, and decreasing after the maximum. This implies that the curves of Acctarget(t) and Accvalid(t) are positively correlated between t0 and t∗, as they are both increasing, whereas they are negatively correlated between t∗ and t∗valid, since Acctarget(t) is decreasing while Accvalid(t) is still increasing. In a sense, the two generalization behaviors "diverge" at t∗, since at that moment their correlation goes from positive to negative (see Fig.9a). Since here we assume the neural activation dynamics can characterize the generalization behavior of a model, we conjecture that Φ(Xtarget, t) and Φ(Xvalid, t) might also "diverge" at t∗, under some function ψ(ϕl∗(X, t);w∗), such that the sample Pearson correlation r of ψ(Φ(Xtarget, t),w∗) and ψ(Φ(Xvalid, t),w∗) also goes from positive to negative near t∗ (see Fig.9b). Our experiments indeed suggest that functions ψ exhibiting more divergence are more likely to capture generalization to the target problem. This analysis can be found in App.***. We thus search for the weights w∗ and hidden layer l∗ so as to observe the most negative correlation between ψ(ϕl(Xtarget, t);w) and ψ(ϕl(Xvalid, t);w) in the time interval [t0, t∗valid] (Eq.12). We then estimate t∗ by finding the time t̂∗FSTL when ψ∗target(t) and ψ∗valid(t) diverge (Eq.13). See Fig.10a,10b for a demonstration. $\hat{d}_{\mathrm{FSTL}} = r(\psi_{\mathrm{target}}, \psi_{\mathrm{valid}}) = \dfrac{\sum_t (\psi_{\mathrm{target}}(t) - \bar{\psi}_{\mathrm{target}})(\psi_{\mathrm{valid}}(t) - \bar{\psi}_{\mathrm{valid}})}{\sqrt{\sum_t (\psi_{\mathrm{target}}(t) - \bar{\psi}_{\mathrm{target}})^2}\ \sqrt{\sum_t (\psi_{\mathrm{valid}}(t) - \bar{\psi}_{\mathrm{valid}})^2}}$ (12); $\hat{t}^*_{\mathrm{FSTL}} = \arg\max_t \big(\, t \times r(\psi^*_{\mathrm{target}}, \psi^*_{\mathrm{valid}}, [t_0, t < t^*_{\mathrm{valid}}]) \,\big)$ (13), with shorthand notations $\psi_{\mathrm{target}}(t) \doteq \psi(\phi_l(X_{\mathrm{target}}, t); \mathbf{w})$ and $\psi_{\mathrm{valid}}(t) \doteq \psi(\phi_l(X_{\mathrm{valid}}, t); \mathbf{w})$, where $\bar{\psi}$ denotes the average over t, and $\psi^* \doteq \psi(\phi_{l^*}(\cdot); \mathbf{w}^*)$. Here again we minimize an empirical objective, and $\mathbf{w}^*, l^* = \arg\min_{\mathbf{w}, l} \hat{d}_{\mathrm{FSTL}}$. 4 RELATED WORK In recent years, some works have started to analyze theoretical aspects of gradient-based meta-learning. (Finn et al., 2019) examine the online Meta-Learning setting, where in online learning the agent faces a sequence of tasks, and they provide a theoretical upper bound for the regret of MAML.
(Denevi et al., 2019) study meta-learning through the perspective of biased regularization, where the model adapts to new tasks by starting from a biased parameter vector, which we refer to in this work as the meta-training solution. For simple tasks such as linear regression and binary classification, they prove the advantage of starting from the meta-training solution, when learning new tasks via SGD. They use an assumption on the task similarity where the weight vectors parameterizing the tasks are assumed to be close to each other. Working in the framework of Online Convex Optimization where the model learns from a stream of tasks, (Khodak et al., 2019) make an assumption that the optimal solution for each task lies in a small subset of the parameter space and use this assumption to design an algorithm such that the "Task-averaged-regret (TAR)" scales with the diameter of this small subset of the parameter space, when using Reptile (Nichol et al., 2018), a first-order meta-learning algorithm. Bearing a stronger relation to our approach, (Guiroy et al., 2019) empirically study the objective landscapes of gradient-based meta-learning, with a focus on few-shot classification. They notably observed that average generalization to new tasks appears correlated with the average inner product between their gradient vectors. In other words, as gradients appear more similar in inner product, the model will, on average, better generalize to new tasks, after following a step of gradient descent. More recently, a few works have studied the properties of the feature extractor ϕ in the context of Meta-Learning. Notably, the authors of (Raghu et al., 2019) showed empirically that when neural networks adapt to novel tasks, in the few-shot setting with MAML and MiniImagenet, the feature extractor network is approximately invariant, while the final linear classifier undergoes significant functional changes. They then performed experiments where ϕ is frozen at meta-test time, while only the classifier g is fine-tuned, and observed very similar generalization performance to the regular fine-tuning procedure. Intuitively, these results suggest that the variation of generalization along meta-training time t might be predominantly driven by some evolving but unknown property of the feature extractor. The authors of (Goldblum et al., 2020) observed that generalization in few-shot learning was related to how tightly embeddings from new tasks were clustered around their respective classes. However, the authors of (Dhillon et al., 2019) observed that the embeddings at the output of ϕL were poorly clustered around their classes, but that clustering was important when measuring the logit outputs of g. This is similar to what the authors of (Frosst et al., 2019) observed when dealing with new Out-of-Distribution examples. This suggests that if generalization is related to a property of the feature extractor, this property might be class agnostic. This is also something that we observed in our very early experiments (the expected inner product between representation vectors strongly correlated with generalization, irrespective of taking the class identities into account). But in our work we observed that this property might not only depend on the output of the feature extractor. Earlier works demonstrated that in transfer learning, intermediate layers of ϕ might be critical in the ability of the model to transfer knowledge (Yosinski et al., 2014).
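Before turning to the experiments, the selection procedures of Eqs. (10)-(13) can be summarized in a short sketch; this is our own reading of the equations and not the authors' code, and in particular Eq. (13) is implemented literally as the t that maximizes t × r on the prefix [t0, t], which is one plausible interpretation of the formula.

```python
import numpy as np

def pearson(a, b):
    # Sample Pearson correlation of two curves over time, as in Eq. (12).
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / (np.sqrt((a ** 2).sum()) * np.sqrt((b ** 2).sum()) + 1e-12))

def gap_fsl(acc_valid, psi_valid):
    # Eq. (10): validation accuracy lost by stopping at the peak of psi instead of at t*_valid.
    return float(acc_valid.max() - acc_valid[int(np.argmax(psi_valid))])

def select_fsl(acc_valid, psi_valid_by_candidate):
    # Eq. (11), selection part: psi_valid_by_candidate maps a candidate (w, l) to its curve
    # over time measured on the validation inputs; return the candidate minimizing the gap.
    return min(psi_valid_by_candidate, key=lambda k: gap_fsl(acc_valid, psi_valid_by_candidate[k]))

def select_fstl(psi_target_by_candidate, psi_valid_by_candidate, t_valid_star):
    # Eq. (12): pick the candidate whose target and validation dynamics are most negatively
    # correlated ("diverge" the most) on [t0, t*_valid].
    return min(psi_valid_by_candidate,
               key=lambda k: pearson(psi_target_by_candidate[k][:t_valid_star],
                                     psi_valid_by_candidate[k][:t_valid_star]))

def estimate_t_fstl(psi_target, psi_valid, t_valid_star, t0=2):
    # Eq. (13), taken literally: the t < t*_valid maximizing t * r(psi_target, psi_valid on [t0, t]).
    scores = [t * pearson(psi_target[t0:t + 1], psi_valid[t0:t + 1])
              for t in range(t0 + 1, t_valid_star)]
    return int(np.argmax(scores)) + t0 + 1
```

In practice the curves ψ(t) would be recorded at successive meta-training checkpoints, one curve per candidate pair (w, l), for both the validation and the target support inputs.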
5 EARLY-STOPPING FOR META-LEARNING BY ANALYZING THE NEURAL ACTIVATION DYNAMICS OF A FEW TARGET INPUT EXAMPLES Here we present experimental results on the performance of our early-stopping method. For each experiment, we only use the unlabelled input examples from the support set of a single target task to evaluate the neural activation dynamics. At the beginning of an experiment, we thus randomly sample a task Ti from p(Ttarget) and only keep its set of support input examples. We repeat the experiment for multiple (50) independently and identically distributed support sets from p(Ttarget), and take the average performance. Each such experiment is then repeated for 5 independent training runs. As a baseline for comparison, we use the validation early-stopping approach. Since ψ1 to ψ4 work in practice, we will use them as our function space Ψ, but the method that we develop applies as well to the continuous function space defined above, and we present some experimental results in App.B.5 where we apply our early-stopping method with the continuous function space. We begin by demonstrating our proposed early-stopping method in few-shot transfer learning, across various target datasets, and present the results in Tab.1. We use the standard 4-layer CNN architecture, with MAML, trained on MiniImagenet 5-way 1-shot. When the target dataset is Omniglot, the performance of the validation baseline (51%) is significantly lower than the optimal generalization (76%), presumably because of the distributional shift between MiniImagenet and Omniglot. In such a scenario our method appears to offer a significant advantage over the baseline, since we obtain 75% in target accuracy, quite close to the optimal generalization. In scenarios where the target domain is arguably more similar to that of the source domain, e.g. transfer from MiniImagenet to Imagenet, early-stopping from the validation accuracy yields a performance (35.0%) closer to the optimal generalization (35.6%), and in such a case our method performs only slightly worse (34.8%) than the validation baseline. We observe a similar trend when the model is trained on the Quickdraw dataset: when transferring to Omniglot, the validation baseline leads to sub-optimal generalization, but estimating the target accuracy from the neural activation dynamics allows us to halt the training close to the optimal time. When transferring to Traffic Sign, the baseline yields reasonable performance, and our method is roughly on par with it. From this point, we will focus on settings where there is a significant gap in performance between the validation baseline and optimal generalization, for example the transfer from Birds to Quickdraw present in our illustration of Sec.2.1. Next we present similar experiments with two other meta-learning algorithms: Prototypical Networks and Matching Networks, which are shown in Tab.2. 6 CONCLUSION In this work we have presented empirical evidence that the overfitting point of Meta-Learning for deep neural networks in few-shot classification can often be estimated from simple statistics of neural activations and how they evolve throughout meta-training time. Our results suggest that key properties, or statistics, of how feature extractors respond to the target input distribution can be found which are simple enough to be estimated from just a few unlabelled target input examples. However, the specific function of the activations, and the layer at which to measure them, need to be inferred.
We demonstrate that these functions and layers of interest can be inferred and used to guide early-stopping, leading to a new and effective method for early-stopping which represents a significant departure from the de facto standard practice of using a validation set. In few-shot learning these ingredients can be inferred from how the neural activation dynamics of the validation data relate to the validation accuracy. In few-shot transfer learning, they are inferred by searching for the function (in a given function space) and the layer at which the activation dynamics of the target input domain "diverge" the most from those of the source domain. Finally, we have demonstrated how this approach can be used to optimize for target generalization in practice to perform early-stopping and thus improve overall generalization to distributions of novel few-shot classification tasks, while only using unlabelled support examples from a single target task. A EXPERIMENTAL DETAILS CNN: We use the architecture proposed by Vinyals et al. (2016) which is used by Finn et al. (2017), consisting of 4 modules stacked on each other, each composed of 64 filters of 3 × 3 convolution, followed by a batch normalization layer, a ReLU activation layer, and a 2 × 2 max-pooling layer. With Omniglot, strided convolution is used instead of max-pooling, and images are downsampled to 28 × 28. With MiniImagenet, we used fewer filters (48) to reduce overfitting, while MAML used 32. As a loss function to minimize, we use cross-entropy between the predicted classes and the target classes. ResNet-18: We use the same implementation of the Residual Network as in (Triantafillou et al., 2020). For most of the hyperparameters, we follow the setup of (Triantafillou et al., 2020), but we set the main few-shot learning hyperparameters so as to follow the original MAML setting more closely, and in each setting, we consider a single target dataset at a time, with a fixed number of shots and classification ways. We use 5 steps of gradient descent for the task adaptations, and 15 shots of query examples to evaluate the test accuracy of tasks. We don't use any learning rate decay during meta-training, and a step-size of 0.01 when fine-tuning the models to new tasks. Datasets: We use the MiniImagenet and Omniglot datasets, as well as the many datasets included in the Meta-Dataset benchmark (Triantafillou et al., 2020). B COMPLETE EXPERIMENTAL RESULTS B.1 THE ISSUE OF USING A VALIDATION SET FOR EARLY-STOPPING IN META-LEARNING B.2 THE RELATION BETWEEN THE NEURAL ACTIVATION DYNAMICS AND GENERALIZATION TO NOVEL TASKS B.2.1 RELATION BETWEEN THE REPRESENTATION SPACE OF THE FEATURE EXTRACTOR AND TARGET GENERALIZATION Here we present experimental results to support Observation 1 that we make in Sec. 2.2, showing that the variation of generalization along meta-training time can be captured by a function of the neural activation dynamics that is independent of class labels. B.2.2 NEURAL ACTIVATION DYNAMICS: DIFFERENT LEVELS OF THE FEATURE EXTRACTOR CAN REVEAL THE VARIATION OF GENERALIZATION B.2.3 DIFFERENT FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS CAN REVEAL THE VARIATION OF GENERALIZATION By expanding the experimental setup further, we observed instances where a given metric had a strong correlation with generalization but in a negative sense, i.e. it was actually its argmin that coincided with the optimal early-stopping time t∗. See Fig. 14 for examples of this phenomenon.
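Collecting the per-layer activations Φ(X, t) used throughout these layer-wise analyses can be done with forward hooks; a minimal PyTorch sketch (our own illustration, not the authors' code; the flattening of each layer's output is our choice) is:

```python
import torch
import torch.nn as nn

def layer_activations(feature_extractor, inputs):
    # Collect phi_l(X) for every top-level module of a feature extractor (Eq. 3) with forward
    # hooks; calling this at successive meta-training checkpoints yields the neural activation
    # dynamics Phi(X, t).
    acts, handles = [], []
    for module in feature_extractor.children():
        handles.append(module.register_forward_hook(
            lambda _m, _i, out, store=acts: store.append(out.detach().flatten(1))))
    with torch.no_grad():
        feature_extractor(inputs)
    for h in handles:
        h.remove()
    return acts

# Toy usage with a stand-in extractor (not the paper's 4-layer CNN backbone):
extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
print([a.shape for a in layer_activations(extractor, torch.randn(5, 32))])
```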
We later observed that other statistical estimators can correlate with generalization. [Figure 15: four panels showing Acctarget(t) together with the candidate metrics over training iterations.] Figure 15: Different metrics of the representation space may have strong correlation with generalization, other than the expected inner product of Eq. 2. a) Prototypical, VGG Flower, 5-way 1-shot: out of three metrics which in other cases may be related with generalization (as in b), c), d) and Sec. ??), here only the expected l2 dispersion has a strong relation with generalization. b) Expected l2 norm (Eq. 6); c) Expected square l2 dispersion (Eq. 7), Prototypical Network, VGG Flower; d) Expected feature-wise variance (Eq. 8), Prototypical Network, Omniglot to Quickdraw. These results motivate our approach of considering a family of functions Ψ in which we must find the optimal function ψ∗ given the setting, rather than trying to discover a single universal metric that would correlate with generalization in all scenarios. Even if such a metric exists, it may not be estimated with enough efficiency to satisfy the requirement of using only a single support set to estimate t∗. B.2.4 FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS: TASK-WISE VARIANCE OF THE ESTIMATE t∗ψ Here we present empirical results on the task-wise variance as discussed in Observation 4 of Sec. 2.2. We begin by showing the task-wise variance of few-shot accuracy when evaluated with a single target task and assuming access to the query examples (15 shots). Few-shot accuracy exhibits a high variance as different tasks will peak at very different times, making it unfit to estimate t∗. On the other hand, for the metrics from Sec. 2.2, which are based on low-order statistics (mean and variance), the estimated early-stopping time exhibits drastically lower variance. See Fig. 16 for an example, where we use MAML in few-shot learning (5-way 1-shot) with the Aircraft dataset, and where we use the expected square l2 norm as the metric. As we can see in Fig. 16, measuring the metric on different tasks merely offsets the response curve but has almost no effect on the trend of the curve itself. This also relates to our assumption that the variation of target generalization in Meta-Learning might be linked to a function of the neural activation dynamics that is class agnostic. B.3 CAPACITY OF THE CONTINUOUS FUNCTION SPACE Ψ (DEFINED BY THE THREE MOMENTS OF EQ.9) TO CONTAIN GOOD SOLUTIONS ψ∗ The moments m1, m2 and m3 of Eq.9 define the parametric function space $\Psi = \{\psi(\phi_l(X); \mathbf{w}) \mid \mathbf{w} \in \mathbb{R}^3,\ l \in [1..L]\}$ where $\psi(\phi_l(X); \mathbf{w}) = w_1 m_1 + w_2 m_2 + w_3 m_3$ and $\mathbf{w} = [w_1, w_2, w_3] \in \mathbb{R}^3$. Here we have experimentally observed that Ψ has enough complexity to contain a good solution function ψ∗, for different meta-learning settings, both in few-shot learning and few-shot transfer learning, as shown in Tab.3.
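A sketch of the capacity check behind Tab. 3 (our own illustration, under the stated assumptions): since w has only three entries, an exhaustive coarse grid over w and over the layers is feasible, and the smallest achievable oracle gap d of Eq. (5) measures whether Ψ contains a good ψ∗ for a given setting.

```python
import itertools
import numpy as np

def moments_m123(feats):
    # m1, m2, m3 of Eq. (9) for activations phi_l(X) of shape (n, d); repeated here so the
    # snippet is self-contained.
    m1 = float((np.abs(feats).sum(axis=1) ** 2).mean())
    m2 = float((feats ** 2).sum(axis=1).mean())
    diffs = feats[:, None, :] - feats[None, :, :]
    return np.array([m1, m2, float((diffs ** 2).sum(axis=-1).mean())])

def best_gap_in_family(acc_target, feats_per_layer, grid=np.linspace(-1.0, 1.0, 9)):
    # Capacity check: search a coarse grid over w (feasible, since w has three entries) and
    # over the layers, and return the smallest oracle gap d of Eq. (5) achievable within Psi.
    best = np.inf
    for feats_t in feats_per_layer:                          # one sequence of (n, d) arrays per layer
        m = np.stack([moments_m123(f) for f in feats_t])     # shape (T, 3)
        for w in itertools.product(grid, repeat=3):
            psi = m @ np.asarray(w)
            best = min(best, float(acc_target.max() - acc_target[int(np.argmax(psi))]))
    return best

# Synthetic illustration with random ReLU-like activations and a fake accuracy curve.
rng = np.random.default_rng(0)
acc = 0.3 * (1.0 + np.linspace(0, 1, 20) - np.linspace(0, 1, 20) ** 2)   # peaks mid-training
layers = [[np.maximum(rng.normal(size=(8, 16)), 0.0) for _ in range(20)] for _ in range(3)]
print(best_gap_in_family(acc, layers))
```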
B.4 LEARNING ψ∗ IN FEW-SHOT TRANSFER LEARNING B.4.1 FINDING w∗ AND l∗ WHERE ψtarget(t) AND ψvalid(t) "DIVERGE" THE MOST As we added more experimental settings for Few-Shot Transfer Learning, we observed instances where, for a given metric measured at the representation space ϕL, there was no strong link with generalization, but when measuring the metric at hidden layers lower than ϕL, we observed a strong correlation with generalization. We illustrate this in Fig. 17. These results motivated our approach of considering the whole neural activation dynamics (all layers), rather than the final layer of the feature extractor alone, in our search for functions linked to generalization. Then in Fig.18 we conducted a more systematic analysis, concerned with identifying the right functions ψ∗, with results suggesting that functions ψ showing stronger "divergence" (negative correlation) between the target and validation dynamics will more likely lead to a higher target accuracy if we stop at their peak time. (a) Transfer: Omniglot to MiniImagenet, MAML. The critical depth, i.e. the one where measuring the expected inner product (here marked as RIP) predicts generalization on the target domain, is at layer 1, even though the critical depth for the source domain was at layer 4. (b) Transfer: MiniImagenet to Omniglot, MAML. The critical depth for the target domain is at layer 4, the same as for the source domain. Figure 17: Generalization can correlate with a metric at different levels of neural activations. Here the critical layer l∗ (squared in red) is identified by searching for the highest divergence between the validation and target neural activation dynamics. Table 4: Correlation between D(ψtarget, ψvalid) and generalization, for different few-shot learning settings: MAML (Quickdraw → Omniglot): 0.82; Prototypical Network (Omniglot → Quickdraw): 0.75; Matching Network (MiniImagenet → Omniglot): 0.81. The correlation is computed as in the analysis of Fig. 18c. The results show that functions exhibiting high divergence between the validation and target neural activation dynamics are likely to lead to good generalization performance on the target distribution. B.5 EVALUATING THE PERFORMANCE OF OUR EARLY-STOPPING METHOD WHEN USING THE CONTINUOUS FUNCTION SPACE Ψ DEFINED BY THE THREE MOMENTS OF EQ.9 Here we present a few experimental results where we apply our early-stopping method in the continuous function space Ψ. Since there are only three weights to tune, namely w1, w2 and w3, we don't suffer from the curse of dimensionality, which is the classic motivation for using gradient-based optimization of neural networks with many parameters. This allows for a search-based optimization of w. [Figure 18: (a) Functions with high divergence between the valid and target dynamics are more likely to achieve higher generalization (corr. coeff. R = 0.422, P-value = 0.0); (b) Average divergence vs. average performance (10 bins); (c) Strong correlation between average divergence and average performance (corr. coeff. R = 0.8255, P-value = 0.0061); (d) Solution function measured on the valid and target examples.] Table 6: Performance of our method - Few-Shot Transfer Learning, Prototypical Network. Table 7: Performance of our method - Few-Shot Transfer Learning, Matching Network. Algorithm: Matching Network; Source dataset: MiniImagenet; Target dataset: Omniglot; Baseline: 73.75%; Our method: 75%.
1. What is the main contribution of the paper regarding few-shot classification? 2. What are the strengths and weaknesses of the proposed techniques for optimal early stopping? 3. How does the reviewer assess the clarity and organization of the paper? 4. What are the concerns regarding the assumptions made in the paper? 5. Can the proposed methods be applied to standard machine learning tasks? 6. How representative are the experiments and metrics used in the paper? 7. Are there any minor comments or suggestions for improving the paper?
Summary Of The Paper Review
Summary Of The Paper The submission investigates a few-shot classification setting in which access to a few unlabelled meta-test set examples is allowed; the submission proposes to use these examples to determine the optimal early stopping time for meta-test set performance, which depends on the relationship between the meta-training and meta-test tasks. Several methods to make use of the examples are proposed, including some are simple functions of the statistics of the activations of the main model over time, and some that learn a parameterized function of the representations of the test set examples. Review Main Review Strengths Early stopping may be particularly relevant in the few-shot transfer learning setting, in which test tasks differ greatly from training tasks. The details of the submission are easy to understand and the paper is for the most part clear, although some sections are unnecessarily dense. Weaknesses It is not clear that the proposed techniques are particularly relevant for meta-learning. The proposed techniques for optimal early stopping could be straightforwardly employed in standard machine learning setups (in which there is no grouping of data points into tasks). Related to the above point, the submission misses numerous references that propose techniques for automating early stopping in the context of standard machine learning tasks, including some that do not require access to test set examples: Mahsereci, Maren, Lukas Balles, Christoph Lassner, and Philipp Hennig. "Early stopping without a validation set." arXiv preprint arXiv:1703.09580 (2017). Liu, Yinyin, Janusz A. Starzyk, and Zhen Zhu. "Optimized approximation algorithm in neural networks without overfitting." IEEE transactions on neural networks 19, no. 6 (2008): 983-995. Zhang, Xiao, Dongrui Wu, Haoyi Xiong, and Bo Dai. "Optimization Variance: Exploring Generalization Properties of DNNs." arXiv preprint arXiv:2106.01714 (2021). The submission could be greatly improved in clarity by separating the prescriptions / normative claims ("X should" / "we would like Y") from what is objective reality. In its current form, the text often confuses what is observed or known, with what is proposed or assumed for the narrative of the paper. As two examples: The motivation of changing the problem setting from the standard few-shot transfer learning setting to the setting of having access to some target task examples is justified on the basis of "a need to early-stop", which is circuitous. There should be an independent motivation provided for why this modified setting is a reasonable assumption. In Section 3.2 it is written: "Assume that for a given target problem, optimal generalization doesn’t happen at the same time as for the source domain, i.e., t ∗ ≠ t valid ∗ , and more precisely, assume t ∗ = t valid ∗ . Typically, a generalization curve is generally increasing between t 0 and its maximum, whereas it is generally decreasing after the maximum." However, this unimodal-peak phenomenon is not proven or thoroughly demonstrated anywhere in the paper. The broad reliability of the metrics is not clear. Figure 6 and Tables 1 & 2 depict a subset of the metrics in a subset of settings. However, it is unclear how representative this selection is of all possible few-shot transfer settings (datasets × meta-learning methods). Is there any metric that is useful in the majority of cases or on average? Relatedly, many metrics are evaluated directly on the test set, which does not give confidence that the metrics are robust. 
To avoid this aspect of overfitting, the experiments should have held out a second test set on which the best-performing metrics were then evaluated. Minor comments Minor, specific sections: Names for techniques like "deep learning" and "meta-learning" do not need to be capitalized. The first sentence ("Deep Learning research has been successful at producing algorithms and models that, when optimized on a distribution of training examples, generalize well to previously unseen examples drawn from that same distribution") could equally describe machine learning in general. Can you make it specific to deep learning, or write "machine learning?" "Important practical progress has been made in this direction over the past few years." But no citations are provided. I felt that the Goodfellow et al. (2016) quotation in the introduction was unnecessary and could have been paraphrased in the text. Figure 1: The diagram on the right depicts ptrain, pvalid, ptest as disjoint supports (since densities / colored regions are non-overlapping). The assumption in practice is instead that for the true underlying distributions, ptrain = pvalid= ptest . The requirement here of ptest being disjoint in the few-shot transfer case is too strong---there can merely be distribution shift. "This also implies that any algorithm estimating the optimal early-stopping time t∗ should have a very low sample-wise (and task-wise) variance for its estimate of t∗." This is not clear to me. Is this a prescriptive claim, or a description of reality? Figure 2, left: Is this real or imagined data? Are all loss curves in all settings considered monotonic in training iterations? Is Eq. 5 equivalent to a regret formulation? "All the metrics ψ 1 to ψ 4 and their negatives can actually be expressed by a linear combination of the following moments (assuming ReLU activation functions)" Is a derivation provided? Minor, general: Having the figures embedded in the middle of the text was confusing. I recommend placing figures only at the top of pages. In several cases, Appendix sections are not identified ("App.***.") title: The methods are shown on few-shot classification only, so I think a more appropriate title would replace meta-learning with few-shot classification.
ICLR
Title Early-Stopping for Meta-Learning: Estimating Generalization from the Activation Dynamics Abstract Early-stopping, a fundamental element of machine learning practice, aims to halt the training of a model when it reaches optimal generalization to unseen examples, right before the overfitting regime on the training data. Meta-Learning algorithms for few-shot learning aim to train neural networks capable of adapting to novel tasks using only a few labelled examples, in order to achieve good generalization. However, current early-stopping practices in meta-learning are problematic since there may be an arbitrary large distributional shift between the meta-validation set coming from the training data, and the meta-test set. This is even more critical in few-shot transfer learning where the meta-test set comes from a different target dataset. To this end, we empirically show that as meta-training progresses, a model’s generalization behaviour on a target distribution of novel tasks can be estimated by analysing the dynamics of its neural activations. We propose a method for estimating optimal early-stopping time from the neural activation dynamics of just a few unlabelled support examples from the target distribution, and we demonstrate its performance with various meta-learning algorithms, few-shot datasets and transfer regimes. 1 INTRODUCTION Deep Learning research has been successful at producing algorithms and models that, when optimized on a distribution of training examples, generalize well to previously unseen examples drawn from that same distribution. Meta-Learning is in a way, a natural extension of this aim, where the model has to generalize to not only new data points, but entirely new tasks. Important practical progress has been made in this direction over the past few years. Yet it remains sparsely understood what are the underlying phenomena behind the transitioning of a neural network’s generalization to novel tasks, from the underfitting to the overfitting regime, with the optimal generalization happening in between. Early-stopping, a fundamental element of machine learning practice, maximizes generalization by aiming to halt the training at the frontier between those two regimes, when generalization is optimal. It is computed on a validation set, made of held out examples from the training data, which serves as a proxy for the test data. As a regularizer, ”Early-stopping should almost be used universally. [...] It is probably the most commonly used form of regularization in deep learning. [...] a very unobtrusive form of regularization, in that it requires almost no change in the underlying training procedure” (Goodfellow et al., 2016). However in meta-learning, implementing early-stopping is problematic since there may be an arbitrarily large distributional shift between the meta-validation tasks (drawn from the training data) and the meta-test tasks. Moreover, meta-learning typically involves learning a new task from very few labelled examples, too few to allow constituting a validation set from it. In this work, we study the relation between generalization in Meta-Learning and neural activation dynamics : Given a neural network and a set of input examples, the network’s responses measured at all of its hidden-layers are what we define as the neural activations, and the evolution of those responses during the learning time (meta-training) is what we define as the neural activation dynamics. The main contributions of our work can be summarized as follows : 1. 
We empirically show that in Meta-Learning, a simple function of the neural activation dynamics, for just a few unlabelled target examples, can reveal the variation of generalization to a distribution of novel target tasks (Sec.2.2), and how this function can be learned (Sec.3). 2. We propose a novel method for early-stopping in Meta-Learning, applied in many settings of Few-Shot Learning and Few-Shot Transfer Learning (Sec.5). 2 META-LEARNING AND FEW-SHOT CLASSIFICATION Meta-Learning algorithms generally aim to train a model f(x; θ) on a set of source problems, often presented as a distribution over tasks p(Ttrain), in such a way that the model is capable of generalizing to new, previously unseen tasks from a target distribution p(Ttarget). When applied to classification, meta-learning has often been formulated in the past by defining a task T that involves the m-way classification of input examples x among m distinct classes. The tasks from p(Ttrain) and p(Ttarget) are made of classes drawn from two disjoint sets Ctrain and Ctarget. A novel task thus involves new classes not seen during training. In few-shot learning, the inputs x of the training and target tasks come from a same input distribution p(x) (e.g., an image dataset) but conditioned on their respective classes, i.e. p(xtrain) = p(x|y ∈ Ctrain) and p(xtarget) = p(x|y ∈ Ctarget). The few-shot aspect means that for a given novel task Ttarget, only a very few labelled examples are available, typically k examples per class, and the model uses this support set of examples S = {(x,y)}1..k to adapt its parameters θ to the task, then its accuracy is evaluated on new query examples from Ttarget. The meta-learning generalization Acctarget, for a model f(x; θt) at time t (after t training iterations) to a distribution p(Ttarget), is thus the query accuracy averaged over multiple target tasks: Acctarget . = ETi∼p(Ttarget) [ E(x,y)∼Ti\Si [ 1{argmax(f(x; θit)) = y} ]] (1) where for each new task Ti the adapted solution θit is often obtained by performing T steps of gradient descent (full-batch) on the cross-entropy loss L(f,Si) with respect to θt. In few-shot transfer learning, not only are the class sets Ctrain and Ctarget disjoint, but the marginal p(xtarget) can be arbitrarily different from p(xtrain) (e.g. from a different image dataset). 2.1 EARLY-STOPPING BASED ON VALIDATION SET PERFORMANCE CAN LEAD TO SUB OPTIMAL GENERALIZATION IN META-LEARNING In a standard supervised learning setup, a subset of examples is held out from the training data to constitute a validation set. Since the validation accuracy is a good proxy for the test accuracy, early-stopping is performed by halting training when the validation accuracy reaches its maximum. In Meta-Learning for few-shot classification, the validation set is made of held out classes from the training data to constitute the validation task distribution p(Tvalid), and early-stopping happens at t∗valid=argmaxtAccvalid. But this can lead to a sub-optimal generalization (see Fig.2) because of the potential distributional shift between p(Ttarget) and p(Tvalid) especially in few-shot transfer learning where it can be arbitrarily large. Estimating the out-of-distribution generalization Acctarget in Meta-Learning thus requires some minimal amount of information about p(Ttarget). However, the few-shot paradigm severely restricts the availability of data from p(Ttarget). 
The support examples from target tasks are accessible, but the model doesn’t control how many new tasks will actually be presented, there could be several thousands or very few. However, if there is a need to early-stop and generalize to some target task distribution p(Ttarget), then the model will need to solve, at the very least, a single task from p(Ttarget), and thus has access to at least a single support set S. We thus propose to only use a few examples, typically the support set of a single new task (e.g. 5 images). This also implies that any algorithm estimating the optimal early-stopping time t∗ should have a very low sample-wise (and task-wise) variance for its estimate of t∗. 2.2 CAN NEURAL ACTIVATION DYNAMICS FOR A FEW TARGET INPUTS ALLOW US TO MAKE INFERENCES ABOUT GENERALIZATION? . In this work we search for an observable property of deep neural networks that can help us make inferences about meta-learning generalization to a given target problem as training time t progresses. We thus hypothesize the existence of a function ψ of f(x; θt) and p(Ttarget) such that ψ(f, p(Ttarget), t) ∝ Acctarget(t), and set on to find ψ. More specifically, we want to estimate t∗ = argmaxtAcctarget(t) using only a few target examples (a single support set) when approximating ψ. To support a general statement on generalization in Meta-Learning and the nature of ψ, we conduced experiments across a wide range of meta-learning settings. We used different meta-learning algorithms, three of the most pivotal ones of the field : MAML (Finn et al., 2017), Prototypical Networks (Snell et al., 2017), and Matching Networks (Vinyals et al., 2016). We considered both the few-shot learning and few-shot transfer learning regimes, with 1-shot and 5-shot experiments, and various few-shot datasets for p(Ttrain) and p(Ttarget), such as MiniImagenet and Omniglot, but also many others included in Meta-Dataset (Triantafillou et al., 2020). We also used different architectures : the standard 4-layer CNN proposed by (Vinyals et al., 2016), as well as a ResNet as used in (Triantafillou et al., 2020). For full experimental details, refer to Appendix A. Here we present the experimental results that progressively suggest that variation of generalization can be efficiently estimated from simple metrics on the neural activation dynamics : Observation 1: For a deep neural network, the variation of target generalization, as a function of training time, frequently correlates with simple statistics characterizing how its feature extractor responds to the target input distribution: In many meta-learning settings we observed that Acctarget is proportional to relatively simple metrics (denoted as ψ1). One such metric is the expected inner product between representations: ψ1(ϕ(X)) . = Exi,xj∼p(x)[ϕ(xi) Tϕ(xj)], (2) where ψ1 is measured at the output of the feature extractor ϕ, where f(x) = g(ϕ(x)), and captures both the similarity among representation vectors and their norm. Moreover, we measure ψ1 on the representations of the target inputs Xtarget, before adapting the model to new tasks. This relation seems approximately independent of the target class identities Ytarget, and predominantly depends on how the feature extractor represents the marginal distribution over the input Xtarget of the target problem, i.e. ψ1(ϕ(Xtarget), t) ∝ Acctarget(t). Example in Fig.4, complete results in App.B.2.1. Figure 4: Average target task accuracy as a function of training iteration: Acctarget(t). 
Observation 1: The variation of generalization (Acctarget), for a deep neural network, frequently correlates with simple statistics characterizing how its feature extractor ϕ responds to the target input distribution p(xtarget). For example, here we show ψ1, which is simply the expected inner product between individual representation vectors, which follows the same trend as Acctarget(t) and peaks roughly at the same time. Here ψ1 is vertically rescaled to match the range of the target accuracy. See App.B.2.1 for full experiments across multiple settings. Observation 2: A simple statistic on the neural activations, if computed at the right layer of a network, can often strongly correlate with generalization, but this layer may change depending on the setting. In a deep neural network a feature extractor is composed ofL hidden-layers: ϕ(x) = (ϕL◦ϕL−1◦...◦ϕ1)(x). In many settings, ψ1 measured at the last layer ϕL isn’t proportional to Acctarget, but the relation instead occurs at a lower layer ϕl. Thus, rather than just examining the last layer representation dynamics, we often need to consider the neural activations of the whole feature extractor (See Fig. 3 and Eq. 3). More precisely, we shall consider the evolution throughout time or the neural activation dynamics of a network, i.e.: ψ(Φ(Xtarget, t)) ∝ Acctarget(t), or expressed in the form of Eq. 4. See an example at Fig.5, or App.B.2.2 for full experiments across multiple settings. Φ(X) . = {ϕl(X) | l ∈ [1..L]} (3) Φ(X, t) . = Φ(X | θt) (4) Observation 3: Simple statistics of the activations correlate to generalization, but they may change. One needs to find the right statistic depending on the setting. In our experiments we observe that for many settings Acctarget(t) doesn’t consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l. From this we conjectured that perhaps ψ1 is a special case in a more general function space Ψ, a hypothesis space or set of functions that predict generalization, or more formally : Ψ .= {ψ | ψ(Φ(Xtarget, t)) ∝ Acctarget(t)} where |Ψ| > 1. From this perspective we ultimately care about finding the function ψ in Ψ that, given the meta-learning setting involved, minimizes the true objective d defined below : d = max t (Acctarget(t))−Acctarget(argmax t ψ(Φ(Xtarget, t))) (5) The natural question that follows is, what may be the characteristics of such function space Ψ ? We address this by formulating a few inductive biases and assumptions which then inform our subsequent experiments. We first note that the complexity of Ψ must be large enough so that, in most meta-learning settings in the few-shot regime, Ψ contains a good solution function ψ∗ such that d is low. The complexity shouldn’t be too large either, since to find ψ∗ we will optimize an indirect empirical objective d̂ (Sec.3). This is especially important in few-shot transfer learning. Furthermore, since Φ(Xtarget) itself has a probability distribution, our hypothesis space Ψ should be a set of functions ψ that are sample estimators of some population statistics of the distribution of Φ(Xtarget). However, since we only have access to a very few samples, those statistics should be relatively simple so as to keep down the standard error of their estimators. We propose to use descriptive statistics based on moments, and limit them up to the second-order (higher-order moments are harder to estimate accurately). 
Finally, since we ultimately need to find a one-dimensional curve ψ∗(t) to compare to Acctarget(t), our hypothesis space Ψ should contain scalar-valued functions, which we get by computing moments on norms of the activation vectors. In our experiments we observe that when Acctarget(t) doesn’t consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l, it does typically correlate with one of the following alternative metrics : the norm of activations ψ2; the dispersion of activations ψ3; or the feature-wise variance of activations ψ4. We have observed that generalization sometimes actually correlate with the negative (i.e. −ψ) of either of ψ1 to ψ4. ψ2 . = Ex[‖ϕl(x)‖22] (6) ψ3 . = Ex[‖ϕl(xi)−ϕl(xj)‖22] (7) ψ4 . = Ex[Vark(ϕl(xi,k))] (8) All the metrics ψ1 to ψ4 and their negatives can actually be expressed by a linear combination of the following moments (assuming ReLU activation functions) : m1 = 1 n n∑ i=1 ‖ϕl(xi)‖21 m2 = 1 n n∑ i=1 ‖ϕl(xi)‖22 m3 = 1 n2 n∑ i=1 n∑ j=1 ‖ϕl(xi)− ϕl(xj)‖22 (9) such that those moments define the function space Ψ = {ψ(ϕl(X);w) | w ∈ R3, l ∈ [1..L]} where ψ(ϕl(X);w) = w1m1 + w2m2 + w3m3 and w = [w1, w2, w3] ∈ R3. This parametric function space Ψ, while being relatively simple, can express a variety of properties of activations, such as their norm, dispersion, feature-wise variance, inner product, positively or negatively, or even a combination of properties. In Tab.3 of App. B.3 we have experimentally verified that Ψ has enough complexity to contain a good solution function ψ∗, for many meta-learning settings, both in few-shot learning and few-shot transfer learning.Observation 4: The Variation of generalization can be estimated by using just a few target input examples: Given a function ψ(ϕl(Xtarget)) which correlates with generalization Acctarget(t). when ψ is measured on the activation dynamics for just a single unlabelled support set S, the estimated early-stopping time t∗ψ = argmaxt ψ(t) typically shows very low variance with respect to which task is used for the estimation (Fig. 7). We conjecture that this might be due lack of dependency of ψ on Ytarget, where ψ a more general property of the activations for p(xtarget). This makes early-stopping from such function ψ practical. 3 INFERRING WHICH FUNCTION OF THE NEURAL ACTIVATION DYNAMICS CORRELATES TO GENERALIZATION, AND AT WHICH LAYER TO MEASURE IT Our results in Sec.2.2 suggest that in Meta-Learning there exists a function ψ that, when measured on the neural activation dynamics Φ(Xtarget, t), closely relates to the target generalization Acctarget(t). However since this function is not unique and depends on the meta-learning setting involved (metalearning algorithm, neural architecture, training and target distributions, etc), we propose to cast the discovery of ψ as a machine learning problem. See Fig. 8, which schematizes our framework. At this point we know that, given a meta-learning setting, our function space Ψ should contain a good solution ψ∗ such the true objective d is low. Now we need a way to actually find ψ∗. We can do so by optimizing an indirect, empirical objective d̂, defined below. 3.1 Few-Shot Learning (FSL) : Inferring ψ∗ and l∗ from the validation dynamics and accuracy In few-shot learning, novel tasks from p(Ttarget) involve previously unseen classes but the input domain of Xtarget can be assumed to be similar to that of Xtrain, and therefore to that of Xvalid. 
We thus use the dynamics Φ(Xvalid, t) and the validation accuracy Accvalid (as a proxy for Acctarget) in order to learn the optimal function ψ∗ and the layer l∗ where it should be measured, and we do so by minimizing the empirical objective d̂FSL (Eq. 10). We then compute our actual early-stopping time estimate t̂∗FSL as the time when ψ∗(ϕl∗(Xtarget, t)), measured on the few support input examples of a single target task, reaches its peak (Eq. 11).

$$\hat{d}_{\mathrm{FSL}} = \max_t \mathrm{Acc}_{\mathrm{valid}}(t) \;-\; \mathrm{Acc}_{\mathrm{valid}}\Big(\arg\max_t \psi\big(\phi_l(X_{\mathrm{valid}}, t); w\big)\Big) \qquad (10)$$

$$\hat{t}^{\,*}_{\mathrm{FSL}} = \arg\max_t \psi\big(\phi_{l^*}(X_{\mathrm{target}}, t); w^*\big) \quad \text{where} \quad w^*, l^* = \arg\min_{w, l} \hat{d}_{\mathrm{FSL}} \qquad (11)$$

3.2 Few-Shot Transfer Learning (FSTL): Meta-overfitting often happens when the target dynamics diverge from those of the source input domain

When the target problem is from an entirely new dataset, we can't use Accvalid as a proxy for Acctarget, and we need another objective function to learn ψ∗. However, we can learn ψ∗ by analyzing Φ(Xtarget, t), the neural activation dynamics of the target domain, and comparing them with Φ(Xvalid, t). Assume that for a given target problem, optimal generalization doesn't happen at the same time as for the source domain, i.e., t∗ ≠ t∗valid, and more precisely, assume t∗ < t∗valid. Typically, a generalization curve is increasing between t0 and its maximum, and decreasing after the maximum. This implies that the curves of Acctarget(t) and Accvalid(t) are positively correlated between t0 and t∗, as they are both increasing, whereas they are negatively correlated between t∗ and t∗valid, since Acctarget(t) is decreasing while Accvalid(t) is still increasing. In a sense, the two generalization behaviors “diverge” at t∗, since at that moment their correlation goes from positive to negative (see Fig. 9a). Since we assume the neural activation dynamics can characterize the generalization behavior of a model, we conjecture that Φ(Xtarget, t) and Φ(Xvalid, t) might also “diverge” at t∗ under some function ψ(ϕl∗(X, t); w∗), such that the sample Pearson correlation r of ψ(Φ(Xtarget, t); w∗) and ψ(Φ(Xvalid, t); w∗) also goes from positive to negative near t∗ (see Fig. 9b). Our experiments indeed suggest that functions ψ exhibiting more divergence are more likely to capture generalization to the target problem. This analysis can be found in App.***. We thus search for the weights w∗ and hidden layer l∗ so as to observe the most negative correlation between ψ(ϕl(Xtarget, t); w) and ψ(ϕl(Xvalid, t); w) in the time interval [t0, t∗valid] (Eq. 12). We then estimate t∗ by finding the time t̂∗FSTL at which ψ∗target(t) and ψ∗valid(t) diverge (Eq. 13). See Fig. 10a and 10b for a demonstration.

$$\hat{d}_{\mathrm{FSTL}} = r\big(\psi_{\mathrm{target}}(t), \psi_{\mathrm{valid}}(t)\big) = \frac{\sum_t \big(\psi_{\mathrm{target}}(t) - \bar{\psi}_{\mathrm{target}}\big)\big(\psi_{\mathrm{valid}}(t) - \bar{\psi}_{\mathrm{valid}}\big)}{\sqrt{\sum_t \big(\psi_{\mathrm{target}}(t) - \bar{\psi}_{\mathrm{target}}\big)^2}\,\sqrt{\sum_t \big(\psi_{\mathrm{valid}}(t) - \bar{\psi}_{\mathrm{valid}}\big)^2}} \qquad (12)$$

$$\hat{t}^{\,*}_{\mathrm{FSTL}} = \arg\max_t \Big( t \times r\big(\psi^*_{\mathrm{target}}, \psi^*_{\mathrm{valid}}, [t_0, t < t^*_{\mathrm{valid}}]\big) \Big) \qquad (13)$$

with the shorthand notations ψtarget(t) := ψ(ϕl(Xtarget, t); w) and ψvalid(t) := ψ(ϕl(Xvalid, t); w), where ψ̄ denotes an average over t and ψ∗ := ψ(ϕl∗(·); w∗). Here again we minimize an empirical objective, and w∗, l∗ = argminw,l d̂FSTL.

4 RELATED WORK

In recent years, some works have started to analyze theoretical aspects of gradient-based meta-learning. (Finn et al., 2019) examine the online Meta-Learning setting, where the agent faces a sequence of tasks, and they provide a theoretical upper bound for the regret of MAML.
(Denevi et al., 2019) study meta-learning through the perspective of biased regularization, where the model adapts to new tasks by starting from a biased parameter vector, which we refer to in this work as the meta-training solution. For simple tasks such as linear regression and binary classification, they prove the advantage of starting from the meta-training solution when learning new tasks via SGD. They use an assumption on task similarity where the weight vectors parameterizing the tasks are assumed to be close to each other. Working in the framework of Online Convex Optimization, where the model learns from a stream of tasks, (Khodak et al., 2019) assume that the optimal solution for each task lies in a small subset of the parameter space and use this assumption to design an algorithm such that the “Task-Averaged Regret (TAR)” scales with the diameter of this small subset of the parameter space, when using Reptile (Nichol et al., 2018), a first-order meta-learning algorithm. Bearing a stronger relation to our approach, (Guiroy et al., 2019) empirically study the objective landscapes of gradient-based meta-learning, with a focus on few-shot classification. They notably observed that average generalization to new tasks appears correlated with the average inner product between their gradient vectors. In other words, as gradients appear more similar in inner product, the model will, on average, better generalize to new tasks after following a step of gradient descent. More recently, a few works have studied the properties of the feature extractor ϕ in the context of Meta-Learning. Notably, the authors of (Raghu et al., 2019) showed empirically that when neural networks adapt to novel tasks, in the few-shot setting with MAML and MiniImagenet, the feature extractor network is approximately invariant, while the final linear classifier undergoes significant functional changes. They then performed experiments where ϕ is frozen at meta-test time, while only the classifier g is fine-tuned, and observed very similar generalization performance to the regular fine-tuning procedure. Intuitively, these results suggest that the variation of generalization along meta-training time t might be predominantly driven by some evolving but unknown property of the feature extractor. The authors of (Goldblum et al., 2020) observed that generalization in few-shot learning was related to how tightly embeddings from new tasks were clustered around their respective classes. However, the authors of (Dhillon et al., 2019) observed that the embeddings at the output of ϕL were poorly clustered around their classes, but that clustering was important when measuring the logit outputs of g. This is similar to what the authors of (Frosst et al., 2019) observed when dealing with new out-of-distribution examples. This suggests that if generalization is related to a property of the feature extractor, this property might be class agnostic. This is also something that we observed in our very early experiments (the expected inner product between representation vectors strongly correlated with generalization, irrespective of taking the class identities into account). But in our work we observed that this property might not depend only on the output of the feature extractor. Earlier works demonstrated that in transfer learning, intermediate layers of ϕ might be critical to the ability of the model to transfer knowledge (Yosinski et al., 2014).
5 EARLY-STOPPING FOR META-LEARNING BY ANALYZING THE NEURAL ACTIVATION DYNAMICS OF A FEW TARGET INPUT EXAMPLES

Here we present experimental results on the performance of our early-stopping method. For each experiment, we only use the unlabelled input examples from the support set of a single target task to evaluate the neural activation dynamics. At the beginning of an experiment, we thus randomly sample a task Ti from p(Ttarget) and only keep its set of support input examples. We repeat the experiment for multiple (50) independently and identically distributed support sets from p(Ttarget) and take the average performance. Each such experiment is then repeated for 5 independent training runs. As a baseline for comparison, we use the validation early-stopping approach. Since ψ1 to ψ4 work in practice, we will use them as our function space Ψ, but the method that we develop applies as well to the continuous function space defined above, and we present some experimental results in App. B.5 where we apply our early-stopping method with the continuous function space.

We begin by demonstrating our proposed early-stopping method in few-shot transfer learning, across various target datasets, and present the results in Tab. 1. We use the standard 4-layer CNN architecture, with MAML, trained on MiniImagenet 5-way 1-shot. When the target dataset is Omniglot, the performance of the validation baseline (51%) is significantly lower than the optimal generalization (76%), presumably because of the distributional shift between MiniImagenet and Omniglot. In such a scenario our method appears to offer a significant advantage over the baseline, since we obtain 75% target accuracy, quite close to the optimal generalization. In scenarios where the target domain is arguably more similar to that of the source domain, e.g., transfer from MiniImagenet to Imagenet, early-stopping from the validation accuracy yields a performance (35.0%) closer to the optimal generalization (35.6%), and in such a case our method performs only slightly worse (34.8%) than the validation baseline. We observe a similar trend when the model is trained on the Quickdraw dataset: when transferring to Omniglot, the validation baseline leads to sub-optimal generalization, but estimating the target accuracy from the neural activation dynamics allows us to halt the training close to the optimal time. When transferring to Traffic Sign, the baseline yields reasonable performance, and our method is roughly on par with it. From this point, we will focus on settings where there is a significant gap in performance between the validation baseline and optimal generalization, for example the transfer from Birds to Quickdraw, the one presented in our illustration of Sec. 2.1. Next we present similar experiments with two other meta-learning algorithms, Prototypical Networks and Matching Networks, which are shown in Tab. 2.

6 CONCLUSION

In this work we have presented empirical evidence that the overfitting point of Meta-Learning for deep neural networks for few-shot classification can often be estimated from simple statistics of neural activations and how they evolve throughout meta-training time. Our results suggest that key properties, or statistics, of how feature extractors respond to the target input distribution can be found which are simple enough to be estimated from just a few unlabelled target input examples. However, the specific function of the activations, and the layer at which to measure them, need to be inferred.
We demonstrate that these functions and layers of interest can be inferred and used to guide early stopping, leading to a new and effective method for early stopping which represents a significant departure from the de facto standard practice of using a validation set. In few-shot learning these ingredients can be inferred from how the neural activation dynamics of the validation data relate to the validation accuracy. In few-shot transfer learning, they are inferred by searching for the function (in a given function space), and the layer at which to measure it, for which the activation dynamics of the target input domain “diverge” the most from those of the source domain. Finally, we have demonstrated how this approach can be used to optimize for target generalization in practice, performing early-stopping and thus improving overall generalization to distributions of novel few-shot classification tasks, while only using unlabelled support examples from a single target task.

A EXPERIMENTAL DETAILS

CNN: We use the architecture proposed by Vinyals et al. (2016), also used by Finn et al. (2017), consisting of 4 modules stacked on top of each other, each composed of 64 filters of 3 × 3 convolution, followed by a batch normalization layer, a ReLU activation layer, and a 2 × 2 max-pooling layer. With Omniglot, strided convolution is used instead of max-pooling, and images are downsampled to 28 × 28. With MiniImagenet, we used fewer filters to reduce overfitting: we used 48, while MAML used 32. As the loss function to minimize, we use the cross-entropy between the predicted classes and the target classes.

ResNet-18: We use the same implementation of the Residual Network as in (Triantafillou et al., 2020). For most of the hyperparameters, we follow the setup of (Triantafillou et al., 2020), but we set the main few-shot learning hyperparameters so as to follow the original MAML setting more closely, and in each setting we consider a single target dataset at a time, with a fixed number of shots and classification ways. We use 5 steps of gradient descent for the task adaptations and 15 query shots to evaluate the test accuracy of tasks. We don't use any learning rate decay during meta-training, and use a step size of 0.01 when fine-tuning the models to new tasks.

Datasets: We use the MiniImagenet and Omniglot datasets, as well as the many datasets included in the Meta-Dataset benchmark (Triantafillou et al., 2020).

B COMPLETE EXPERIMENTAL RESULTS

B.1 THE ISSUE OF USING A VALIDATION SET FOR EARLY-STOPPING IN META-LEARNING

B.2 THE RELATION BETWEEN THE NEURAL ACTIVATION DYNAMICS AND GENERALIZATION TO NOVEL TASKS

B.2.1 RELATION BETWEEN THE REPRESENTATION SPACE OF THE FEATURE EXTRACTOR AND TARGET GENERALIZATION

Here we present experimental results to support Observation 1 made in Sec. 2.2, showing that the variation of generalization along meta-training time can be captured by a function of the neural activation dynamics that is independent of class labels.

B.2.2 NEURAL ACTIVATION DYNAMICS: DIFFERENT LEVELS OF THE FEATURE EXTRACTOR CAN REVEAL THE VARIATION OF GENERALIZATION

B.2.3 DIFFERENT FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS CAN REVEAL THE VARIATION OF GENERALIZATION

By expanding the experimental setup further, we observed instances where a given metric had a strong correlation with generalization but in a negative sense, i.e., it was actually its argmin that coincided with the optimal early-stopping time t∗. See Fig. 14 for examples of this phenomenon.
We later observed that other statistical estimators can correlate with generalization.

Figure 15: Metrics of the representation space other than the expected inner product of Eq. 2 may have a strong correlation with generalization. (a) Prototypical Network, VGG Flower, 5-way 1-shot: out of three metrics which in other cases may be related to generalization (as in (b), (c), (d) and Sec. ??), here only the expected l2 dispersion has a strong relation with generalization. (b) Expected l2 norm (Eq. 6). (c) Expected squared l2 dispersion (Eq. 7), Prototypical Network, VGG Flower. (d) Expected feature-wise variance (Eq. 8), Prototypical Network, Omniglot to Quickdraw.

These results motivate our approach of considering a family of functions Ψ in which we must find the optimal function ψ∗ given the setting, rather than trying to discover a single universal metric that would correlate with generalization in all scenarios. Even if such a metric exists, it may not be estimable with enough efficiency to satisfy the requirement of using only a single support set to estimate t∗.

B.2.4 FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS: TASK-WISE VARIANCE OF THE ESTIMATE t∗ψ

Here we present empirical results on the task-wise variance discussed in Observation 4 of Sec. 2.2. We begin by showing the task-wise variance of the few-shot accuracy when evaluated with a single target task and assuming access to the query examples (15 shots). The few-shot accuracy exhibits a high variance, as different tasks peak at very different times, making it unfit to estimate t∗. On the other hand, for the metrics from Sec. 2.2, which are based on low-order statistics (mean and variance), the estimated early-stopping time exhibits drastically lower variance. See Fig. 16 for an example, where we use MAML in few-shot learning (5-way 1-shot) with the Aircraft dataset, and where we use the expected squared l2 norm as the metric. As we can see in Fig. 16, measuring the metric on different tasks merely offsets the response curve but bears almost no change on the trend of the curve itself. This also relates to our assumption that the variation of target generalization in Meta-Learning might be linked to a function of the neural activation dynamics that is class agnostic.

B.3 CAPACITY OF THE CONTINUOUS FUNCTION SPACE Ψ (DEFINED BY THE THREE MOMENTS OF EQ. 9) TO CONTAIN GOOD SOLUTIONS ψ∗

The moments m1, m2 and m3 of Eq. 9 define the parametric function space Ψ = {ψ(ϕl(X); w) | w ∈ R³, l ∈ [1..L]}, where ψ(ϕl(X); w) = w1 m1 + w2 m2 + w3 m3 and w = [w1, w2, w3] ∈ R³. We have experimentally observed that Ψ has enough complexity to contain a good solution function ψ∗ for different meta-learning settings, both in few-shot learning and few-shot transfer learning, as shown in Tab. 3.
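To make the capacity claim of App. B.3 concrete, the following sketch (NumPy; moments and psi are illustrative names, and the input is assumed to be an (n, d) matrix of ReLU activations at one layer) computes the three moments of Eq. 9 and evaluates ψ(ϕl(X); w) = w1 m1 + w2 m2 + w3 m3. The example weight vectors in the comments recover ψ2 and ψ3 exactly, the expected inner product ψ1 via the identity that the mean over pairs of ⟨ϕi, ϕj⟩ equals m2 - m3/2, and the feature-wise variance ψ4 for non-negative (ReLU) activations.

```python
import numpy as np

def moments(phi_l: np.ndarray) -> np.ndarray:
    """m1, m2, m3 of Eq. 9 from an (n, d) matrix of activations at layer l."""
    m1 = np.mean(np.linalg.norm(phi_l, ord=1, axis=1) ** 2)      # mean squared L1 norm
    m2 = np.mean(np.sum(phi_l ** 2, axis=1))                     # mean squared L2 norm
    sq = np.sum(phi_l ** 2, axis=1)
    gram = phi_l @ phi_l.T
    m3 = np.mean(sq[:, None] + sq[None, :] - 2.0 * gram)         # mean pairwise squared distance
    return np.array([m1, m2, m3])

def psi(phi_l: np.ndarray, w: np.ndarray) -> float:
    """psi(phi_l(X); w) = w1*m1 + w2*m2 + w3*m3, the parametric family of App. B.3."""
    return float(w @ moments(phi_l))

# Example weight vectors (d = number of features at layer l):
#   [0, 1, 0]           -> psi_2, expected squared L2 norm (Eq. 6)
#   [0, 0, 1]           -> psi_3, expected squared L2 dispersion (Eq. 7)
#   [0, 1, -0.5]        -> psi_1, expected inner product (Eq. 2), since mean_ij <phi_i, phi_j> = m2 - m3/2
#   [-1/d**2, 1/d, 0]   -> psi_4, feature-wise variance (Eq. 8), using sum_k phi_k = ||phi||_1 for ReLU
```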
B.4 LEARNING ψ∗ IN FEW-SHOT TRANSFER LEARNING

B.4.1 FINDING w∗ AND l∗ WHERE ψtarget(t) AND ψvalid(t) “DIVERGE” THE MOST

As we added more experimental settings for Few-Shot Transfer Learning, we observed instances where, for a given metric measured at the representation space ϕL, there was no strong link with generalization, but when measuring the metric at hidden layers lower than ϕL, we did observe a strong correlation with generalization. We illustrate this in Fig. 17. These results motivated our approach of considering the whole neural activation dynamics (all layers), rather than only the final layer of the feature extractor, in our search for functions linked to generalization. Then, in Fig. 18, we conducted a more systematic analysis concerned with identifying the right functions ψ∗, with results suggesting that functions ψ showing a stronger “divergence” (negative correlation) between the target and validation dynamics are more likely to lead to a higher target accuracy if we stop at their peak time.

Figure 17: Generalization can correlate with a metric at different levels of neural activations. Here the critical layer l∗ (squared in red) is identified by searching for the highest divergence between the validation and target neural activation dynamics. (a) Transfer: Omniglot to MiniImagenet, MAML. The critical depth, i.e., the one where measuring the expected inner product (marked as RIP) predicts generalization on the target domain, is at layer 1, even though the critical depth for the source domain was at layer 4. (b) Transfer: MiniImagenet to Omniglot, MAML. The critical depth for the target domain is at layer 4, the same as for the source domain.

Table 4: Correlation between D(ψtarget, ψvalid) and generalization, for different few-shot transfer learning settings. The correlation is computed as in the analysis of Fig. 18c. The results show that functions exhibiting high divergence between the validation and target neural activation dynamics are likely to lead to good generalization performance on the target distribution.

Setting                                        Correlation
MAML, Quickdraw → Omniglot                     0.82
Prototypical Network, Omniglot → Quickdraw     0.75
Matching Network, MiniImagenet → Omniglot      0.81

B.5 EVALUATING THE PERFORMANCE OF OUR EARLY-STOPPING METHOD WHEN USING THE CONTINUOUS FUNCTION SPACE Ψ DEFINED BY THE THREE MOMENTS OF EQ. 9

Here we present a few experimental results where we apply our early-stopping method in the continuous function space Ψ. Since there are only three weights to tune, namely w1, w2 and w3, we don't suffer from the curse of dimensionality, which is the classic motivation for using gradient-based optimization of neural networks with many parameters. This allows for a search-based optimization of w; a sketch of such a search is given after Table 7 below.

Figure 18: (a) Functions with high divergence between the validation and target dynamics are more likely to achieve higher generalization (correlation coefficient R = 0.422, p-value = 0.0). (b) Average divergence vs. average performance (10 bins). (c) Strong correlation between average divergence and average performance (10 bins; R = 0.8255, p-value = 0.0061). (d) Solution function measured on the validation and target examples.

Table 6: Performance of our method - Few-Shot Transfer Learning, Prototypical Network.

Table 7: Performance of our method - Few-Shot Transfer Learning, Matching Network.

Algorithm           Source dataset   Target dataset   Baseline   Our method
Matching Network    MiniImagenet     Omniglot         73.75%     75%
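As a complement to Sec. 3 and App. B.5, here is a minimal sketch of the search itself (plain Python/NumPy; the data layout, candidate grid, and function names are assumptions made for illustration rather than a description of the released code). It selects (w∗, l∗) by minimizing d̂FSL of Eq. 10 against the validation accuracy in few-shot learning, or by maximizing the divergence (most negative correlation, Eq. 12) between the target and validation curves in few-shot transfer learning; Eqs. 11 and 13 then give the stopping time.

```python
import numpy as np
from itertools import product

# Assumed data layout: moments_split[l] is an array of shape (T, 3) holding (m1, m2, m3) of Eq. 9,
# recorded at layer l for the validation or target inputs at each of T meta-training checkpoints.

def psi_curve(moments_l: np.ndarray, w: np.ndarray) -> np.ndarray:
    """psi(phi_l(X, t); w) = w1*m1 + w2*m2 + w3*m3, evaluated at every checkpoint t."""
    return moments_l @ w

def select_fsl(moments_valid, acc_valid, candidate_w):
    """FSL: pick (l*, w*) minimizing d_hat_FSL of Eq. 10 on the validation curve."""
    best = None
    for l, w in product(range(len(moments_valid)), candidate_w):
        t_stop = int(np.argmax(psi_curve(moments_valid[l], w)))
        d_hat = float(acc_valid.max() - acc_valid[t_stop])
        if best is None or d_hat < best[0]:
            best = (d_hat, l, w)
    return best[1], best[2]

def select_fstl(moments_valid, moments_target, acc_valid, candidate_w):
    """FSTL: pick (l*, w*) where target and validation dynamics are most negatively
    correlated on [t0, t*_valid] (Eq. 12)."""
    t_valid = int(np.argmax(acc_valid))            # validation early-stopping time t*_valid
    best = None
    for l, w in product(range(len(moments_valid)), candidate_w):
        pv = psi_curve(moments_valid[l], w)[: t_valid + 1]
        pt = psi_curve(moments_target[l], w)[: t_valid + 1]
        r = float(np.corrcoef(pt, pv)[0, 1])
        if best is None or r < best[0]:
            best = (r, l, w)
    return best[1], best[2]

# A coarse grid over w is enough since the weight space is only 3-dimensional, e.g.:
# candidate_w = [np.array(v, dtype=float)
#                for v in product((-1.0, -0.5, 0.0, 0.5, 1.0), repeat=3) if any(v)]
# FSL early stopping (Eq. 11): t_hat = argmax_t of psi_curve(moments_target[l_star], w_star).
# FSTL early stopping (Eq. 13): stop at the time where the selected target and validation
# curves diverge, i.e., where their running correlation turns from positive to negative.
```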
1. What is the main contribution of the paper regarding meta-learning training regimes?
2. What are the strengths of the proposed approach, particularly in addressing a practical problem in meta-learning training setups?
3. What are the concerns regarding the empirical nature of the proposed method and its computational cost?
4. Are there any other possible ways to analyze the domain gap between meta-train and test tasks?
5. How does the reviewer suggest improving the proposed method, particularly in reducing the search space?
6. Is there a potential baseline that the authors have missed, which could be used as a comparison to the proposed method?
Summary Of The Paper Review
Summary Of The Paper In this paper, the authors address an important but under-explored area of the meta-learning training regime, which is to identify when to early-stop during meta-training such that the model generalizes better to few-shot test tasks. The authors propose multiple such metrics to do this (justified empirically) and show that their method can outperform the current practice of early-stopping based on the average performance on a set of validation tasks. Review Strengths: The authors address an important practical problem of a meta-learning training setup which so far has not garnered enough attention -- how to decide what is the best model to save during meta-training such that it generalizes well to few-shot test tasks, or in other words, how to do early stopping effectively. This is not a problem in a standard supervised training setup because both the classes and the underlying data distribution do not change between train and test, which is not the case for meta-learning training. The authors design multiple such metrics to determine the time t during training which (most likely) produces the best model. The authors also show that these metric(s) can outperform the current practice of saving the best model using the average performance of the validation tasks, at times significantly if there is a large domain-gap between meta-train and test. I like the presentation of the paper -- how the authors have started from defining a simple metric (inner-product of last-layer activations on a test task) and progressively improved it to handle different scenarios, finally handling the case when the data distribution is entirely different between train and test. I also liked the diversity of datasets chosen for the experiments, which could show why the simple metric was not sufficient and why modifying that metric was important. Overall, this paper provides a useful tool for meta-learning training and I find it has the potential to be widely adopted if some concerns around it can be mitigated (see my points below). Areas of Concern: The whole paper is based on conjectures and solutions coming out of running a set of experiments. There is not a lot of theoretical or intuitive justification as to why a particular metric is working for a method/dataset and why it is not working for others. For example, the authors mention that for some datasets, the information coming out of the last-layer features is the deciding factor while for some, it is not the last-layer features but some intermediate features. The paper does not explain why that is happening - is it because of the domain-gap between the train and test set, such that for one, the features are common until the last layer and for some, the features diverge at some early layers? Similarly, Fig 6 shows that a particular metric is good for MAML/CNN/Aircraft whereas another metric is good for Proto-Nets/CNN/VGG. The fact that the proposed method is predominantly empirical creates a doubt in my mind as to how we know that this set of metrics is enough/exhaustive. I'd have liked to see a more thorough analysis of the domain-gap (quantifiable if possible) between meta-train/test and then using that to decide which metric would be best, rather than trying out all possible metrics. Following up on the previous point, it also creates a question regarding the computational cost of this method to pick the best early-stopping time t.
If I understand the implementation correctly, to pick the best layer for which the product of activations correlates best with the test accuracy, we need to run it for all the layers of a given network and then pick the best. It means for a deep network like ResNet-101, we need to run it for all the 101 layers? And these are not the only metrics that the paper considers. Therefore, I am not sure if this method is practically feasible given its high computational demand during meta-training. To reiterate, using some other way of analyzing the domain-gap between train and test might help to reduce the search-space significantly. Finally, there is a simple baseline that I think the authors have missed. As the authors are using a randomly sampled task from the test distribution, how about using the performance of the model (during meta-training) on this task (rather than the average on the validation tasks) as the proxy to save the best model? How would this compare to the method(s) proposed in the paper? I might be wrong but I believe it will work non-trivially better than the validation baseline, especially when there is a large domain-gap.
Observation 1: The variation of generalization (Acctarget), for a deep neural network, frequently correlates with simple statistics characterizing how its feature extractor ϕ responds to the target input distribution p(xtarget). For example, here we show ψ1, which is simply the expected inner product between individual representation vectors, which follows the same trend as Acctarget(t) and peaks roughly at the same time. Here ψ1 is vertically rescaled to match the range of the target accuracy. See App.B.2.1 for full experiments across multiple settings. Observation 2: A simple statistic on the neural activations, if computed at the right layer of a network, can often strongly correlate with generalization, but this layer may change depending on the setting. In a deep neural network a feature extractor is composed ofL hidden-layers: ϕ(x) = (ϕL◦ϕL−1◦...◦ϕ1)(x). In many settings, ψ1 measured at the last layer ϕL isn’t proportional to Acctarget, but the relation instead occurs at a lower layer ϕl. Thus, rather than just examining the last layer representation dynamics, we often need to consider the neural activations of the whole feature extractor (See Fig. 3 and Eq. 3). More precisely, we shall consider the evolution throughout time or the neural activation dynamics of a network, i.e.: ψ(Φ(Xtarget, t)) ∝ Acctarget(t), or expressed in the form of Eq. 4. See an example at Fig.5, or App.B.2.2 for full experiments across multiple settings. Φ(X) . = {ϕl(X) | l ∈ [1..L]} (3) Φ(X, t) . = Φ(X | θt) (4) Observation 3: Simple statistics of the activations correlate to generalization, but they may change. One needs to find the right statistic depending on the setting. In our experiments we observe that for many settings Acctarget(t) doesn’t consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l. From this we conjectured that perhaps ψ1 is a special case in a more general function space Ψ, a hypothesis space or set of functions that predict generalization, or more formally : Ψ .= {ψ | ψ(Φ(Xtarget, t)) ∝ Acctarget(t)} where |Ψ| > 1. From this perspective we ultimately care about finding the function ψ in Ψ that, given the meta-learning setting involved, minimizes the true objective d defined below : d = max t (Acctarget(t))−Acctarget(argmax t ψ(Φ(Xtarget, t))) (5) The natural question that follows is, what may be the characteristics of such function space Ψ ? We address this by formulating a few inductive biases and assumptions which then inform our subsequent experiments. We first note that the complexity of Ψ must be large enough so that, in most meta-learning settings in the few-shot regime, Ψ contains a good solution function ψ∗ such that d is low. The complexity shouldn’t be too large either, since to find ψ∗ we will optimize an indirect empirical objective d̂ (Sec.3). This is especially important in few-shot transfer learning. Furthermore, since Φ(Xtarget) itself has a probability distribution, our hypothesis space Ψ should be a set of functions ψ that are sample estimators of some population statistics of the distribution of Φ(Xtarget). However, since we only have access to a very few samples, those statistics should be relatively simple so as to keep down the standard error of their estimators. We propose to use descriptive statistics based on moments, and limit them up to the second-order (higher-order moments are harder to estimate accurately). 
Finally, since we ultimately need to find a one-dimensional curve ψ∗(t) to compare to Acctarget(t), our hypothesis space Ψ should contain scalar-valued functions, which we get by computing moments on norms of the activation vectors. In our experiments we observe that when Acctarget(t) doesn’t consistently correlate with ψ1(ϕl(Xtarget)) at any specific layer l, it does typically correlate with one of the following alternative metrics : the norm of activations ψ2; the dispersion of activations ψ3; or the feature-wise variance of activations ψ4. We have observed that generalization sometimes actually correlate with the negative (i.e. −ψ) of either of ψ1 to ψ4. ψ2 . = Ex[‖ϕl(x)‖22] (6) ψ3 . = Ex[‖ϕl(xi)−ϕl(xj)‖22] (7) ψ4 . = Ex[Vark(ϕl(xi,k))] (8) All the metrics ψ1 to ψ4 and their negatives can actually be expressed by a linear combination of the following moments (assuming ReLU activation functions) : m1 = 1 n n∑ i=1 ‖ϕl(xi)‖21 m2 = 1 n n∑ i=1 ‖ϕl(xi)‖22 m3 = 1 n2 n∑ i=1 n∑ j=1 ‖ϕl(xi)− ϕl(xj)‖22 (9) such that those moments define the function space Ψ = {ψ(ϕl(X);w) | w ∈ R3, l ∈ [1..L]} where ψ(ϕl(X);w) = w1m1 + w2m2 + w3m3 and w = [w1, w2, w3] ∈ R3. This parametric function space Ψ, while being relatively simple, can express a variety of properties of activations, such as their norm, dispersion, feature-wise variance, inner product, positively or negatively, or even a combination of properties. In Tab.3 of App. B.3 we have experimentally verified that Ψ has enough complexity to contain a good solution function ψ∗, for many meta-learning settings, both in few-shot learning and few-shot transfer learning.Observation 4: The Variation of generalization can be estimated by using just a few target input examples: Given a function ψ(ϕl(Xtarget)) which correlates with generalization Acctarget(t). when ψ is measured on the activation dynamics for just a single unlabelled support set S, the estimated early-stopping time t∗ψ = argmaxt ψ(t) typically shows very low variance with respect to which task is used for the estimation (Fig. 7). We conjecture that this might be due lack of dependency of ψ on Ytarget, where ψ a more general property of the activations for p(xtarget). This makes early-stopping from such function ψ practical. 3 INFERRING WHICH FUNCTION OF THE NEURAL ACTIVATION DYNAMICS CORRELATES TO GENERALIZATION, AND AT WHICH LAYER TO MEASURE IT Our results in Sec.2.2 suggest that in Meta-Learning there exists a function ψ that, when measured on the neural activation dynamics Φ(Xtarget, t), closely relates to the target generalization Acctarget(t). However since this function is not unique and depends on the meta-learning setting involved (metalearning algorithm, neural architecture, training and target distributions, etc), we propose to cast the discovery of ψ as a machine learning problem. See Fig. 8, which schematizes our framework. At this point we know that, given a meta-learning setting, our function space Ψ should contain a good solution ψ∗ such the true objective d is low. Now we need a way to actually find ψ∗. We can do so by optimizing an indirect, empirical objective d̂, defined below. 3.1 Few-Shot Learning (FSL) : Inferring ψ∗ and l∗ from the validation dynamics and accuracy In few-shot learning, novel tasks from p(Ttarget) involve previously unseen classes but the input domain of Xtarget can be assumed to be similar to that of Xtrain, and therefore to that of Xvalid. 
We thus use the dynamics Φ(Xvalid, t) and the validation accuracy Accvalid (as a proxy for Acctarget) in order to learn the optimal function ψ∗ and the layer l∗ where it should be measured, and we do so by minimizing the empirical objective d̂FSL (Eq.10). We then compute our actual early-stopping time estimate t̂∗FSL when ψ ∗(ϕl∗(Xtarget, t)), measured on the few support input examples of a single target tasks, reaches its peak (Eq.11). d̂FSL = max t Accvalid(t)−Accvalid(argmax t ψ(ϕl(Xvalid, t);w))) (10) t̂∗FSL = argmax t ψ(ϕl∗(Xtarget, t);w ∗) where w∗, l∗ = argmin w,l d̂FSL (11) 3.2 Few-shot transfer learning (FSTL) : Meta-overfitting often happens when the target dynamics diverge from those of the source input domain When the target problem is from an entirely new dataset, we can’t use Accvalid as a proxy for Acctarget, and we need another objective function to learn ψ∗. However, we can learn ψ∗ by analyzing Φ(Xtarget, t), the neural activation dynamics of the target domain, and comparing them with Φ(Xvalid, t). Assume that for a given target problem, optimal generalization doesn’t happen at the same time as for the source domain, i.e., t∗ 6= t∗valid, and more precisely, assume t∗ < t∗valid. Typically, a generalization curve is generally increasing between t0 and its maximum, whereas it is generally decreasing after the maximum. This implies that the curves of Acctarget(t) and Accvalid(t) are positively correlated between t0 and t∗, as they are both increasing, whereas they are negatively correlated between t∗ and t∗valid, since Acctarget(t) is decreasing while Accvalid(t) is still increasing. In a sense, the two generalization behaviors “diverge” at t∗, since at that moment their correlation goes from positive to negative (See Fig.9a). Since here we assume the neural activation dynamics can characterize the generalization behavior of a model, we conjecture that Φ(Xtarget, t) and Φ(Xvalid, t) might also “diverge” at t∗, under some function ψ(ϕl∗(X, t);w∗), such that the sample Pearson correlation r, of ψ(Φ(Xtarget, t),w∗) and ψ(Φ(Xvalid, t),w∗) also goes from positive to negative near t∗ (See Fig.9b). Our experiments indeed suggest that functions ψ exhibiting more divergence are more likely to capture generalization to the target problem. This analysis can be found in App.***. We thus search for the weights w∗ and hidden-layer l∗ so as to observe the most negative correlation between ψ(ϕl(Xtarget, t);w) and ψ(ϕl(Xvalid, t);w) in the time interval [t0, t∗valid] (Eq.12). We then estimate t∗ by finding the time t̂∗FSTL when ψ ∗ target(t) and ψ ∗ valid(t) diverge (Eq.13). See Fig.10a,10b for a demonstration. d̂FSTL = r(ψtarget(t), ψvalid(t)) = ∑ t(ψtarget(t)− ψ̄target(t))(ψvalid(t)− ψ̄valid(t))∑ t(ψtarget(t)− ψ̄target(t))2 ∑ t(ψvalid(t)− ψ̄valid(t))2 (12) t̂∗FSTL = argmax t ( t× r ( ψ∗target, ψ ∗ valid, [t0, t < t ∗ valid] )) (13) with shorthand notations ψtarget(t) . = ψ(ϕl(Xtarget, t);w) and ψvalid(t) . = ψ(ϕl(Xvalid, t);w) and ψ̄(t) denotes an average over t, and ψ∗ .= ψ(ϕl∗(·);w∗). Here again we minimize an empirical objective, and w∗, l∗ = argminw,l .d̂FSTL. 4 RELATED WORK In recent years, some works have started to analyze theoretical aspects of gradient-based metalearning. (Finn et al., 2019) examine the online Meta-Learning setting, where in online learning the agent faces a sequence of tasks, and they provide a theoretical upper bound for the regret of MAML. 
(Denevi et al., 2019) study meta-learning through the perspective of biased regularization, where the model adapts to new tasks by starting from a biased parameter vector, which we refer in this work as the meta-training solution. For simple tasks such as linear regression and binary classification, they prove the advantage of starting from the meta-training solution, when learning new tasks via SGD. They use an assumption on the task similarity where the weight vectors parameterizing the tasks are assumed to be close to each other. Working in the framework for Online Convex Optimization where the model learns from a stream of tasks, (Khodak et al., 2019) make an assumption that the optimal solution for each task lies in a small subset of the parameter space and use this assumption to design an algorithm such that the “Task-averaged-regret (TAR)” scales with the diameter of this small subset of the parameter space, when using Reptile (Nichol et al., 2018), a first-order meta-learning algorithm. Bearing a stronger relation to our approach, (Guiroy et al., 2019) empirically study the objective landscapes of gradient-based meta-learning, with a focus on few-shot classification. They notably observed that average generalization to new tasks appears correlated with the average inner product between their gradient vectors. In other words, as gradients appear more similar in inner product, the model will, on average, better generalize to new tasks, after following a step of gradient descent. More recently, a few works have studied the properties of the feature extractor ϕ in the context of Meta-Learning. Notably, the authors of (Raghu et al., 2019) showed empirically that when neural networks adapting to novel task, in the few-shot setting with MAML and MiniImagenet, the feature extractor network is approximately invariant, while the final linear classifier undergoes significant functional changes. They then performed experiments where ϕ is frozen at meta-test time, while only the classifier g is fine-tuned, and observed very similar generalization performance to the regular fine-tuning procedure. Intuitively, these results suggest that the variation, of generalization along meta-training time t, might be predominantly driven by some evolving but unknown property of the feature extractor. The authors of (Goldblum et al., 2020) observed that generalization in few-shot learning was related to how tightly embeddings from new tasks were clustered around their respective classes. However, the authors of (Dhillon et al., 2019) observed that the embeddings at the output of ϕL were poorly clustered around their classes, but that clustering was important when measuring the logit outputs of g. This is similar to what the authors (Frosst et al., 2019) observed when dealing with new Out-of-Distribution examples. This suggests that if generalization is related to a property of the feature extractor, this property might be class agnostic. This is also something that we observed in our very early experiments (expected inner product between representation vectors strongly correlated with generalization, irrespective of taking the class identities into account). But in our work we observed that this property might not only depend on the output of the feature extractor. Earlier works demonstrated that in transfer learning, intermediate layers of ϕ might be critical in the ability of the model to transfer knowledge (Yosinski et al., 2014). 
5 EARLY-STOPPING FOR META-LEARNING BY ANALYZING THE NEURAL ACTIVATION DYNAMICS OF A FEW TARGET INPUT EXAMPLES Here we present experimental results on the performance of our early-stopping method. For each experiment, we only use the unlabelled input examples from the support set of a single target task to evaluate the neural activation dynamics. At the beginning of an experiment, we thus randomly sample a task Ti from p(Ttarget) and only keep its set of support input examples. We repeat the experiment for multiple (50) independently and identically distributed support sets from p(Ttarget), and take the average performance. Each such experiment is then repeated for 5 independent training runs. As a baseline for comparison, we use the validation early-stopping approach. Since ψ1 to ψ4 work in practice, we will use them as our function space Ψ but the method that we develop applies as well to the continuous function space defined above, and we present some experimental results in App.B.5 where we apply our early-stopping method with the continuous function space. We begin by demonstrating our proposed early-stopping method in few-shot transfer learning, across various target dataset, and present the results in Tab.1. We use the standard 4-layer CNN architecture, with MAML, trained on MiniImagenet 5-way 1-shot. When the target dataset is Omniglot, the performance of the validation baseline (51%) is significantly lower than the optimal generalization (76%) presumably because of the distributional shift between MiniImagenet and Omniglot. In such scenario our method appears to offer a significant advantage over the baseline, since we obtain 75% in target accuracy, quite close to the optimal generalization. In scenarios where the target domain is arguably more similar to that of the source domain, e.g. transfer from MiniImagenet to Imagenet, the early-stopping from the validation accuracy yields a performance (35.0%) closer to the optimal generalization (35.6%), and in such case our method performs only slightly worse (34.8%) than the validation baseline. We observe a similar trend when the model is trained on the Quickdraw dataset : When transferring to Omniglot, the validation baseline leads to sub-optimal generalization, but estimating the target accuracy from the neural activation dynamics allows us to halt the training close to the optimal time. When transferring to Traffic Sign, the baseline performance yields reasonable performance, and our method is roughly on par with it. From this point, we will focus on settings where there is a significant gap in performance between the validation baseline and optimal generalization, for example the transfer from Birds to Quickdraw, the present in our illustration of Sec.2.1. Next we present similar experiments with two other meta-learning algorithms : Prototypical Networks and Matching Networks, which are shown in Tab.2. 6 CONCLUSION In this work we have presented empirical evidence that the overfitting point of Meta-Learning for deep neural networks for few-shot classification can often be estimated from simple statistics of neural activations and how they evolve throughout meta-training time. Our results suggest that key properties, or statistics of how feature extractors respond to the target input distribution can be found which are simple enough to be estimated from just a few unlabelled target input examples. However, the specific function of the activations, and the layer at which to measure them, need to be inferred. 
We demonstrate that these functions and layers of interest can be inferred and used to guide early stopping – leading to a new, and effective method for early stopping which represents a significant departure for the de facto standard practice of using a validation set. In few-shot learning these ingredients can be inferred from how the neural activation dynamics of the validation data relate to the validation accuracy. In few-shot transfer learning, they are inferred through searching for which function (in a given function space) and at which layer, that the activation dynamics of the target input domain “diverge” the most from those of the source domain. Finally, we have demonstrated how this approach can be used to optimize for target generalization in practice to perform early-stopping and thus improve overall generalization to distributions of novel few-shot classification tasks, while only using unlabelled support examples from a single target task. A EXPERIMENTAL DETAILS CNN : We use the architecture proposed by Vinyals et al. (2016) which is used by Finn et al. (2017), consisting of 4 modules stacked on each other, each being composed of 64 filters of of 3 × 3 convolution, followed by a batch normalization layer, a ReLU activation layer, and a 2 × 2 max-pooling layer. With Omniglot, strided convolution is used instead of max-pooling, and images are downsampled to 28 × 28. With MiniImagenet, we used fewer filters to reduce overfitting, but used 48 while MAML used 32. As a loss function to minimize, we use cross-entropy between the predicted classes and the target classes. ResNet-18 : We use the same implementation of the Residual Network as in (Triantafillou et al., 2020). For most of the hyperparameters, we follow the setup of (Triantafillou et al., 2020), but we set the main few-shot learning hyperparameters so as to follow the original MAML setting more closely, and in each setting, we consider a single target dataset at a time, with a fixed number of shots and classification ways. We use 5 steps of gradient descent for the task adaptations, 15 shots of query examples to evaluate the test accuracy of tasks. We don’t use any learning rate decay during meta-training, and step-size of 0.01 when finetuning the models to new tasks. Datasets : We use the MiniImagenet and Omniglot datasets, as well as the many datasets included in the Meta-Dataset benchmark (Triantafillou et al., 2020). B COMPLETE EXPERIMENTAL RESULTS B.1 THE ISSUE OF USING A VALIDATION SET FOR EARLY-STOPPING IN META-LEARNING B.2 THE RELATION BETWEEN THE NEURAL ACTIVATION DYNAMICS AND GENERALIZATION TO NOVEL TASKS B.2.1 RELATION BETWEEN THE REPRESENTATION SPACE OF THE FEATURE EXTRACTOR AND TARGET GENERALIZATION Here we present experimental results to support the observation 1 that we make in 2.2, showing that the variation of generalization along meta-training time can be captured by a function of the neural activation dynamics that is independent of class labels. B.2.2 NEURAL ACTIVATION DYNAMICS : DIFFERENT LEVELS OF THE FEATURE EXTRACTOR CAN REVEAL THE VARIATION OF GENERALIZATION B.2.3 DIFFERENT FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS CAN REVEAL THE VARIATION OF GENERALIZATION By expending the experimental setup further, we observed instances where a given metric had strong correlation with generalization but in a negative sense, i.e. that it was actually its argmin that coincided with optimal early-stopping time t∗. See Fig. 14 for examples of this phenomenon. 
We later observed that other statistical estimators can correlate with generalization.
Figure 15: Different metrics of the representation space may have a strong correlation with generalization, other than the expected inner product of Eq. 2. a) Prototypical Network, VGG Flower, 5-way 1-shot: out of three metrics which in other cases may be related to generalization (as in b), c), d) and Sec. ??), here only the expected l2 dispersion has a strong relation with generalization. b) Expected l2 norm (Eq. 6); c) Expected square l2 dispersion (Eq. 7), Prototypical Network, VGG Flower; d) Expected feature-wise variance (Eq. 8), Prototypical Network, Omniglot to Quickdraw.
These results motivate our approach of considering a family of functions Ψ in which we must find the optimal function ψ∗ given the setting, rather than trying to discover a single universal metric that would correlate with generalization in all scenarios. Even if such a metric exists, it may not be estimated with enough efficiency to satisfy the requirement of using only a single support set to estimate t∗. B.2.4 FUNCTIONS OF THE NEURAL ACTIVATION DYNAMICS: TASK-WISE VARIANCE OF THE ESTIMATE t∗ψ Here we present empirical results on the task-wise variance as discussed in observation 4 of Sec. 2.2. We begin by showing the task-wise variance of few-shot accuracy when evaluated with a single target task and assuming access to the query examples (15 shots). Few-shot accuracy exhibits a high variance, as different tasks will peak at much different times, making it unfit to estimate t∗. On the other hand, for the metrics from Sec. 2.2, which are based on low-order statistics (mean and variance), the estimated early-stopping time exhibits drastically lower variance. See Fig. 16 for an example, where we use MAML in few-shot learning (5-way 1-shot) with the Aircraft dataset, and where we use the expected square l2 norm as the metric. As we can see in Fig. 16, measuring the metric on different tasks merely offsets the response curve but bears almost no change on the trend of the curve itself. This also relates to our assumption that the variation of target generalization in Meta-Learning might be linked to a function of the neural activation dynamics that is class-agnostic. B.3 CAPACITY OF THE CONTINUOUS FUNCTION SPACE Ψ (DEFINED BY THE THREE MOMENTS OF EQ. 9) TO CONTAIN GOOD SOLUTIONS ψ∗ The moments m1, m2 and m3 of Eq. 9 define the parametric function space Ψ = {ψ(ϕl(X); w) | w ∈ R3, l ∈ [1..L]}, where ψ(ϕl(X); w) = w1 m1 + w2 m2 + w3 m3 and w = [w1, w2, w3] ∈ R3. We have experimentally observed that this parametric function space Ψ has enough complexity to contain a good solution function ψ∗ for different meta-learning settings, both in few-shot learning and few-shot transfer learning, as shown in Tab. 3.
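To make the construction of Ψ concrete, the following is a minimal sketch of evaluating one candidate function ψ(ϕl(X); w) on the activations of a single support set. The three statistics computed below (mean pairwise inner product, mean squared L2 norm, and mean feature-wise variance) are stand-ins chosen so that their span resembles the metrics discussed above; the actual moments m1, m2, m3 are the ones defined in Eq. 9, which is not reproduced here.

```python
import numpy as np

def activation_moments(feats):
    """Low-order statistics of a batch of activations with shape (n_examples, d).
    These three stand-in moments are assumptions; the paper's m1, m2, m3 are defined in Eq. 9."""
    n = feats.shape[0]
    gram = feats @ feats.T
    m1 = gram[~np.eye(n, dtype=bool)].mean()   # mean inner product between distinct examples
    m2 = np.diag(gram).mean()                  # mean squared L2 norm
    m3 = feats.var(axis=0).mean()              # mean feature-wise variance
    return np.array([m1, m2, m3])

def psi(feats, w):
    """Parametric function psi(phi_l(X); w) = w1*m1 + w2*m2 + w3*m3."""
    return float(np.dot(w, activation_moments(feats)))

# Example: activations phi_l(X) of one 5-way 1-shot support set at some layer l.
rng = np.random.default_rng(0)
support_feats = rng.normal(size=(5, 64))       # placeholder activations
print(psi(support_feats, w=np.array([0.2, 0.5, 0.3])))
```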
B.4 LEARNING ψ∗ IN FEW-SHOT TRANSFER LEARNING B.4.1 FINDING w∗ AND l∗ WHERE ψtarget(t) AND ψvalid(t) “DIVERGE” THE MOST As we added more experimental settings for few-shot transfer learning, we observed instances where, for a given metric measured at the representation space ϕL, there was no strong link with generalization, but when measuring the metric at hidden layers lower than ϕL, we observed a strong correlation with generalization. We illustrate this in Fig. 17. These results motivated our approach of considering the whole neural activation dynamics (all layers), rather than only the final layer of the feature extractor, in our search for functions linked to generalization. Then, in Fig. 18, we conducted a more systematic analysis concerned with identifying the right functions ψ∗, with results suggesting that functions ψ showing stronger “divergence” (negative correlation) between the target and validation dynamics will more likely lead to a higher target accuracy if we stop at their peak time.
(a) Transfer: Omniglot to MiniImagenet, MAML. The critical depth, i.e. the one where measuring the expected inner product (here marked as RIP) predicts generalization on the target domain, is at layer 1, even though the critical depth for the source domain was at layer 4. (b) Transfer: MiniImagenet to Omniglot, MAML. The critical depth for the target domain is at layer 4, the same as for the source domain. Figure 17: Generalization can correlate with a metric at different levels of neural activations. Here the critical layer l∗ (squared in red) is identified by searching for the highest divergence between the validation and target neural activation dynamics.
Table 4: Correlation between D(ψtarget, ψvalid) and generalization, for different few-shot learning settings. The correlation is computed as in the analysis of Fig. 18c. The results show that functions exhibiting high divergence between the validation and target neural activation dynamics are likely to lead to good generalization performance on the target distribution.
MAML (Quickdraw → Omniglot): 0.82 | Prototypical Network (Omniglot → Quickdraw): 0.75 | Matching Network (MiniImagenet → Omniglot): 0.81
B.5 EVALUATING THE PERFORMANCE OF OUR EARLY-STOPPING METHOD WHEN USING THE CONTINUOUS FUNCTION SPACE Ψ DEFINED BY THE THREE MOMENTS OF EQ. 9 Here we present a few experimental results where we apply our early-stopping method in the continuous function space Ψ. Since there are only three weights to tune, namely w1, w2 and w3, we do not suffer from the curse of dimensionality, which is the classic motivation for using gradient-based optimization of neural networks with many parameters. This allows for a search-based optimization of w.
(a) Functions with high divergence between valid and target dynamics are more likely to achieve higher generalization (corr. coeff. R = 0.422, p-value = 0.0); (b) Average divergence (10 bins) vs. average performance (10 bins);
(c) Strong correlation between average divergence and average performance (corr. coeff. R = 0.8255, p-value = 0.0061); (d) Solution function measured on the valid and target examples (curves of ψ∗target, ψ∗valid, and Acctarget).
Table 6: Performance of our method - Few-Shot Transfer Learning, Prototypical Network.
Table 7: Performance of our method - Few-Shot Transfer Learning, Matching Network. Algorithm: Matching Network; Source dataset: MiniImagenet; Target dataset: Omniglot; Baseline: 73.75%; Our method: 75%.
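For reference, the selection procedure sketched in B.4.1 and B.5 can be illustrated as follows. This is a minimal sketch under assumptions: the divergence D is taken to be the negative Pearson correlation between the target and validation dynamics (the text above only states that stronger "divergence", i.e. negative correlation, should be preferred), and the three weights of w are searched over a coarse grid, which is feasible precisely because there are only three of them.

```python
import numpy as np
from itertools import product

def divergence(psi_target, psi_valid):
    """Assumed divergence D(psi_target, psi_valid): negative Pearson correlation
    between the two curves measured over meta-training checkpoints."""
    return -np.corrcoef(psi_target, psi_valid)[0, 1]

def select_and_stop(moments_target, moments_valid, weight_grid):
    """moments_*: arrays of shape (n_layers, n_checkpoints, 3) holding m1, m2, m3
    measured at each layer and checkpoint. Returns the weights w*, the layer l*,
    and the early-stopping checkpoint t* (peak of the chosen target curve)."""
    best = None
    for l, w in product(range(moments_target.shape[0]), weight_grid):
        psi_t = moments_target[l] @ w        # psi_target(t) over checkpoints
        psi_v = moments_valid[l] @ w         # psi_valid(t) over checkpoints
        d = divergence(psi_t, psi_v)
        if best is None or d > best[0]:
            best = (d, w, l, int(np.argmax(psi_t)))
    _, w_star, l_star, t_star = best
    return w_star, l_star, t_star

# Toy usage with synthetic moment curves and a coarse 5x5x5 grid over w.
grid = [np.array(w) for w in product(np.linspace(-1.0, 1.0, 5), repeat=3)]
rng = np.random.default_rng(0)
mt, mv = rng.normal(size=(4, 50, 3)), rng.normal(size=(4, 50, 3))
print(select_and_stop(mt, mv, grid))
```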
1. What is the focus of the paper regarding meta-learning methods?
2. What are the strengths and weaknesses of the proposed approach in addressing the identified problem?
3. How does the reviewer assess the clarity and organization of the paper's content, particularly in Section 2.2?
4. What kind of evidence does the reviewer require to support the efficacy of the proposed method?
5. Are there any concerns or suggestions regarding the presentation of the paper, such as figures and minor comments?
Summary Of The Paper
Review
Summary Of The Paper
Meta-learning methods typically apply early stopping by performing adaptation until validation accuracy starts to decrease. This paper claims that this practice is suboptimal because of the distribution shift inherent in testing on previously unseen tasks. They thus propose learning a different criterion: a weighted sum of three statistics of neural network activations.
Review
Strengths
The problem of predicting an optimal stopping time from activation dynamics is interesting and is likely to improve meta-learning.
Weaknesses
Section 2.2 is very dense for the core information it attempts to convey. Furthermore, the observations are presented in a way that makes it seem like they contradict each other: observation 1 claims that ψ1 is predictive of target accuracy, but observation two immediately says that ψ1 is only predictive in some layers. Observation 3 says that even the statistic ψ1 is not always predictive and one needs to consider other statistics. These "observations" are presented confusingly. I think most of section 2.2 can be summarized into "Together, the moments m1, m2, m3 can form a good predictor of target accuracy, potentially because their span includes the metrics ψ1, ψ2, ψ3, ψ4".
The proposed method is entirely based on empirical observations, and the evidence for the efficacy of the method is also experimental. Yet, the only experiments presented are Tables 1 and 2, which only compare against the baseline of using a validation set and the optimal choice. To provide convincing evidence, I think standard deviations for reported results and ablations for the design choices (ψi only, etc.) are necessary.
Figure 1 is hard to understand for many reasons. p(x_target) appears twice, and it is not immediately clear that the two orange areas correspond to few-shot and few-shot transfer learning. At first glance, the "training classes" images seem to only explain the scatterplot under it, etc.
Minor comments
"App.***" appears several times in the text. Is this a typo?
Large empty space on page 11.
Section B.2.2 empty
ICLR
Title Diagnosing the Environment Bias in Vision-and-Language Navigation Abstract Vision-and-Language Navigation (VLN) requires an agent to follow natural-language instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are extremely useful in navigating new environments that the agent does not know about previously. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that it is neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features, that directly affects the agent model and contributes to this environment bias in the results. According to this observation, we explore several kinds of semantic representations which contain less low-level visual information; hence an agent learned with these features can generalize better to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gap between seen and unseen on multiple datasets (i.e., 8.6% to 0.2% on R2R, 23.9% to 0.1% on R4R, and 3.74 to 0.17 on CVDN) and achieve unseen results competitive with previous state-of-the-art models. 1 INTRODUCTION Vision-and-Language Navigation (VLN) tests an agent's ability to follow complex natural language instructions as well as explore the given environments, so as to be able to reach the desired target locations. As shown in Fig. 1, the agent is put in an environment and given a detailed step-by-step navigational instruction. With these inputs, the agent needs to navigate the environment and find the correct path to the target location. In this work, we focus on instruction-guided navigation (MacMahon et al., 2006; Anderson et al., 2018b; Misra et al., 2018; Blukis et al., 2018; Chen et al., 2019c), where detailed step-by-step navigational instructions are used (e.g., 'Go outside the dining room and turn left ...'), in contrast to target-oriented navigation (Gordon et al., 2018; Das et al., 2018; Mirowski et al., 2018; Yu et al., 2019), where only the target is referred to (e.g., 'Go to the kitchen' or 'Tell me the color of the bedroom'). Although these step-by-step instructions are over-detailed when navigating local areas (e.g., your home), they are actively used in unseen environments (e.g., your friend's house, a new city) where the desired target is usually unknown to navigational agents. For this purpose, testing on unseen environments which are not used during agent training is important and widely accepted by instruction-guided navigation datasets.
Recent works propose different methods to improve the generalizability of agents on these unseen testing environments, and most of the existing works (Anderson et al., 2018b; Wang et al., 2018b; Fried et al., 2018; Wang et al., 2019b; Ma et al., 2019a;b; Tan et al., 2019; Huang et al., 2019; Hu et al., 2019) observe a significant performance drop from seen environments (i.e., the environments used in training) to unseen environments (i.e., the environments not used in training), which indicates a strong bias in the model towards the training environments. While this performance gap is emphasized as one of the major challenges in current VLN research, the issue is still left unresolved and waits for an explicit explanation. Thus, in this paper, we aim to answer three questions about this environment bias: 1. Where (i.e., in which component) is the bias located? 2. Why does this bias exist? 3. How to eliminate this bias? To locate where the bias is, we start by showing that natural-language navigational instructions and underlying navigational graphs are not direct reasons for this performance gap. We then investigate the effect of environments on the agent's performance. In order to conduct a detailed analysis, we re-split the environments and categorize the validation data into three sets based on their visibility to the training set: path-seen data intersecting with the training paths, path-unseen data using the training environments but away from the training paths, and env-unseen data using unseen environments (environments not used in training). By showing that the results gradually decrease from path-seen data to env-unseen data, we characterize the environment bias at three levels: path level, region level, and environment level. These three levels of environment bias indicate strong 'spatial localities' in the tasks of VLN, which are intuitively reasonable because environments and regions (e.g., houses and cities) usually have their own styles when built or decorated. We next want to analyze the detailed reason why this locality would further lead to a gap in seen versus unseen results. Our hypothesis is that the low-level information carried by the ResNet features (He et al., 2016) is the reason. To keep minimal low-level visual information and promote more high-level semantic information, we replace the ResNet features with the 1000 ImageNet classification probabilities. Although the semantic information encoded by these features is not accurate because of the shifted domain of images and labels, the same model with ImageNet-label features performs surprisingly well on various VLN datasets (i.e., Room-to-Room, R4R, and CVDN; we did not test these semantic features on Touchdown (Chen et al., 2019c) since its images are not released). Most importantly, these noisy semantic features effectively eliminate the performance gap between seen and unseen environments, which suggests that the environment bias is attributed to the ResNet features, as hypothesized. Following the practice of using ImageNet labels as semantic features, we further provide a discussion on how the environment bias could be eliminated. For this, we employ advanced high-level semantic features which are better suited to the VLN domain. We explore three kinds of semantic features: (1) areas of detected object labels (Ren et al., 2015); (2) ground truth semantic views (Chang et al., 2017); and (3) learned semantic view features. We show that all of these semantic features significantly reduce the environment bias in multiple datasets and also achieve strong results in testing unseen environments.
We hope this work encourages more investigation and research into improving the generalization of vision-language models to unseen real-world scenarios. 2 RELATED WORK Vision-and-Language Navigation: Vision-and-language navigation is an emerging task in the vision-and-language area. Many datasets have been proposed in recent years, such as Room-to-Room (Anderson et al., 2018b), Room-for-Room (Jain et al., 2019), TouchDown (Chen et al., 2019c), CVDN (Thomason et al., 2019b), RERERE (Qi et al., 2019), House3D (Wu et al., 2018) and EQA (Das et al., 2018). Recent works (Thomason et al., 2019a; Wang et al., 2018b; Fried et al., 2018; Wang et al., 2019b; Ma et al., 2019a;b; Tan et al., 2019; Hu et al., 2019; Ke et al., 2019; Anderson et al., 2019) focusing on improving the performance of navigation models, especially in unseen testing environments, have helped to increase the navigational success rate. Domain Adaptation: The general setup of domain adaptation contains two sets of data samples {xi} with xi ∈ X and {yi} with yi ∈ Y from two domains X and Y. Based on these samples, we could learn domain-invariant features with adversarial training (Goodfellow et al., 2014; Zhu et al., 2017; Long et al., 2018; Wang et al., 2019a; Hosseini-Asl et al., 2019; Zhang et al., 2019; Gong et al., 2019; Chen et al., 2019b) or learn a transfer function f : X → Y (Wang et al., 2018a; Chen et al., 2019a; Rozantsev et al., 2018). However, samples from the target domain may not be available in applications (e.g., the testing environments in navigation should not be used in training). Thus, we try to give an interpretable explanation of why performance varies across domains and design a robust feature without deliberately considering the target domain. Two methods in VLN, RCM (Wang et al., 2019b) and EnvDrop (Tan et al., 2019), explore the possibility of domain adaptation. Both works use the testing environments in training, while RCM also uses testing instructions. Domain Generalization: In domain generalization (Blanchard et al., 2011), the goal is to predict the labels in previously unseen domains. Similar to the test setting of VLN tasks, the testing data is not revealed during training. Works have been proposed to learn the common features of the training domains (Muandet et al., 2013; Blanchard et al., 2017; Li et al., 2017; 2018; Carlucci et al., 2019; Deshmukh et al., 2019). In this paper, we focus on the domain generalization problem in the VLN task, and try to find the reasons for the failures. 3 VISION-AND-LANGUAGE NAVIGATION AND ITS ENVIRONMENT BIAS We first introduce the task of vision-and-language navigation (VLN) and briefly describe the neural agent models used in our work. We next survey previous works on multiple indoor navigation datasets to show that the environment bias is widely observed in current VLN research. Lastly, we claim that this bias also exists in outdoor navigation tasks if the agent is tested on unseen regions. 3.1 VISION-AND-LANGUAGE NAVIGATION Tasks: As shown in Fig. 1, the goal of the VLN task is to train an agent to navigate a certain type of environment {E} (e.g., indoor or outdoor environments) given the instruction I. Each environment E is an independent space, such as a room or a house, and consists of a set of viewpoints. Each viewpoint is represented as a panoramic image and can be decomposed into separate views {o} as inputs to the neural agent models.
The viewpoints and their connectivity form the navigational graph. In practice, after being placed at a particular viewpoint and given the instruction in the beginning, at each time step the agent can observe the panoramic image of the viewpoint where it is located, and choose to move along an edge of the graph to the next node (i.e., viewpoint) or stop. This navigational process produces a path (i.e., a list of viewpoints), and the performance of the agent is evaluated by whether it reaches the target location that the instruction indicates in the end. Neural Agent Models: Most instruction-guided navigational agents are built based on attentive encoder-decoder models (Bahdanau et al., 2015). The encoder reads the instructions while the decoder outputs actions based on the encoded instructions and perceived environments. Since the main purpose of this work is to understand the environment bias in vision-and-language navigation, we use a minimal representative neural agent model that achieves comparable results to previous works. Specifically, we adopt the panoramic-view neural agent model in Fried et al. (2018) ('Follower') with modifications from Tan et al. (2019) as our baseline model. We also exclude advanced training techniques (i.e., reinforcement learning and data augmentation) and only train the agent with imitation learning in all our experiments, for the same purpose. More details can be found in the original papers. 3.2 ENVIRONMENT BIAS IN INDOOR NAVIGATION In order to evaluate the generalizability of agent models, indoor vision-and-language navigation datasets (e.g., those collected from Matterport3D (Chang et al., 2017)) use disjoint sets of environments in training and testing. Most of the datasets provide two validation splits to verify the agent's performance in both sets of environments: validation seen, which takes the data from training environments, and validation unseen, whose data is from new environments apart from the training environments. In the first part of Table 1, we list most of the previous works on the Room-to-Room dataset (Anderson et al., 2018b) and report the success rate under greedy decoding (i.e., without beam search) on the validation seen and validation unseen splits. The large absolute gaps (from 30.9% to 9.7%) between the results of seen and unseen environments show that current neural agent models on R2R suffer from environment bias (footnote 2). Besides Room-to-Room (R2R), we also analyze two newly-released indoor navigation datasets that were also collected from Matterport3D environments: Room-for-Room (R4R) (Jain et al., 2019) and Cooperative Vision-and-Dialog Navigation (CVDN) (Thomason et al., 2019b). As shown in the second and third parts of Table 1, results drop significantly from seen to unseen environments (i.e., 26.9% on R4R and 3.74 on CVDN), indicating that agent models also suffer from the environment bias in these datasets. Lastly, we show the results (denoted as 'ours' in Table 1) when the environment bias (reason analyzed in Sec. 5) is effectively eliminated by our learned semantic features (described in Sec. 6.3). As a result, the performance gaps are effectively decreased on all three datasets without changing the model and learning hyper-parameters, compared to our baselines (denoted as 'Our baseline') and previous works (footnote 3).
Footnote 2: Our work's aim is to both close the seen-unseen gap while also achieving competitive unseen results. Note that Anderson et al. (2019) also achieve a 0% gap but at the trade-off of low unseen results. There is also another recent work by Ke et al. (2019), but they do not report val-seen results from non-beam-search methods.
Footnote 3: As for another major evaluation metric on the R4R dataset, Coverage weighted by Length Score (CLS), we also observe a similar phenomenon in the performance gap; and our methods can also eliminate this gap from 19.2 to 1.5 and achieve competitive state-of-the-art unseen CLS results (34.7).
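As an illustration of the navigation protocol described in Sec. 3.1 above, the following is a minimal sketch of a greedy rollout over the navigational graph. The graph format, the toy environment, and the policy are hypothetical placeholders: the policy stands in for the attentive encoder-decoder agent, and success is simplified here to whether the final viewpoint equals the goal viewpoint.

```python
def run_episode(graph, start, instruction, policy, goal, max_steps=10):
    """Greedy rollout over the navigational graph.

    graph:  dict mapping each viewpoint id to its adjacent viewpoints.
    policy: callable (instruction, current viewpoint, candidate viewpoints, path so far)
            -> next viewpoint, or None to stop; a placeholder for the neural agent.
    """
    path = [start]
    for _ in range(max_steps):
        candidates = graph[path[-1]]
        choice = policy(instruction, path[-1], candidates, path)
        if choice is None:          # the agent chooses the stop action
            break
        path.append(choice)
    return path, path[-1] == goal

# Toy connectivity graph and a trivial policy that walks toward the kitchen if possible.
toy_graph = {"hall": ["kitchen", "bedroom"], "kitchen": ["hall"], "bedroom": ["hall"]}
toy_policy = lambda instr, cur, cands, path: "kitchen" if "kitchen" in cands else None
print(run_episode(toy_graph, "hall", "walk into the kitchen", toy_policy, goal="kitchen"))
```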
3.3 ENVIRONMENT BIAS IN OUTDOOR NAVIGATION Since the three indoor navigational datasets in the previous sections are collected from the Matterport3D environments (Chang et al., 2017), in order to show that the environment bias is a general phenomenon also existing in other kinds of environments, we investigate the outdoor navigation task from the Touchdown dataset (Chen et al., 2019c), whose environments are taken from New York City. In the original data splits of Touchdown, the environment is not specifically divided into seen and unseen and only involves one city. Thus the trained agent is only tested on the training environments (similar to a validation seen split). To reveal the environment bias in the Touchdown dataset, we split the city environment according to latitude and create two sub-environments: 'training' and 'unseen'. The data are then re-split into training, val-seen, and val-unseen, accordingly. We adapt our baseline R2R agent model with additional convolutional layers to fit this new task. As shown in the last part of Table 1, when experimenting on the original data split, our baseline model achieves state-of-the-art results on the original 'dev' set and 'test' set, proving the validity of our model on this dataset. However, the results on our re-split data (denoted as 'Our baseline (seen/unseen split)') still show a big drop from the 'training' to the 'unseen' sub-environment (from 17.5% to 5.3%), indicating that environment bias is a broad issue. 4 WHERE: THE EFFECT OF DIFFERENT TASK COMPONENTS In Sec. 3, we showed that current neural agent models are biased towards the training environments on multiple vision-and-language navigation (VLN) datasets. In this section, our goal is to locate the component of VLN tasks to which this environment bias is attributed. As one of the early-released and well-explored datasets of VLN, the Room-to-Room (R2R) dataset (Anderson et al., 2018b) is used as the diagnosing dataset in the experiments. We start by showing that two possible candidates, the natural language instructions and the underlying navigational graph, do not directly contribute to the environment bias. Then the effect of visual environments is analyzed in detail. 4.1 THE EFFECT OF NATURAL-LANGUAGE NAVIGATIONAL INSTRUCTIONS A common hypothesis is that the navigational instructions for unseen environments (e.g., val unseen) are much different from those for the training environments (i.e., training and val seen) due to the different objects and layouts in new environments, and this linguistic difference thus leads to the performance gap. In this section, we analyze the distributions of success rate with regard to the relationship between the validation data's instructions and the training instructions. In order to quantitatively evaluate this relationship, we define the 'distances' from a validating instruction to all training instructions as the phrase-matching metric.
Suppose x is a validation datum, T is the training set, and inst(x) is the instruction of the datum x. We use ROUGE-L (Lin, 2004) and BLEU-4 (Papineni et al., 2002) to calculate this 'distance':
dis_ROUGE(x, T) = min_{t ∈ T} ROUGE-L(inst(x), inst(t))   (1)
dis_BLEU(x, T) = BLEU-4(inst(x), {inst(t)}_{t ∈ T})   (2)
where we consider all the training instructions as references when calculating the BLEU-4 score.
Figure 3: Graph split: left is the original data and right is the re-split data. Black vertices are viewpoints visited during training; red paths are val seen / val path-seen; blue paths are val path-unseen.
We show the distributions of success rates and distances in Fig. 2. As opposed to the hypothesis, we do not observe a significant difference between the distributions of 'distances' (as shown in Fig. 2 (a, b)) on seen validation and unseen validation. For the success rate distributions (in Fig. 2 (c, d)), the performance is better on instructions with smaller 'distances' (i.e., higher BLEU-4/ROUGE-L scores w.r.t. the training instructions) on both validation splits. However, comparing the two splits, with the same 'distance' to training instructions, the seen validation set still significantly outperforms the unseen validation set on success rate, which implies that reasons other than language contribute to this performance gap. 4.2 THE EFFECT OF UNDERLYING NAVIGATIONAL GRAPH As shown in Fig. 3, an environment could be considered as its underlying navigational graph with visual information (as in Fig. 1). In order to test whether the agent model could overfit to these navigational graphs (and thus be biased towards training environments), we follow the experiments in Hu et al. (2019) to train the agent without visual information. Specifically, we mask out the ResNet features with zero vectors, so the agent can only make decisions based on the instructions and the navigational graph. With our baseline model, the success rate is 38.5% on validation seen and 41.0% on validation unseen in this setting, which is consistent with the finding in Hu et al. (2019). Besides showing the relatively good performance of the unseen split without visual content (similar to Thomason et al. (2019a) and Hu et al. (2019)), we also want to emphasize the low performance gap between seen and unseen environments (2.5% compared to the usual > 10% gap). Hence, we claim that the underlying graph is not a dominant reason for the environment bias. 4.3 THE EFFECT OF VISUAL ENVIRONMENTS To show how the visual environments affect the agent's performance, we analyze the results on unseen environments and in different spatial regions of the training environments. In order to give a detailed characterization of the effect of environments, we are going to reveal the spatial localities which are related to the agent's performance at three different levels:
• Path-level Locality: Agents are better on paths which intersect with the training paths.
• Region-level Locality: Agents are better in regions which are closer to the training data.
• Environment-level Locality: Agents perform better on training environments than on unseen environments.
The existence of these spatial localities inspires us to find the direct cause of the problem in Sec. 3.2. However, the original split of the data is not fine-grained enough to separately reveal these spatial localities.
To better illustrate this, we visualize the data from one environment of the Room-to-Room dataset in Fig. 3, where the vertices are viewpoints with visual information and the edges are valid connections between viewpoints. The vertices highlighted in dark black indicate the viewpoints which are used in training paths, and the red edges are the connections covered by the original val-seen paths. As shown in Fig. 3, nearly all viewpoints in val-seen paths (vertices connected to red lines) are used as viewpoints in the training data (vertices marked in dark black). We thus cannot separately categorize the path-level and region-level localities. To bypass this, we propose a novel re-splitting method to create our diagnosis data splits. Structural Data Re-splitting: We employ two kinds of structural data splitting methods based on the horizontal or vertical coordinates, denoted as 'X-split' and 'Z-split', respectively. The 'Z-split' intuitively separates different floors in the houses and the 'X-split' creates separate areas. When applied to the training environments in the R2R dataset, we use one side of the splitting line (see the 'X-splitting line' in Fig. 3) as the new training 'environment', and the other side as the path-unseen 'environment'. In addition to this split of environments, we also re-split the original training data and val-seen data while keeping the val-unseen data the same. The data paths across the splitting line are dropped. As shown in the right part of Fig. 3, we create three new data splits: a training split, a val-path-seen split, and a val-path-unseen split. The edges covered by the new val-path-unseen split are highlighted in blue, while the color style of the training split and val-path-seen split ('black' for viewpoints in training and 'red' for edges in val path-seen) is the same. Since the amount of original val-seen data is inadequate to fill two new validation sets (val path-seen and val path-unseen), we bring some (original) training data into our new validation splits. The overall statistics of the original splits and our new splits are shown in Table 2 (Footnote 4: We only split the environments whose data contains a substantial amount, thus making sure that the remaining training data is still adequate for training strong models.). Existence of Path-level and Environment-level Localities: For both splitting methods, we train our baseline model on the newly-split training set and evaluate on our three validation sets (denoted as 'X-split' or 'Z-split' rows in Table 2). The results of our baseline model on the original R2R splits (denoted as 'R2R' rows) are listed for comparison. As shown in Table 2, the agent performs better on val path-seen than on val path-unseen, which suggests that a path-level locality exists in current VLN agent models. Meanwhile, the results on val path-unseen are further higher than on val env-unseen, which indicates an environment-level locality that is independent of the path-level locality. Existence of the Region-level Locality: To further demonstrate the region-level locality, we study how the success rate changes in different regions of the environment with respect to their distances to the training data, similar to the analysis of language 'distance' in Sec. 4.1. We first calculate the point-by-point shortest paths using Dijkstra's algorithm (Dijkstra, 1959), where the shortest distance between viewpoints v and v′ is denoted as the graph distance dis_GRAPH(v, v′). Based on this graph distance, we define the viewpoint distance dis_VIEWPOINT from a viewpoint v to the training data T as v's minimal graph distance to a viewpoint v′ in the training data.
We then define the path distance dis_PATH from a validation datum x to the whole training data T as the maximal viewpoint distance in the path of x:
dis_PATH(x, T) = max_{v ∈ path(x)} dis_VIEWPOINT(v, T)   (3)
             = max_{v ∈ path(x)} min_{v′ ∈ path(t), ∀t ∈ T} dis_GRAPH(v, v′)   (4)
We compute this path distance between paths in the env-seen validation set and the training environments in our re-split data. As shown in Fig. 4, the success rate declines as the path moves further from the training environment for both re-splitting methods (i.e., 'X-split' and 'Z-split'). In conclusion, the closer the path is to the training data, the higher the agent's performance, which suggests the existence of a region-level locality. 5 WHY: WHAT INSIDE THE ENVIRONMENTS CONTRIBUTES TO THE BIAS? In Sec. 4, we located the cause of the performance gap in the visual environments by excluding other potential reasons and categorizing the spatial localities. However, there are still multiple possible aspects inside the environment which could lead to these spatial localities, e.g., the object layout convention and the room connections. The agent model could be biased towards the training environments by over-fitting or memorizing these environment-specific characteristics. In this section, we want to identify which aspect directly contributes to the bias, and we draw the following conclusion: the environment bias is attributed to low-level visual information carried by the ResNet features. We first show an experiment that effectively decreases the gap between seen and unseen environments with minimal model modifications. We then clarify our conclusions based on the findings. 5.1 AN INVESTIGATION EXPERIMENT: IMAGENET LABELS AS VISUAL FEATURES Suspecting that the over-fitting happens when the agent over-learns low-level features, we hope to find a replacement for the 2048-dimensional ResNet features that contains minimal low-level information while preserving distinguishable visual content. The most straightforward replacement is, instead of using the mean-pooled features, to keep the frozen 1000-way classification layer from ResNet pre-training and use the probabilities of ImageNet labels as visual features. Shown as 'ImageNet' in Table 3, this probability distribution almost closes the gap between seen and unseen. These results further constrain the reason for the environment bias to the low-level ResNet features of image views. Combined with the findings on spatial localities, we suggest that environments (i.e., houses) and regions (i.e., rooms) usually have their own 'style'. Thus the same semantic label (captured by ImageNet-1000 features) has different visual appearances (captured by ResNet features) in different environments or regions. As a result, ImageNet-1000 features, in spite of being noisy, are not distracted by low-level visual appearance and can generalize to unseen environments, while ResNet features cannot. Although these ImageNet-1000 features decrease the performance gap, they do not match the VLN domain well, so the validation unseen results on R4R and CVDN are slightly worse than the baseline (and not much better on R2R). Hence, this motivates us to find better semantic representations of environmental features that can both close the seen-unseen gap while also achieving state-of-the-art unseen results (which we discuss next). 6 HOW: METHODOLOGY TO FIX THE ENVIRONMENT BIAS
In the previous section (Sec. 5), we found that the environment bias is related to the low-level visual features (i.e., 2048-dim ResNet features). Following the findings we observed in Sec. 5.1, we build our agent on features which are more correlated with the VLN environmental semantics than the ImageNet label features of Sec. 5.1. We first demonstrate our baseline results on three VLN datasets and then explore the advanced semantic feature replacements. As shown in Table 3, these advanced semantic features can effectively reduce the performance gap between seen and unseen environments and improve the unseen results compared to our strong baselines. The effectiveness of these semantic features supports our explanation of the environment bias in Sec. 5 and also suggests that future work on VLN tasks should think about such generalization issues. 6.1 BASELINE In our baseline model, following previous works, we use the standard ResNet features as the representation of environments (Anderson et al., 2018b; Jain et al., 2019; Thomason et al., 2019b). These features come from the mean-pooled layer after the final convolutional layer of ResNet-152 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). As shown in the 'Baseline' rows of Table 3 (Footnote 5: We use our baseline agent model (in Sec. 3.1) for R2R and R4R. For CVDN, we take the official baseline code in https://github.com/mmurray/cvdn.), val-seen results are significantly higher than val-unseen results in all three datasets. Note that our baseline method uses the 'feature dropout' technique demonstrated in Tan et al. (2019) (without back translation): the ResNet features are randomly masked with zeros before being used as inputs to the agent. Without this 'feature dropout' (denoted as 'ResNet NoDrop' in Table 3), the gaps increase on R2R and R4R, which suggests that this 'feature dropout' technique also helps to reduce over-fitting to low-level visual information, as discussed in Sec. 5. However, the performance gap is still large, which leads us to the following discussions of semantic features. 6.2 DETECTED OBJECTS AREAS During navigation, the objects in the environments are crucial since their matches with the instruction often indicate the locations that can guide the agent; thus object detection results of the environments can provide relevant semantic information. In our work, we utilize the detection information generated by Faster R-CNN (Ren et al., 2015) to create the feature representations. Compared to the ImageNet-1000 features (Sec. 5.1), these detection features include more environmental information, since the viewing images in VLN usually contain multiple objects. Instead of directly using classification probabilities of the labels from ResNet, and different from the approach in Hu et al. (2019) who utilized the embeddings of detected labels, we design our detection features f_DETECT of each image view as the sum of the areas of detected objects weighted by detection confidence:
f_DETECT = [a_{c1}, a_{c2}, . . . , a_{cn}];   a_{ci} = Σ_{obj is ci} Area(obj) · Conf(obj)   (5)
where ci and a_{ci} are the label and feature value of each detected object class, and Area(·) and Conf(·) are the area and confidence of each detected object. For implementation details, we use the Faster R-CNN (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017) provided in Bottom-Up Attention (Anderson et al., 2018a).
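As a concrete illustration of Eq. 5, the sketch below accumulates confidence-weighted box areas per kept label into one feature vector for a single view. The detection record format and the small vocabulary are hypothetical placeholders; in the paper the kept vocabulary has 152 labels and the detections come from the Faster R-CNN of Bottom-Up Attention.

```python
import numpy as np

def detection_features(detections, label_vocab):
    """Detection feature of one image view as in Eq. 5: for every kept label c_i,
    sum the areas of detected boxes of that label weighted by detection confidence.

    detections: list of dicts such as {"label": "sofa", "box": (x1, y1, x2, y2), "conf": 0.9},
                a hypothetical format for the detector outputs.
    label_vocab: the labels kept for VLN; their order fixes the feature index.
    """
    feat = np.zeros(len(label_vocab))
    index = {c: i for i, c in enumerate(label_vocab)}
    for det in detections:
        if det["label"] not in index:
            continue  # label was filtered out of the VLN vocabulary
        x1, y1, x2, y2 = det["box"]
        area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        feat[index[det["label"]]] += area * det["conf"]
    return feat

# Example: two detections of the same label accumulate into a single dimension.
vocab = ["sofa", "table", "door"]
dets = [{"label": "sofa", "box": (0, 0, 10, 20), "conf": 0.9},
        {"label": "sofa", "box": (5, 5, 15, 15), "conf": 0.5}]
print(detection_features(dets, vocab))   # [0.9*200 + 0.5*100, 0, 0] = [230., 0., 0.]
```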
To eliminate the labels irrelevant to the VLN task, we calculate the total area of each detected object class across all environments and pick the labels that take up a relatively large proportion of the environments, creating features of dimension 152 (see footnote 6). Denoted as 'Detection' in Table 3, the performance gap is diminished with these detection features compared to the baselines in all three datasets, indicating that changing the features to a higher semantic level has a positive effect on alleviating the environment bias. Meanwhile, the improvement of unseen validation results on the R2R and R4R datasets suggests that they are more effective for the VLN task than the ImageNet labels. 6.3 SEMANTIC SEGMENTATION Although the detection features can provide adequate semantic information for the agent to achieve results comparable to the baseline model, they do not fully utilize the visual information: the content left out by detection may contain useful knowledge for navigation. A better semantic representation is semantic segmentation, which segments each view image at the pixel level and gives a label to each segmented region, allowing us to utilize the semantics of the entire environment. The Matterport3D dataset (Chang et al., 2017) provides labeled semantic segmentation information for every scene, and we take the rendered images from Tan et al. (2019) (see footnote 7). A comparison example of RGB images and semantic views is available in the Appendix. Since the semantic segmentation images are fine-grained and blurry at the boundaries, we follow the design of the detection features, using the areas of semantic classes in each image view as the semantic features (confidence is excluded since semantic segmentation does not provide this value). The areas are normalized to [0, 1] by dividing by the area of the whole image region. We first assume that the semantic information is provided as additional environmental information, and the results of the model using the ground truth semantic areas are shown in the 'ground truth' rows in Table 3. We next study the situation where the semantic information is not available in testing environments, so the information needs to be learned from the training environments. Thus we train a separate multi-layer perceptron to predict the areas of these semantic classes (details in the Appendix), and the results of the model with these predicted semantics as features are shown in the 'learned' rows. As shown in Table 3, both the 'ground truth' and 'learned' semantic representations bring the performance of seen and unseen closer compared to the baseline model, and the smallest performance gaps come from the learned semantic segmentation features in all three datasets. The highest validation unseen success rates among all the proposed feature representations are also produced by the semantic segmentation features: 'learned' semantics for R4R and 'ground truth' semantics for R2R and CVDN. Overall, among all the semantic representations we have explored, the semantic segmentation features are the most effective in eliminating the environment bias. 7 CONCLUSION In this paper, we focus on studying the performance gap between seen and unseen environments widely observed in vision-and-language navigation (VLN) tasks, trying to find where and why this environment bias exists, and we provide possible initial solutions.
By designing the diagnosis experiments of environment re-splitting and feature replacement, we locate the environment bias in the low-level visual appearance, and we discuss semantic features that decrease the performance gap on three VLN datasets and achieve state-of-the-art results.
Footnote 6: Note that this detection feature dimension of 152 is coincidentally the same as the number of layers in ResNet, but there is no correlation.
Footnote 7: The rendered semantic views are downloaded from https://github.com/airsplay/R2R-EnvDrop.
A APPENDIX A.1 EXAMPLES OF RGB IMAGES AND SEMANTIC VIEWS In Fig. 5, we show a rendered semantic view from Tan et al. (2019) and its original RGB image. Different colors indicate different semantic segmentation areas, and 40 semantic labels are considered in the Matterport3D dataset (Chang et al., 2017). A.2 DETAILS OF 'LEARNED' SEMANTIC TRAINING We use a multi-layer perceptron over the ResNet features to generate the 'learned' semantic features. The multi-layer perceptron includes three fully-connected layers with ReLU activation on the outputs of the first two layers. The input is the 2048-dim ResNet feature f of each image view. The hidden sizes of the first two layers are 512 and 128. The final layer outputs the 42-dim semantic feature y that represents the areas of each semantic class. After the linear layers, we use the sigmoid function σ to convert the output to the ratio of areas.
x1 = ReLU(A1 f + b1)   (6)
x2 = ReLU(A2 x1 + b2)   (7)
y = σ(A3 x2 + b3)   (8)
The model is trained with ground truth semantic areas y_AREA (normalized to [0, 1]), and only the views in training environments are used in training. We minimize the binary cross-entropy loss between the ground truth areas {y∗_i} and the predicted areas {y_i}, where i indicates the i-th semantic class.
L = − Σ_i ( y∗_i log y_i + (1 − y∗_i) log(1 − y_i) )   (9)
Dropout layers with a probability of 0.5 are added between the fully-connected layers during training. The sigmoid function σ and the cross-entropy loss are combined to improve numerical stability. After the model is fitted, we freeze the weights and use the model to predict the semantic features of all seen and unseen environments (i.e., the environments for training, val-seen, and val-unseen data). The predicted features are then used as the input of our neural agent model for the different datasets (i.e., R2R, R4R, and CVDN), and the neural agent models are the same except that we change the input dimension from 2048 (the dimension of ResNet features) to 42 (the number of semantic classes).
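A sketch of the predictor described in App. A.2 is given below. It follows the stated sizes (2048 → 512 → 128 → 42), the ReLU activations, the dropout of 0.5 between fully-connected layers, and the numerically stable combination of the sigmoid with the binary cross-entropy loss; the optimizer and other training details are not specified in the appendix and are omitted here.

```python
import torch
import torch.nn as nn

class SemanticAreaPredictor(nn.Module):
    """MLP of Eqs. 6-8: 2048-d ResNet feature -> 512 -> 128 -> 42 semantic-class areas."""
    def __init__(self, in_dim=2048, n_classes=42):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(512, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, n_classes),
        )

    def forward(self, f):
        # Logits are returned; the sigmoid of Eq. 8 is folded into the loss below
        # (BCEWithLogitsLoss) for numerical stability, as described in App. A.2.
        return self.net(f)

model = SemanticAreaPredictor()
criterion = nn.BCEWithLogitsLoss()             # Eq. 9, binary cross-entropy over class areas
feats = torch.randn(8, 2048)                   # ResNet features of 8 training views
target_areas = torch.rand(8, 42)               # ground-truth normalized areas in [0, 1]
loss = criterion(model(feats), target_areas)
loss.backward()
```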
1. What is the primary contribution of the paper regarding transfer error in vision&language navigation tasks?
2. What are the strengths of the paper, particularly in its thorough analysis of the ultimate cause of a recurring problem in this field?
3. Do you have any questions or concerns regarding the 'learned' features in section 6.3?
4. How does the reviewer assess the impact of the paper on future work in this area?
Review
Review
This paper aims to identify the primary source of transfer error in vision&language navigation tasks in unseen environments. The authors tease apart the contributions of the out-of-distribution severity of language instructions, navigation graph (environmental structure), and visual features, and conclude that visual differences are the primary form in which unseen environments are out of distribution. They show that using ImageNet class scores as visual features results in significantly less transfer gap than using low-level visual features themselves. Experiments then show that semantic-level features dramatically reduce the transfer gap, although at a cost of absolute performance.
I recommend this paper for acceptance; my decision is based on the thorough analysis of the ultimate cause of a recurring problem in this field. These results, if shown to hold across a significant number of datasets and tasks, would significantly change the focus of research in this field toward a focus on robust high-level visual representations (as opposed to e.g. better spatial awareness or better language understanding). This work represents an important step in this direction.
The description of the 'learned' features in 6.3 could use more elaboration. Since it is the best performing approach by a large margin (as measured by transfer gap), it should probably get more than one sentence. In particular, what do the authors mean by "train a separate multi-layer perceptron to predict the areas of these semantic labels"? Does that mean the predicted pixel-level semantic segmentation map is used as input to the navigating agent? Or is it an auxiliary task for representation learning? etc. This should be clarified.
I anticipate this paper to significantly influence future work in this area.
--------
After discussing with the reviewers about the methodological issue of the validation set, I have lowered my score to a weak accept, but I think this paper should still be published.
ICLR
Title Diagnosing the Environment Bias in Vision-and-Language Navigation Abstract Vision-and-Language Navigation (VLN) requires an agent to follow naturallanguage instructions, explore the given environments, and reach the desired target locations. These step-by-step navigational instructions are extremely useful in navigating new environments that the agent does not know about previously. Most recent works that study VLN observe a significant performance drop when tested on unseen environments (i.e., environments not used in training), indicating that the neural agent models are highly biased towards training environments. Although this issue is considered as one of the major challenges in VLN research, it is still under-studied and needs a clearer explanation. In this work, we design novel diagnosis experiments via environment re-splitting and feature replacement, looking into possible reasons for this environment bias. We observe that neither the language nor the underlying navigational graph, but the low-level visual appearance conveyed by ResNet features directly affects the agent model and contributes to this environment bias in results. According to this observation, we explore several kinds of semantic representations which contain less low-level visual information, hence the agent learned with these features could be better generalized to unseen testing environments. Without modifying the baseline agent model and its training method, our explored semantic features significantly decrease the performance gap between seen and unseen on multiple datasets (i.e., 8.6% to 0.2% on R2R, 23.9% to 0.1% on R4R, and 3.74 to 0.17 on CVDN) and achieve competitive unseen results to previous state-of-the-art models. 1 INTRODUCTION Vision-and-Language Navigation (VLN) tests an agent’s ability to follow complex natural language instructions as well as explore the given environments, so as to be able to reach the desired target locations. As shown in Fig. 1, the agent is put in an environment and given a detailed step-by-step navigational instruction. With these inputs, the agent needs to navigate the environment and find the correct path to the target location. In this work, we focus on the instruction-guided navigation (MacMahon et al., 2006; Anderson et al., 2018b; Misra et al., 2018; Blukis et al., 2018; Chen et al., 2019c) where detailed step-by-step navigational instructions are used (e.g., ‘Go outside the dining room and turn left ...’), in contrast to the target-oriented navigation (Gordon et al., 2018; Das et al., 2018; Mirowski et al., 2018; Yu et al., 2019) where only the target is referred (e.g., ‘Go to the kitchen’ or ‘Tell me the color of the bedroom’). Although these step-by-step instructions are overdetailed when navigating local areas (e.g., your home), they are actively used in unseen environments (e.g., your friend’s house, a new city) where the desired target is usually unknown to navigational agents. For this purpose, testing on unseen environments which are not used during agent-training is important and widely accepted by instruction-guided navigation datasets. 
Recent works propose different methods to improve generalizability of agents on these unseen testing environments; and most of the existing works (Anderson et al., 2018b; Wang et al., 2018b; Fried et al., 2018; Wang et al., 2019b; Ma et al., 2019a;b; Tan et al., 2019; Huang et al., 2019; Hu et al., 2019) observe a significant performance drop from seen environments (i.e., the environments used in training) to unseen environments (i.e., the environments not used in training), which indicates a strong bias in the model towards the training environments. While this performance gap is emphasized as one of the major challenges in current VLN research, the issue is still left unresolved and waits for an explicit explanation. Thus, in this paper, we aim to answer three questions to this environment bias: 1. Where (i.e., in which component) is the bias located? 2. Why does this bias exist? 3. How to eliminate this bias? To locate where the bias is, we start by showing that natural-language navigational instructions and underlying navigational graphs are not direct reasons for this performance gap. We then investigate the effect of environments on the agent’s performance. In order to conduct a detailed analysis, we resplit the environment and categorize the validation data into three sets based on their visibility to the training set: path-seen data intersecting with the training paths, path-unseen data using the training environments but away from the training paths, and env-unseen data using unseen environments (environments not used in training). By showing that the results gradually decrease from path-seen data to env-unseen data, we characterize the environment bias at three levels: path level, region level, and environment level. These three levels of environment biases indicate strong ‘spatial localities’ in the tasks of VLN, which are intuitively reasonable because environments and regions (e.g., houses and cities) usually have their own styles when built or decorated. We next want to analyze the detailed reason why this locality would further lead to a gap in seen versus unseen results. Our hypothesis is that the low-level information carried by the ResNet features (He et al., 2016) is the reason. To keep minimal low-level visual information and promote more high-level semantic information, we replace the ResNet features with the 1000 ImageNet classification probabilities. Although the semantic information encoded by these features is not accurate because of the shifted domain of images and labels, the same model with ImageNet-Labels features performs surprisingly well on various VLN datasets (i.e., Room-to-Room, R4R, and CVDN1). Most importantly, these noisy semantic features effectively eliminate the performance gap between seen and unseen environments, which suggests that the environment bias is attributed to the ResNet features as our hypothesis. Following the practice in using ImageNet labels as semantic features, we further provide a discussion on how the environment bias could be eliminated. For this, we employ advanced high-level semantic features which are more rational for the VLN domain. We explore three kinds of semantic features: (1) areas of detected object labels (Ren et al., 2015); (2) ground truth semantic views (Chang et al., 2017); and (3) learned semantic view features. We show that all of these semantic features significantly reduce the environment bias in multiple datasets and also achieve strong results in testing unseen environments. 
We hope this work encourages more investigation and research into improving the generalization of vision-language models to unseen real-world scenarios. 2 RELATED WORK Vision-and-Language Navigation: Vision-and-language navigation is an emerging task in the vision-and-language area. A lot of datasets have been proposed in recent years, such as Roomto-Room (Anderson et al., 2018b), Room-for-Room (Jain et al., 2019), TouchDown (Chen et al., 2019c), CVDN (Thomason et al., 2019b), RERERE (Qi et al., 2019), House3D (Wu et al., 2018) and EQA (Das et al., 2018). Recent works (Thomason et al., 2019a; Wang et al., 2018b; Fried et al., 1We did not test these semantic features on touchdown (Chen et al., 2019c) since the images are not released. 2018; Wang et al., 2019b; Ma et al., 2019a;b; Tan et al., 2019; Hu et al., 2019; Ke et al., 2019; Anderson et al., 2019) focusing on improving the performance of navigation models, especially in unseen testing environments, have helped to increase the navigational success rate. Domain Adaptation: The general setup of domain adaption contains two sets of data samples {xi}xi∈X and {yi}yi∈Y from two domains X and Y . Based on these samples, we could learn domain invariant feature with adversarial training (Goodfellow et al., 2014; Zhu et al., 2017; Long et al., 2018; Wang et al., 2019a; Hosseini-Asl et al., 2019; Zhang et al., 2019; Gong et al., 2019; Chen et al., 2019b) or learn a transfer function f : X → Y (Wang et al., 2018a; Chen et al., 2019a; Rozantsev et al., 2018). However, samples from the target domain may not be available (e.g., the testing environments in navigation should not be used in training) in applications. Thus, we try to give an interpretable explanation to why performance varies in different domains and design a robust feature for it without deliberately considering the target domain. Two methods in VLN, RCM (Wang et al., 2019b) and EnvDrop (Tan et al., 2019), explore the possibility of domain adaptation. Both works take the testing environments in training while RCM also uses testing instructions. Domain Generalization: In domain generalization (Blanchard et al., 2011), the goal is to predict the labels in the previous unseen domain. Similar to the test setting of VLN tasks, the testing data is unrevealed in training. Works have been proposed to learn the common features of the training domain (Muandet et al., 2013; Blanchard et al., 2017; Li et al., 2017; 2018; Carlucci et al., 2019; Deshmukh et al., 2019). In this paper, we focus on the domain generalization problem in VLN task, and try to find the reasons for the failures. 3 VISION-AND-LANGUAGE NAVIGATION AND ITS ENVIRONMENT BIAS We first introduce the task of vision-and-language navigation (VLN) and briefly describe the neural agent models used in our work. We next survey previous works on multiple indoor navigation datasets to show that the environment bias is widely observed in current VLN research. Lastly, we claim that this bias also exists in the outdoor navigation tasks, if the agent is tested on unseen regions. 3.1 VISION-AND-LANGUAGE NAVIGATION Tasks: As shown in Fig. 1, the goal of the VLN task is to train an agent to navigate a certain type of environments {E} (e.g., indoor or outdoor environments) given the instruction I. Each environment E is an independent space, such as a room or a house, and consists of a set of viewpoints. Each viewpoint is represented as a panoramic image and can be decomposed into separate views {o} as inputs to the neural agent models. 
The viewpoints and their connectivity form the navigational graph. In practice, after being placed at a particular viewpoint and given the instruction in the beginning, at each time step, the agent can observe the panoramic image of the viewpoint where it is located, and choose to move along an edge of the graph to the next node (i.e., viewpoint) or stop. This navigational process produces a path (i.e., a list of viewpoints), and the performance of the agent is evaluated by whether it reaches the target location that the instruction indicates in the end.
Neural Agent Models: Most instruction-guided navigational agents are built based on attentive encoder-decoder models (Bahdanau et al., 2015). The encoder reads the instructions while the decoder outputs actions based on the encoded instructions and perceived environments. Since the main purpose of this work is to understand the environment bias in vision-and-language navigation, we use a minimal representative neural agent model that achieves comparable results to previous works. Specifically, we adopt the panoramic-view neural agent model in Fried et al. (2018) ('Follower') with modifications from Tan et al. (2019) as our baseline model. We also exclude advanced training techniques (i.e., reinforcement learning and data augmentation) and only train the agent with imitation learning in all our experiments for the same purpose. More details can be found in the original papers.
3.2 ENVIRONMENT BIAS IN INDOOR NAVIGATION
In order to evaluate the generalizability of agent models, indoor vision-and-language navigation datasets (e.g., those collected from Matterport3D (Chang et al., 2017)) use disjoint sets of environments in training and testing. Most of the datasets provide two validation splits to verify the agent's performance in both sets of environments: validation seen, which takes its data from training environments, and validation unseen, whose data is from new environments apart from the training environments. In the first part of Table 1, we list most of the previous works on the Room-to-Room dataset (Anderson et al., 2018b) and report the success rate under greedy decoding (i.e., without beam search) on the validation seen and validation unseen splits. The large absolute gaps (from 30.9% to 9.7%) between the results of seen and unseen environments show that current neural agent models on R2R suffer from environment bias. Besides Room-to-Room (R2R), we also analyze two newly-released indoor navigation datasets that were also collected from Matterport3D environments: Room-for-Room (R4R) (Jain et al., 2019) and Cooperative Vision-and-Dialog Navigation (CVDN) (Thomason et al., 2019b). As shown in the second and third parts of Table 1, results drop significantly from seen to unseen environments (i.e., 26.9% on R4R and 3.74 on CVDN), indicating that agent models also suffer from the environment bias in these datasets. Lastly, we show the results (denoted as 'ours' in Table 1) when the environment bias (reason analyzed in Sec. 5) is effectively eliminated by our learned semantic features (described in Sec. 6.3). As a result, the performance gaps are effectively decreased on all three datasets without changing the model and learning hyper-parameters, compared to our baselines (denoted as 'Our baseline') and previous works.
2 Our work's aim is to both close the seen-unseen gap and achieve competitive unseen results. Note that Anderson et al. (2019) also achieve a 0% gap but at the trade-off of low unseen results. There is also another recent work by Ke et al. (2019), but they do not report val-seen results for non-beam-search methods.
3 As for another major evaluation metric on the R4R dataset, Coverage weighted by Length Score (CLS), we also observe a similar performance gap; our methods can also reduce this gap from 19.2 to 1.5 and achieve competitive state-of-the-art unseen CLS results (34.7).
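To make the baseline concrete, the following is a minimal sketch of a panoramic-view, attentive encoder-decoder agent in the spirit of the 'Follower' model described in Sec. 3.1. The layer names, dimensions, and the exact attention form are assumptions for illustration and do not reproduce Fried et al. (2018) or Tan et al. (2019) exactly.

```python
# A minimal sketch (assumed names/dims) of a panoramic-view, attentive
# encoder-decoder VLN agent; not the exact baseline architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstructionEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=256, hid_dim=512):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hid_dim, batch_first=True)

    def forward(self, tokens):
        # tokens: (B, L) -> ctx: (B, L, hid), plus the final LSTM state
        ctx, (h, c) = self.lstm(self.embedding(tokens))
        return ctx, (h, c)

class PanoramicDecoder(nn.Module):
    """One decoding step: attend over instruction words, then score candidate views."""
    def __init__(self, feat_dim=2048, hid_dim=512):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, hid_dim)
        self.lstm_cell = nn.LSTMCell(hid_dim, hid_dim)
        self.text_attn = nn.Linear(hid_dim, hid_dim)
        self.action_proj = nn.Linear(2 * hid_dim, hid_dim)

    def forward(self, prev_action_feat, state, ctx, cand_feats):
        # prev_action_feat: (B, feat_dim) feature of the last chosen view
        # state: (h, c), each (B, hid); initialized from the encoder's final state
        # cand_feats: (B, n_candidates, feat_dim) features of navigable views
        h, c = self.lstm_cell(self.feat_proj(prev_action_feat), state)
        # soft attention over instruction words
        scores = torch.bmm(ctx, self.text_attn(h).unsqueeze(2)).squeeze(2)   # (B, L)
        attn_ctx = torch.bmm(F.softmax(scores, dim=1).unsqueeze(1), ctx).squeeze(1)
        h_tilde = torch.tanh(self.action_proj(torch.cat([h, attn_ctx], dim=1)))
        # action logits: dot product with each projected candidate-view feature
        logits = torch.bmm(self.feat_proj(cand_feats), h_tilde.unsqueeze(2)).squeeze(2)
        return logits, (h, c)
```

Under imitation learning, the action logits at each step would be trained with cross-entropy against the teacher (shortest-path) action.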
3.3 ENVIRONMENT BIAS IN OUTDOOR NAVIGATION
Since the three indoor navigational datasets in the previous sections are collected from the Matterport3D environments (Chang et al., 2017), in order to show that the environment bias is a general phenomenon that also exists in other kinds of environments, we investigate the outdoor navigation task from the Touchdown dataset (Chen et al., 2019c), whose environments are taken from New York City. In the original data splits of Touchdown, the environment is not specifically divided into seen and unseen and involves only one city. Thus the trained agent is only tested on the training environments (similar to a validation seen split). To reveal the environment bias in the Touchdown dataset, we split the city environment according to latitude and create two sub-environments: 'training' and 'unseen'. The data are then re-split into training, val-seen, and val-unseen accordingly. We adapt our baseline R2R agent model with additional convolutional layers to fit this new task. As shown in the last part of Table 1, when experimenting on the original data split, our baseline model achieves state-of-the-art results on the original 'dev' set and 'test' set, proving the validity of our model on this dataset. However, the results on our re-split data (denoted as 'Our baseline (seen/unseen split)') still show a big drop from the 'training' to the 'unseen' sub-environment (from 17.5% to 5.3%), indicating that environment bias is a broad issue.
4 WHERE: THE EFFECT OF DIFFERENT TASK COMPONENTS
In Sec. 3, we showed that current neural agent models are biased towards the training environments on multiple vision-and-language navigation (VLN) datasets. In this section, our goal is to locate the component of VLN tasks to which this environment bias is attributed. As one of the early-released and well-explored VLN datasets, the Room-to-Room (R2R) dataset (Anderson et al., 2018b) is used as the diagnostic dataset in the experiments. We start by showing that two possible candidates, the natural language instructions and the underlying navigational graph, do not directly contribute to the environment bias. Then the effect of visual environments is analyzed in detail.
4.1 THE EFFECT OF NATURAL-LANGUAGE NAVIGATIONAL INSTRUCTIONS
A common hypothesis is that the navigational instructions for unseen environments (e.g., val unseen) are much different from the training environments (i.e., training and val seen) due to the different objects and layouts in new environments, and that this linguistic difference leads to the performance gap. In this section, we analyze the distributions of success rate with regard to the relationship between the validation data's instructions and the training instructions. In order to quantitatively evaluate this relationship, we define the 'distances' from a validating instruction to all training instructions as the phrase-matching metric.
Suppose x is a validating datum, T is the training set, and inst(x) is the instruction of the datum x; we use ROUGE-L (Lin, 2004) and BLEU-4 (Papineni et al., 2002) to calculate this 'distance':
$$\mathrm{dis}_{\mathrm{ROUGE}}(x, T) = \min_{t \in T} \text{ROUGE-L}\big(\mathrm{inst}(x), \mathrm{inst}(t)\big) \quad (1)$$
$$\mathrm{dis}_{\mathrm{BLEU}}(x, T) = \text{BLEU-4}\big(\mathrm{inst}(x), \{\mathrm{inst}(t)\}_{t \in T}\big) \quad (2)$$
where we consider all the training instructions as references in calculating the BLEU-4 score.
We show the distributions of success rates and distances in Fig. 2. Contrary to the hypothesis, we do not observe a significant difference between the distributions of 'distances' (as shown in Fig. 2 (a, b)) on seen validation and unseen validation. For the success rate distributions (in Fig. 2 (c, d)), the performance is better on instructions with smaller 'distances' (i.e., higher BLEU-4/ROUGE-L scores w.r.t. the training instructions) on both validation splits. However, comparing the two splits, with the same 'distance' to the training instructions, the seen validation set still significantly outperforms the unseen validation set on success rate, which implies that reasons other than language account for this performance gap.
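A minimal sketch of how the phrase-matching 'distances' in Eqs. (1)-(2) can be computed, assuming whitespace-tokenized instructions; ROUGE-L is approximated here by an LCS-based F-measure and BLEU-4 uses NLTK's multi-reference sentence_bleu (the smoothing choice is an assumption):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def lcs_len(a, b):
    # dynamic-programming longest common subsequence length
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[-1][-1]

def rouge_l(candidate, reference):
    # LCS-based F-measure as a simple stand-in for ROUGE-L
    lcs = lcs_len(candidate, reference)
    if lcs == 0:
        return 0.0
    p, r = lcs / len(candidate), lcs / len(reference)
    return 2 * p * r / (p + r)

def dis_rouge(val_tokens, train_token_lists):
    # Eq. (1), as written: the minimum ROUGE-L score over all training instructions
    return min(rouge_l(val_tokens, t) for t in train_token_lists)

def dis_bleu(val_tokens, train_token_lists):
    # Eq. (2): BLEU-4 with all training instructions as references
    return sentence_bleu(train_token_lists, val_tokens,
                         smoothing_function=SmoothingFunction().method1)
```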
4.2 THE EFFECT OF UNDERLYING NAVIGATIONAL GRAPH
As shown in Fig. 3, an environment can be considered as its underlying navigational graph with visual information (as in Fig. 1). In order to test whether the agent model could overfit to these navigational graphs (and thus be biased towards training environments), we follow the experiments in Hu et al. (2019) to train the agent without visual information. Specifically, we mask out the ResNet features with zero vectors so that the agent can only make decisions based on the instructions and the navigational graph. With our baseline model, the success rate is 38.5% on validation seen and 41.0% on validation unseen in this setting, which is consistent with the finding in Hu et al. (2019). Besides showing the relatively good performance of the unseen split without visual contents (similar to Thomason et al. (2019a) and Hu et al. (2019)), we also want to emphasize the low performance gap between seen and unseen environments (2.5%, compared to the usual gap of more than 10%). Hence, we claim that the underlying graph is not a dominant reason for the environment bias.
4.3 THE EFFECT OF VISUAL ENVIRONMENTS
To show how the visual environments affect the agent's performance, we analyze the results on unseen environments and in different spatial regions of the training environments. In order to give a detailed characterization of the effect of environments, we reveal the spatial localities which are related to the agent's performance at three different levels:
• Path-level Locality: Agents perform better on paths which intersect with the training paths.
• Region-level Locality: Agents perform better in regions which are closer to the training data.
• Environment-level Locality: Agents perform better on training environments than on unseen environments.
The existence of these spatial localities inspires us to find the direct cause of the problem in Sec. 3.2. However, the original split of data is not fine-grained enough to separately reveal these spatial localities. To better illustrate this, we visualize the data from one environment of the Room-to-Room dataset in Fig. 3, where the vertices are viewpoints with visual information and edges are valid connections between viewpoints. The vertices highlighted in dark black indicate the viewpoints which are used in training paths, and the red edges are the connections covered by original val-seen paths. As shown in Fig. 3, nearly all viewpoints in val-seen paths (vertices connected to red lines) are used as viewpoints in training data (vertices marked in dark black). We thus cannot categorize the path-level and region-level localities. To bypass this, we propose a novel re-splitting method to create our diagnosis data splits.
[Figure 3: Graph split. Left: original data (Train & Val Seen); right: re-split data (Train & Val Path-seen, Val Path-unseen, with the X-splitting line). Black vertices are viewpoints visited during training; red paths are val seen / val path-seen; blue paths are val path-unseen.]
Structural Data Re-splitting: We employ two kinds of structural data splitting methods based on the horizontal or vertical coordinates, denoted as 'X-split' and 'Z-split', respectively. The 'Z-split' intuitively separates different floors in the houses and the 'X-split' creates separate areas. When applied to the training environments in the R2R dataset, we use one side of the splitting line (see the 'X-splitting line' in Fig. 3) as the new training 'environment', and the other side as the path-unseen 'environment'. In addition to this split of environments, we also re-split the original training data and val-seen data while keeping the val-unseen data the same. The data paths across the splitting line are dropped. As shown in the right part of Fig. 3, we create three new data splits: a training split, a val-path-seen split, and a val-path-unseen split. The edges covered by the new val-path-unseen split are highlighted in blue, while the color style of the training split and the val-path-seen split ('black' for viewpoints in training and 'red' for edges in val path-seen) is the same. Since the amount of original val-seen data is inadequate to fill two new validation sets (val path-seen and val path-unseen), we bring some (original) training data into our new validation splits. The overall statistics of the original splits and our new splits are shown in Table 2.
Existence of Path-level and Environment-level Localities: For both splitting methods, we train our baseline model on the newly-split training set and evaluate on our three validation sets (denoted as 'X-split' or 'Z-split' rows in Table 2). The results of our baseline model on the original R2R splits (denoted as 'R2R' rows) are listed for comparison. As shown in Table 2, the agent performs better on val path-seen than on val path-unseen, which suggests that a path-level locality exists in current VLN agent models. Meanwhile, the results on val path-unseen are in turn higher than on val env-unseen, which indicates an environment-level locality that is independent of the path-level locality.
4 We only split the environments whose data contains a substantial amount, making sure that the remaining training data is still adequate for training strong models.
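A minimal sketch of the structural re-splitting described above; the coordinate threshold (e.g., a hand-picked splitting line per environment) and the data layout are assumptions, not the exact procedure used for Table 2.

```python
# Structural 'X-split' / 'Z-split' re-splitting sketch.
# viewpoint_coord: dict mapping viewpoint id -> (x, y, z) position
# paths: list of viewpoint-id lists; threshold: assumed splitting coordinate.

def resplit_paths(paths, viewpoint_coord, threshold, axis=0):
    """Returns (train_side_paths, unseen_side_paths).
    Paths that cross the splitting line are dropped."""
    train_side, unseen_side = [], []
    for path in paths:
        sides = {viewpoint_coord[v][axis] < threshold for v in path}
        if len(sides) > 1:
            continue  # the path crosses the splitting line: drop it
        # convention (assumption): the side with smaller coordinate is the new training side
        (train_side if sides.pop() else unseen_side).append(path)
    return train_side, unseen_side

# axis=0 gives the 'X-split'; axis=2 (the z coordinate) gives the 'Z-split'
# that roughly separates floors.
```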
Existence of the Region-level Locality: To further demonstrate region-level locality, we study how the success rate changes in different regions of the environment with respect to their distances to the training data, which is similar to the analysis of language 'distance' in Sec. 4.1. We first calculate the point-by-point shortest paths using Dijkstra's algorithm (Dijkstra, 1959), where the shortest distance between viewpoints v and v' is denoted as the graph distance $\mathrm{dis}_{\mathrm{GRAPH}}(v, v')$. Based on this graph distance, we define the viewpoint distance $\mathrm{dis}_{\mathrm{VIEWPOINT}}$ from a viewpoint v to the training data T as v's minimal graph distance to a viewpoint v' in the training data. We then define the path distance $\mathrm{dis}_{\mathrm{PATH}}$ from a validating datum x to the whole training data T as the maximal viewpoint distance over the path of x:
$$\mathrm{dis}_{\mathrm{PATH}}(x, T) = \max_{v \in \mathrm{path}(x)} \mathrm{dis}_{\mathrm{VIEWPOINT}}(v, T) \quad (3)$$
$$= \max_{v \in \mathrm{path}(x)} \; \min_{v' \in \mathrm{path}(t),\, \forall t \in T} \mathrm{dis}_{\mathrm{GRAPH}}(v, v') \quad (4)$$
We compute this path distance between paths in the env-seen validation sets and the training environments in our re-split data. As shown in Fig. 4, the success rate declines as the path moves further from the training environment under both re-splitting methods (i.e., 'X-split' and 'Z-split'). In conclusion, the closer the path is to the training data, the higher the agent's performance, which suggests the existence of region-level locality.
5 WHY: WHAT INSIDE THE ENVIRONMENTS CONTRIBUTES TO THE BIAS?
In Sec. 4, we located the cause of the performance gap in the visual environments by excluding other potential reasons and categorizing the spatial localities. However, there are still multiple possible aspects inside the environment which could lead to these spatial localities, e.g., the object layout convention and the room connections. The agent model could be biased towards the training environments by over-fitting or memorizing these environment-specific characteristics. In this section, we want to identify which aspect directly contributes to the bias, and we draw the following conclusion: the environment bias is attributed to low-level visual information carried by the ResNet features. We first show an experiment that effectively decreases the gap between seen and unseen environments with minimal model modifications. We then clarify our conclusions based on the findings.
5.1 AN INVESTIGATION EXPERIMENT: IMAGENET LABELS AS VISUAL FEATURES
Suspecting that the over-fitting happens when the agent over-learns low-level features, we hope to find a replacement for the 2048-dimensional ResNet features that contains minimal low-level information while preserving distinguishable visual content. The most straightforward replacement is, instead of using the mean-pooled features, to apply the frozen 1000-way classification layer from ResNet pre-training and use the probabilities over ImageNet labels as visual features. Shown as 'ImageNet' in Table 3, the probability distribution almost closes the gap between seen and unseen. These results further constrain the cause of the environment bias to the low-level ResNet features of image views. Combined with the findings on spatial localities, we suggest that environments (i.e., houses) and regions (i.e., rooms) usually have their own 'style'. Thus the same semantic label (captured by ImageNet-1000 features) has different visual appearances (captured by ResNet features) in different environments or regions. As a result, ImageNet-1000 features, in spite of being noisy, are not distracted by low-level visual appearance and can generalize to unseen environments, while ResNet features cannot. Although these ImageNet-1000 features decrease the performance gap, they are mismatched with the VLN domain, so the validation unseen results on R4R and CVDN are slightly worse than the baseline (and not much better on R2R). This motivates us to find better semantic representations of environmental features that can both close the seen-unseen gap and achieve state-of-the-art unseen results (which we discuss next).
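For concreteness, the two visual features compared in Sec. 5.1 can be sketched with a torchvision ResNet-152 as below; the preprocessing pipeline and the exact torchvision API version are assumptions.

```python
# Sketch of the two feature extractors of Sec. 5.1: mean-pooled 2048-dim
# ResNet features vs. 1000-dim ImageNet label probabilities.
import torch
import torchvision.models as models

resnet = models.resnet152(pretrained=True).eval()

@torch.no_grad()
def mean_pooled_feature(image_batch):
    """Standard 2048-dim features: everything up to (and including) global average pooling."""
    x = image_batch
    for name, module in resnet.named_children():
        if name == 'fc':
            break
        x = module(x)
    return torch.flatten(x, 1)           # (B, 2048)

@torch.no_grad()
def imagenet_label_feature(image_batch):
    """1000-dim probabilities over ImageNet labels from the frozen classifier head."""
    logits = resnet(image_batch)          # full network including the fc layer
    return torch.softmax(logits, dim=1)   # (B, 1000)
```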
6 HOW: METHODOLOGY TO FIX THE ENVIRONMENT BIAS
In the previous section (Sec. 5), we found that the environment bias is related to the low-level visual features (i.e., the 2048-dim ResNet features). Following the findings we observed in Sec. 5.1, we build our agent on features which are more correlated with the VLN environmental semantics than the ImageNet label features of Sec. 5.1. We first demonstrate our baseline results on three VLN datasets and then explore the advanced semantic feature replacements. As shown in Table 3, these advanced semantic features can effectively reduce the performance gap between seen and unseen environments and improve the unseen results compared to our strong baselines. The effectiveness of these semantic features supports our explanation of the environment bias in Sec. 5 and also suggests that future work on VLN tasks should consider such generalization issues.
6.1 BASELINE
In our baseline model, following previous works we use the standard ResNet features as the representation of environments (Anderson et al., 2018b; Jain et al., 2019; Thomason et al., 2019b). These features come from the mean-pooled layer after the final convolutional layer of ResNet-152 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). As shown in the 'Baseline' rows of Table 3, val-seen results are significantly higher than val-unseen results on all three datasets. Note that our baseline method uses the 'feature dropout' technique demonstrated in Tan et al. (2019) (without back translation): the ResNet features are randomly masked by zeros before being used as inputs to the agent. Without this 'feature dropout' (denoted as 'ResNet NoDrop' in Table 3), the gaps increase on R2R and R4R, which suggests that this 'feature dropout' technique also helps to eliminate the low-level visual information over-fitting discussed in Sec. 5. However, the performance gap is still large, which leads us to the following discussion of semantic features.
5 We use our baseline agent model (in Sec. 3.1) for R2R and R4R. For CVDN, we take the official baseline code in https://github.com/mmurray/cvdn.
6.2 DETECTED OBJECT AREAS
During navigation, the objects in the environments are crucial since their matches with the instruction often indicate the locations that can guide the agent; thus, object detection results of the environments can provide relevant semantic information. In our work, we utilize the detection information generated by Faster R-CNN (Ren et al., 2015) to create the feature representations. Compared to the ImageNet-1000 features (Sec. 5.1), these detection features include more environmental information since the viewing images in VLN usually contain multiple objects. Instead of directly using the classification probabilities of the labels from ResNet, and different from the approach of Hu et al. (2019), who utilized the embeddings of detected labels, we design our detection feature $f_{\mathrm{DETECT}}$ of each image view as the sum of the areas of detected objects weighted by detection confidence:
$$f_{\mathrm{DETECT}} = [a_{c_1}, a_{c_2}, \ldots, a_{c_n}]; \quad a_{c_i} = \sum_{\mathrm{obj\ is\ } c_i} \mathrm{Area}(\mathrm{obj}) \cdot \mathrm{Conf}(\mathrm{obj}) \quad (5)$$
where $c_i$ and $a_{c_i}$ are the label and feature entry of each detected object class, and Area(·) and Conf(·) are the area and confidence of each detected object. For implementation details, we use the Faster R-CNN (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017) provided in Bottom-Up Attention (Anderson et al., 2018a). To eliminate labels irrelevant to the VLN task, we calculate the total area of each detected object class over all environments and pick the labels that take up a relatively large proportion of the environments, creating features of dimension 152.
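A minimal sketch of the detection-area features in Eq. (5); the detector output format and the filtered 152-label vocabulary (`label_vocab`) are assumptions here.

```python
import numpy as np

def detection_area_features(detections, label_vocab):
    """detections: list of (label, box, confidence) from a detector such as
    Faster R-CNN; box = (x1, y1, x2, y2) in pixels.
    Returns one entry per label in `label_vocab`: the confidence-weighted detected area."""
    index = {label: i for i, label in enumerate(label_vocab)}
    feat = np.zeros(len(label_vocab), dtype=np.float32)
    for label, (x1, y1, x2, y2), conf in detections:
        if label not in index:
            continue  # skip labels filtered out of the (assumed) 152-label vocabulary
        area = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        feat[index[label]] += area * conf  # a_{c_i} = sum_{obj is c_i} Area(obj) * Conf(obj)
    return feat
```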
Denoted as 'Detection' in Table 3, the performance gap is diminished with these detection features compared to the baselines on all three datasets, indicating that changing the features to a higher semantic level has a positive effect on alleviating the environment bias. Meanwhile, the improvement of unseen validation results on the R2R and R4R datasets suggests that these features are more effective for the VLN task than the ImageNet labels.
6.3 SEMANTIC SEGMENTATION
Although the detection features can provide adequate semantic information for the agent to achieve results comparable to the baseline model, they do not fully utilize the visual information: the content left over after detection may still contain useful knowledge for navigation. A better semantic representation is semantic segmentation, which segments each view image at the pixel level and assigns a label to each segmented region, allowing us to utilize the semantics of the entire environment. The Matterport3D dataset (Chang et al., 2017) provides labeled semantic segmentation information for every scene, and we take the rendered images from Tan et al. (2019). A comparison example of RGB images and semantic views is available in the Appendix. Since the semantic segmentation images are fine-grained and have blurry boundaries, we follow the design of the detection features, using the areas of semantic classes in each image view as the semantic features (confidence is excluded since semantic segmentation does not provide this value). The areas are normalized to [0, 1] by dividing by the area of the whole image region. We first assume that the semantic information is provided as additional environmental information; the results of the model using the ground truth semantic areas are shown in the 'ground truth' rows of Table 3. We next study the situation where the semantic information is not available in testing environments, so the information needs to be learned from training environments. Thus, we train a separate multi-layer perceptron to predict the areas of these semantic classes (details in the Appendix), and the results of the model with these predicted semantics as features are shown in the 'learned' rows. As shown in Table 3, both the 'ground truth' and 'learned' semantic representations bring the seen and unseen performance closer compared to the baseline model, and the smallest performance gaps come from the learned semantic segmentation features on all three datasets. The highest validation unseen success rates among all the proposed feature representations are also produced by the semantic segmentation features: the 'learned' semantics for R4R and the 'ground truth' semantics for R2R and CVDN. Overall, among all the semantic representations we have explored, the semantic segmentation features are the most effective in eliminating the environment bias.
7 CONCLUSION
In this paper, we focus on studying the performance gap between seen and unseen environments widely observed in vision-and-language navigation (VLN) tasks, trying to find where and why this environment bias exists and to provide possible initial solutions.
By designing the diagnosis experiments of environment re-splitting and feature replacement, we locate the environment bias in the low-level visual appearance; and we discuss semantic features that decrease the performance gap on three VLN datasets and achieve state-of-the-art results.
6 Note that this detection feature dimension of 152 coincidentally equals the number of layers in ResNet, but there is no correlation.
7 The rendered semantic views are downloaded from https://github.com/airsplay/R2R-EnvDrop.
A APPENDIX
A.1 EXAMPLES OF RGB IMAGES AND SEMANTIC VIEWS
In Fig. 5, we show a rendered semantic view from Tan et al. (2019) and its original RGB image. Different colors indicate different semantic segmentation areas, and 40 semantic labels are considered in the Matterport3D dataset (Chang et al., 2017).
A.2 DETAILS OF 'LEARNED' SEMANTIC TRAINING
We use a multi-layer perceptron over the ResNet features to generate the 'learned' semantic features. The multi-layer perceptron includes three fully-connected layers with ReLU activation on the outputs of the first two layers. The input is the 2048-dim ResNet feature f of each image view. The hidden sizes of the first two layers are 512 and 128. The final layer outputs the 42-dim semantic feature y that represents the areas of each semantic class. After the linear layers, we use the sigmoid function σ to convert the output to the ratio of areas:
$$x_1 = \mathrm{ReLU}(A_1 f + b_1) \quad (6)$$
$$x_2 = \mathrm{ReLU}(A_2 x_1 + b_2) \quad (7)$$
$$y = \sigma(A_3 x_2 + b_3) \quad (8)$$
The model is trained with the ground truth semantic areas (normalized to [0, 1]), and only the views in training environments are used in training. We minimize the binary cross-entropy loss between the ground truth areas $\{y^*_i\}$ and the predicted areas $\{y_i\}$, where i indexes the semantic classes:
$$\mathcal{L} = -\sum_i \big( y^*_i \log y_i + (1 - y^*_i) \log (1 - y_i) \big) \quad (9)$$
Dropout layers with a probability of 0.5 are added between the fully-connected layers during training. The sigmoid function σ and the cross-entropy loss are combined to improve numerical stability. After the model is fitted, we freeze the weights and use it to predict the semantic features of all seen and unseen environments (i.e., the environments for training, val-seen, and val-unseen data). The predicted features are then used as the input to our neural agent model for the different datasets (i.e., R2R, R4R, and CVDN), and the neural agent models are the same except that we change the input dimension from 2048 (the dimension of the ResNet features) to 42 (the number of semantic classes).
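A sketch of the semantic-area predictor consistent with the description in A.2 (2048-dim ResNet feature, hidden sizes 512 and 128, 42 sigmoid outputs, BCE loss, dropout 0.5); the optimizer and learning rate are assumptions, as they are not specified above.

```python
import torch
import torch.nn as nn

class SemanticAreaPredictor(nn.Module):
    def __init__(self, in_dim=2048, hid1=512, hid2=128, n_classes=42, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid1), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hid1, hid2), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hid2, n_classes),   # logits; sigmoid is folded into the loss
        )

    def forward(self, resnet_feat):
        return self.net(resnet_feat)

model = SemanticAreaPredictor()
criterion = nn.BCEWithLogitsLoss()        # combines sigmoid + BCE for numerical stability
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed optimizer and lr

def train_step(resnet_feats, gt_areas):
    """resnet_feats: (B, 2048); gt_areas: (B, 42) ground-truth class areas in [0, 1]."""
    logits = model(resnet_feats)
    loss = criterion(logits, gt_areas)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# At inference time, freeze the weights and use the sigmoid probabilities as features:
# with torch.no_grad(): sem_feat = torch.sigmoid(model(resnet_feats))
```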
1. What are the main contributions of the paper regarding environmental bias and overfitting?
2. How effective are the proposed methods in reducing the performance gap between seen and unseen data?
3. Is the metric used to measure success on the tasks appropriate, or should the focus be on raw performance on unseen data?
4. How well do the semantic features generalize to new environments, and are there any concerns about their efficacy with more training data?
5. Can the authors provide more implementation details and structure for the multilayer perceptron used in Sec. 6.3?
6. Why was the 'Touchdown' environment included in Table 1 if the proposed technique was not evaluated?
7. Could the figure captions be improved by describing what the reader should take away or learn from the figure, rather than just what is shown?
8. Was it necessary to use a multilayer perceptron for the Semantic Segmentation learned features, or would an open-source implementation have been a better choice?
Review
This paper has two main contributions. First, the authors perform an extensive study to understand the source of what they refer to as 'environment bias', which manifests itself as a gap in performance between environments used for training and unseen environments used for validation. The authors conclude that of the three sources of information provided to the agent (the natural language instruction, the graph structure of the environment, and the RGB image), the RGB image is the primary source of the overfitting. The second contribution is to use semantic information, compact statistics derived from (1) detected objects and (2) semantic segmentation, to replace the RGB image and provide input to the system in a way that maintains state-of-the-art performance but shrinks the performance gap between the seen and unseen data.

This paper has some pretty exhaustive treatment diagnosing the source of the agent's 'environment bias' (which, as I discuss below, I believe is more accurately referred to as 'overfitting') in Sec. 4. To me, this is the highlight of the paper, and some interesting work; the investigation of the behavior of the system is interesting and informative. It provides a framework for thinking about how to diagnose this behavior and identify its source. The authors use this rather extensive study to motivate the need for new features (semantic features) to replace the RGB image, which their investigation finds is where much of this 'environment bias' is located. Unfortunately, it is here that the paper falls flat. The authors' proposed methods perform nominally better on the tasks being investigated, but much of the latter portion of the paper continues to focus on the 'improvement' in the metric they use to diagnose the 'bias'. As I mention below, the metric for success on these tasks is performance on the unseen data, and, though an improvement on their 'bias' metric is good anecdotal evidence that their proposed methods are doing what they think, the improvements in this metric are largely due to a nontrivial decrease in performance on the training data. Ultimately, this is not a compelling reason to prefer their method. I go into more detail below about where I think some of the other portions of the paper could be improved and include suggestions for improvement.

High-level comments:
- I am uncertain that 'bias' is the right word to describe the effect under study. In my experience, environment bias (or, more generally, dataset bias) usually implies that the training and test sets (or some subset of the data) are distinct in some way, that they are drawn from different distributions. The learning system cannot identify these differences without access to the test set, resulting in poor performance on the 'unseen' data. In the scenario presented here, the environments are selected to be in the train/test/validation sets at random. As such, the behavior described here is probably more appropriately described as 'overfitting'. The shift in terminology is not an insignificant change, because using 'bias' to describe the problem incorrectly suggests that the data collection procedure is to blame, rather than a lack of data or an overparametrized learning strategy; I imagine that more data in the training set (if it existed) could help to reduce the gap in performance the paper is concerned with. That being said, I imagine some language changes could be done to remedy this.
- Perhaps the biggest problem with the paper as written is that I am not convinced that the 'performance gap' between the seen and unseen data is a metric I should want to optimize. This metric is instructive for diagnosing which component of the model the overfitting is coming from, and Sec. 4 (devoted to a study of this effect) is an interesting study as a result. However, beyond this investigation, reducing the gap between these two is not a compelling objective; ultimately, it is the raw performance on the unseen data that matters most. The paper is written in a way that very heavily emphasizes the 'performance gap' metric, which gets in the way of its otherwise interesting discussion diagnosing the source of overfitting and some 'strong' results on the tasks of interest. The criterion should be used to motivate newer approaches, rather than being the metric we value for its adoption. This narrative challenge is the most important reason I cannot recommend this paper in its current state.
- Using semantic segmentation, rather than the RGB image, as input seems like a good idea, and the authors do a good job of motivating the use of semantics (which should show better generalization performance) over a raw image. However, the implementation in Sec. 6.3 raises a few questions. First (and perhaps least important) is that 6.3 is missing some implementation details. In this section, the authors mention that 'a multilayer perceptron is used' but do not provide any training or structure details; these details should be included in an appendix. More important is the rather significant decrease in performance on the seen data (11% absolute) when switching to the learned method. Though the performance on the unseen data does not change much, it raises some concerns about the generalizability of the learning approach they have used: in an ideal world with infinite training data, the network would perfectly accurately reproduce the ground truth results, and there should be no difference between the two. Consequently, the authors should comment on the discrepancy between the two and the limits of the learned approach, which I worry may limit its efficacy if more training data were added.

Smaller comments:
- I do not fully understand why the 'Touchdown' environment was included in Table 1, since the learned-semantic agent proposed in the paper was not evaluated on it. The remainder of the experiments are sufficient to convince the reader that this gap exists, and I would recommend either evaluating the proposed technique on it or removing this task from the paper.
- Figure captions should be more 'self-contained'. Right now, they describe only what is shown in the figure. They should also describe what I, as a reader, should take away or learn from the figure. This is not always necessary, but in my experience it improves readability, so that the reader does not need to return to the body of the text to understand.
- The use of a multilayer perceptron for the Semantic Segmentation learned features, trained from scratch, stands out as a strange choice when many open-source implementations for semantic segmentation exist and could be fine-tuned for this task; a complete investigation (which may be out of scope for the rebuttal period) may require evaluating the performance of one of these systems.
Recent works propose different methods to improve generalizability of agents on these unseen testing environments; and most of the existing works (Anderson et al., 2018b; Wang et al., 2018b; Fried et al., 2018; Wang et al., 2019b; Ma et al., 2019a;b; Tan et al., 2019; Huang et al., 2019; Hu et al., 2019) observe a significant performance drop from seen environments (i.e., the environments used in training) to unseen environments (i.e., the environments not used in training), which indicates a strong bias in the model towards the training environments. While this performance gap is emphasized as one of the major challenges in current VLN research, the issue is still left unresolved and waits for an explicit explanation. Thus, in this paper, we aim to answer three questions to this environment bias: 1. Where (i.e., in which component) is the bias located? 2. Why does this bias exist? 3. How to eliminate this bias? To locate where the bias is, we start by showing that natural-language navigational instructions and underlying navigational graphs are not direct reasons for this performance gap. We then investigate the effect of environments on the agent’s performance. In order to conduct a detailed analysis, we resplit the environment and categorize the validation data into three sets based on their visibility to the training set: path-seen data intersecting with the training paths, path-unseen data using the training environments but away from the training paths, and env-unseen data using unseen environments (environments not used in training). By showing that the results gradually decrease from path-seen data to env-unseen data, we characterize the environment bias at three levels: path level, region level, and environment level. These three levels of environment biases indicate strong ‘spatial localities’ in the tasks of VLN, which are intuitively reasonable because environments and regions (e.g., houses and cities) usually have their own styles when built or decorated. We next want to analyze the detailed reason why this locality would further lead to a gap in seen versus unseen results. Our hypothesis is that the low-level information carried by the ResNet features (He et al., 2016) is the reason. To keep minimal low-level visual information and promote more high-level semantic information, we replace the ResNet features with the 1000 ImageNet classification probabilities. Although the semantic information encoded by these features is not accurate because of the shifted domain of images and labels, the same model with ImageNet-Labels features performs surprisingly well on various VLN datasets (i.e., Room-to-Room, R4R, and CVDN1). Most importantly, these noisy semantic features effectively eliminate the performance gap between seen and unseen environments, which suggests that the environment bias is attributed to the ResNet features as our hypothesis. Following the practice in using ImageNet labels as semantic features, we further provide a discussion on how the environment bias could be eliminated. For this, we employ advanced high-level semantic features which are more rational for the VLN domain. We explore three kinds of semantic features: (1) areas of detected object labels (Ren et al., 2015); (2) ground truth semantic views (Chang et al., 2017); and (3) learned semantic view features. We show that all of these semantic features significantly reduce the environment bias in multiple datasets and also achieve strong results in testing unseen environments. 
We hope this work encourages more investigation and research into improving the generalization of vision-language models to unseen real-world scenarios. 2 RELATED WORK Vision-and-Language Navigation: Vision-and-language navigation is an emerging task in the vision-and-language area. A lot of datasets have been proposed in recent years, such as Roomto-Room (Anderson et al., 2018b), Room-for-Room (Jain et al., 2019), TouchDown (Chen et al., 2019c), CVDN (Thomason et al., 2019b), RERERE (Qi et al., 2019), House3D (Wu et al., 2018) and EQA (Das et al., 2018). Recent works (Thomason et al., 2019a; Wang et al., 2018b; Fried et al., 1We did not test these semantic features on touchdown (Chen et al., 2019c) since the images are not released. 2018; Wang et al., 2019b; Ma et al., 2019a;b; Tan et al., 2019; Hu et al., 2019; Ke et al., 2019; Anderson et al., 2019) focusing on improving the performance of navigation models, especially in unseen testing environments, have helped to increase the navigational success rate. Domain Adaptation: The general setup of domain adaption contains two sets of data samples {xi}xi∈X and {yi}yi∈Y from two domains X and Y . Based on these samples, we could learn domain invariant feature with adversarial training (Goodfellow et al., 2014; Zhu et al., 2017; Long et al., 2018; Wang et al., 2019a; Hosseini-Asl et al., 2019; Zhang et al., 2019; Gong et al., 2019; Chen et al., 2019b) or learn a transfer function f : X → Y (Wang et al., 2018a; Chen et al., 2019a; Rozantsev et al., 2018). However, samples from the target domain may not be available (e.g., the testing environments in navigation should not be used in training) in applications. Thus, we try to give an interpretable explanation to why performance varies in different domains and design a robust feature for it without deliberately considering the target domain. Two methods in VLN, RCM (Wang et al., 2019b) and EnvDrop (Tan et al., 2019), explore the possibility of domain adaptation. Both works take the testing environments in training while RCM also uses testing instructions. Domain Generalization: In domain generalization (Blanchard et al., 2011), the goal is to predict the labels in the previous unseen domain. Similar to the test setting of VLN tasks, the testing data is unrevealed in training. Works have been proposed to learn the common features of the training domain (Muandet et al., 2013; Blanchard et al., 2017; Li et al., 2017; 2018; Carlucci et al., 2019; Deshmukh et al., 2019). In this paper, we focus on the domain generalization problem in VLN task, and try to find the reasons for the failures. 3 VISION-AND-LANGUAGE NAVIGATION AND ITS ENVIRONMENT BIAS We first introduce the task of vision-and-language navigation (VLN) and briefly describe the neural agent models used in our work. We next survey previous works on multiple indoor navigation datasets to show that the environment bias is widely observed in current VLN research. Lastly, we claim that this bias also exists in the outdoor navigation tasks, if the agent is tested on unseen regions. 3.1 VISION-AND-LANGUAGE NAVIGATION Tasks: As shown in Fig. 1, the goal of the VLN task is to train an agent to navigate a certain type of environments {E} (e.g., indoor or outdoor environments) given the instruction I. Each environment E is an independent space, such as a room or a house, and consists of a set of viewpoints. Each viewpoint is represented as a panoramic image and can be decomposed into separate views {o} as inputs to the neural agent models. 
The viewpoints and their connectivity form the navigational graph. In practice, after being placed at a particular viewpoint and given the instruction in the beginning, at each time step, the agent can observe the panoramic image of the viewpoint where it is located, and choose to move along an edge of the graph to the next node (i.e., viewpoint) or stop. This navigational process produces a path (i.e., a list of viewpoints), and the performance of the agent is evaluated by whether it reaches the target location that the instruction indicates in the end. Neural Agent Models: Most instruction-guided navigational agents are built based on attentive encoder-decoder models (Bahdanau et al., 2015). The encoder reads the instructions while the decoder outputs actions based on the encoded instructions and perceived environments. Since the main purpose of this work is to understand the environment bias in vision-and-language navigation, we use a minimal representative neural agent model that achieves comparable results to previous works. Specifically, we adopt the panoramic-view neural agent model in Fried et al. (2018) (‘Follower’) with modifications from Tan et al. (2019) as our baseline model. We also exclude advanced training techniques (i.e., reinforcement learning and data augmentation) and only train the agent with imitation learning in all our experiments for the same purpose. More details in original papers. 3.2 ENVIRONMENT BIAS IN INDOOR NAVIGATION In order to evaluate the generalizability of agent models, indoor vision-and-language navigation datasets (e.g., those collected from Matterport3D (Chang et al., 2017)) use disjoint sets of environments in training and testing. Most of the datasets provide two validation splits to verify the agent’s performance in both sets of environments: validation seen, which takes the data from training environments, and validation unseen, whose data is from new environments apart from the training environments. In the first part of Table 1, we list most of the previous works on the Room-to-Room dataset (Anderson et al., 2018b) and report the success rate under greedy decoding (i.e., without beam-search) on validation seen and validation unseen splits. The large absolute gaps (from 30.9% to 9.7%) between the results of seen and unseen environments show that current neural agent models on R2R suffer from environment bias2. Besides Room-to-Room (R2R), we also analyze two newly-released indoor navigation datasets that were also collected from Matterport3D environments: Room-forRoom (R4R) (Jain et al., 2019) and Cooperative Vision-and-Dialog Navigation (CVDN) (Thomason et al., 2019b). As shown in the second and third parts of Table. 1, results drop significantly from seen to unseen environments (i.e., 26.9% on R4R and 3.74 on CVDN), indicating that agent models also suffer from the environment bias in these datasets. Lastly, we show the results (denoted as ‘ours’ in Table. 1) when the environment bias (reason analyzed in Sec. 5) is effectively eliminated by our learned semantic features (described in Sec. 6.3). As a result, the performance gaps are effectively decreased on all three datasets without changing the model and learning hyper-parameters, compared to our baselines (denoted as ‘Our baseline’) and previous works 3. 
3.3 ENVIRONMENT BIAS IN OUTDOOR NAVIGATION Since the three indoor navigational datasets in previous sections are collected from the Matterport3D environments (Chang et al., 2017), in order to show that the environment bias is a general phe- 2Our work’s aim is to both close the seen-unseen gap while also achieving competitive unseen results. Note that Anderson et al. (2019) also achieve 0% gap but at the trade-off of low unseen results. There is also another recent work by Ke et al. (2019) but they do not report val-seen results from non-beam-search methods. 3As for another major evaluation metric on the R4R dataset, Coverage weighted by Length Score (CLS), we also observe a similar phenomenon in performance gap; and our methods can also eliminate this gap from 19.2 to 1.5 and achieve competitive state-of-the-art unseen CLS results (34.7). nomenon also existing in other kinds of environments, we investigate the outdoor navigation task from Touchdown dataset (Chen et al., 2019c), whose environments are taken from New York City. In the original data splits of Touchdown, the environment is not specifically divided into seen and unseen and only involved one city. Thus the trained agent is only tested on the training environments (similar to validation seen split). To reveal the environment bias in Touchdown dataset, we split the city environment according to latitude and create two sub-environments: ‘training’ and ‘unseen’. The data are then re-split into training, val-seen, and val-unseen, accordingly. We adapt our baseline R2R agent model with additional convolutional layers to fit this new task. As shown in the last part of Table. 1, when experimenting on the original data split, our baseline model achieves state-of-theart results on the original ‘dev’ set and ‘test’ set, proving the validity of our model in this dataset. However, the results on our re-split data (denoted as ‘Our baseline (seen/unseen split)’) still show a big drop from the ’training’ to the ’unseen’ sub-environment (from 17.5% to 5.3%), indicating that environment bias is a broad issue. 4 WHERE: THE EFFECT OF DIFFERENT TASK COMPONENTS In Sec. 3, we showed that current neural agent models are biased towards the training environments on multiple vision-and-language navigation (VLN) datasets. In this section, our goal is to locate the component of VLN tasks which this environment bias is attributed to. As one of the early-released and well-explored datasets of VLN, Room-to-Room (R2R) dataset (Anderson et al., 2018b) is used as the diagnosing dataset in the experiments. We start by showing that two possible candidates, the natural language instructions and the underlying navigational graph, do not directly contribute to the environment bias. Then the effect of visual environments is analyzed in detail. 4.1 THE EFFECT OF NATURAL-LANGUAGE NAVIGATIONAL INSTRUCTIONS A common hypothesis is that the navigational instructions for unseen environments (e.g., val unseen) are much different from the training environments (i.e., training and val seen) due to the different objects and layouts in new environments; and this lingual difference thus leads to the performance gap. In this section, we analyze the distributions of success rate with regard to the relationship between validation data’s instructions and training instructions. In order to quantitatively evaluate this relationship, we define the ‘distances’ from a validating instruction to all training instructions as the phrase-matching metric. 
Suppose x is a validating datum, T is the training set, and inst(x) is the instruction of the datum x, we use ROUGE-L (Lin, 2004) and BLEU-4 (Papineni et al., 2002) to Train & Val Seen Re-splitting Train & Val Path-seen Val Path-unseen X-splitting line Figure 3: Graph split: left is original data and right is re-splitting data. Black vertices are viewpoints visited during training; red paths are val seen / val path-seen; blue paths are val path-unseen. calculate this ‘distance’: disROUGE(x,T) = min t∈T ROUGE-L (inst(x), inst(t)) (1) disBLEU(x,T) = BLEU-4 ( inst(x), {inst(t)}t∈T ) (2) where we consider all the training instructions as references in calculating the BLEU-4 score. We show the distributions of success rates and distances in Fig. 2. As opposed to the hypothesis, we do not observe a significant difference between the distributions of ‘distances’ (as shown in Fig. 2 (a, b)) on seen validation and unseen validation. For the success rate distributions (in Fig. 2(c,d)), the performance is better on instructions with smaller ‘distances’ (i.e., higher BLEU-4/ROUGE-L scores w.r.t. the training instructions) on both validation splits. However, comparing two splits, with the same ‘distance’ to training instructions, seen validation still significantly outperforms the unseen validation set on success rate, which implies the existence of other reasons rather than language attributed to this performance gap. 4.2 THE EFFECT OF UNDERLYING NAVIGATIONAL GRAPH As shown in Fig. 3, an environment could be considered as its underlying navigational graph with visual information (as in Fig. 1). In order to test whether the agent model could overfit to these navigational graphs (and thus be biased towards training environments), we follow the experiments in Hu et al. (2019) to train the agent without visual information. Specifically, we mask out the ResNet features with zero vectors thus the agent could only make the decision based on the instructions and the navigational graph. With our baseline model, the success rate is 38.5% on validation seen and 41.0% on validation unseen in this setting, which is consistent with the finding in Hu et al. (2019). Besides showing the relatively good performance of unseen split without visual contents (similar to Thomason et al. (2019a) and Hu et al. (2019)), we also want to emphasize the low performance gap between seen and unseen environments (2.5% compared to the > 10% gap in usual). Hence, we claim that the underlying graph is not a dominant reason for the environment bias. 4.3 THE EFFECT OF VISUAL ENVIRONMENTS To show how the visual environments affect the agent’s performance, we analyze the results on unseen environments and in different spatial regions of the training environments. In order to give a detailed characterization of the effect of environments, we are going to reveal the spatial localities which are related to the agent’s performance at three different levels: • Path-level Locality: Agents are better at paths which intersect with the training paths. • Region-level Locality: Agents are better in regions which are closer to the training data. • Environment-level Locality: Agents perform better on training environments than on un- seen environments. And the existence of these spatial locality inspires us to find the direct cause of the problem in Sec. 3.2. However, the original split of data is not fine-grained enough to separately reveal these spatial localities. 
To better illustrate this, we visualize the data from one environment of the Roomto-Room dataset in Fig. 3, where the vertices are viewpoints with visual information and edges are valid connections between viewpoints. The vertices highlighted with dark-black indicate the viewpoints which are used in training paths, and the red edges are the connections covered by original val-seen paths. As shown in Fig. 3, nearly all viewpoints in val-seen paths (vertices connected to red lines) are used as viewpoints in training data (vertices marked by dark-black). We thus cannot categorize the path-level and region-level localities. To bypass this, we propose a novel re-splitting method to create our diagnosis data splits. Structural Data Re-splitting We employ two kinds of structural data splitting methods based on the horizontal or vertical coordinates, denoted as ‘X-split’ and ‘Z-split’, respectively. The ‘Zsplit’ intuitively separates different floors in the houses and ‘X-split’ creates separate areas. When applying to the training environments in R2R dataset, we use one side of the splitting line (see the ‘X-splitting line’ Fig. 3) as the new training ‘environment’, and the other side as the path-unseen ‘environment’. In addition to this split of environments, we also re-split the original training data and val-seen data while keeping the val-unseen data the same. The data paths across the splitting line are dropped. As shown in the right part of Fig. 3, we create three new data splits: training split, val-path-seen split, and val-path-unseen split. The edges covered by the new val-path-unseen split are highlighted in blue, while the color style of training split and val-path-seen split (‘Black’ for viewpoints in training and ‘Red’ for edges in val path-seen) are the same. Since the amount of original val-seen data are inadequate to fill two new validation sets (val path-seen and val pathunseen), we bring some (original) training data into our new validation splits. The overall statistics of original splits and our new splits are shown in Table 2.4 Existence of Path-level and Environment-level Localities For both splitting methods, we train our baseline model on the newly-split training set and evaluate on our three validation sets (denoted as ‘X-split’ or ‘Z-split’ rows in Table 2). The results of our baseline model on the original R2R (denoted as ‘R2R’ rows) splits are listed for comparison. As shown in Table. 2, the agent performs better on val path-seen than val path-unseen, which suggests that a path-level locality exists in current VLN agent models. Meanwhile, the results on val path-unseen are further higher than val env-unseen and it indicates the environment-level locality which is independent of the path-level locality. Existence of the Region-level Locality To further demonstrate region-level locality, we study how the success rate changes in different regions of the environment with respect to their distances to the training data, which is similar to the analysis of language ‘distance’ in Sec. 4.1. We first calculate the point-by-point shortest paths using the Dijkstra’s algorithm (Dijkstra, 1959), where the shortest distances between viewpoints v and v′ are denoted as the graph distance disGRAPH(v, v′). Based on this graph distance, we define the viewpoint distance disVIEWPOINT from a viewpoint v to the training 4We only split the environments whose data contains substantial amount, thus make sure that the remaining training data is still adequate for training strong models. 
data T as v’s minimal graph distance to a viewpoint v′ in the training data. We then define the path distance dis_PATH from a validation datum x to the whole training data T as the maximal viewpoint distance over the path of x: dis_PATH(x, T) = max_{v ∈ path(x)} dis_VIEWPOINT(v, T) (3) = max_{v ∈ path(x)} min_{t ∈ T, v′ ∈ path(t)} dis_GRAPH(v, v′) (4). We compute this path distance between paths in the env-seen validation set and the training environments in our re-split data. As shown in Fig. 4, the success rate declines as the path moves further from the training environment under both re-splitting methods (i.e., ‘X-split’ and ‘Z-split’). In conclusion, the closer a path is to the training data, the higher the agent’s performance, which suggests the existence of region-level locality. 5 WHY: WHAT INSIDE THE ENVIRONMENTS CONTRIBUTES TO THE BIAS? In Sec. 4, we locate the cause of the performance gap in the visual environments by excluding other potential reasons and categorizing the spatial localities. However, there are still multiple possible aspects inside the environment which could lead to these spatial localities, e.g., the object layout convention and the room connections. The agent model could be biased towards the training environments by over-fitting to or memorizing these environment-specific characteristics. In this section, we identify which aspect directly contributes to the bias and draw the following conclusion: the environment bias is attributed to low-level visual information carried by the ResNet features. We first show an experiment that effectively decreases the gap between seen and unseen environments with minimal model modifications. We then clarify our conclusions based on the findings. 5.1 AN INVESTIGATION EXPERIMENT: IMAGENET LABELS AS VISUAL FEATURES Suspecting that the over-fitting happens when the agent over-learns low-level features, we hope to find a replacement for the 2048-dimensional ResNet features that contains minimal low-level information while preserving distinguishable visual content. The most straightforward replacement is, instead of using mean-pooled features, to reuse the frozen 1000-way classification layer from ResNet pre-training and use the probabilities over ImageNet labels as visual features. Shown as ‘ImageNet’ in Table 3, this probability distribution almost closes the gap between seen and unseen. These results further constrain the cause of the environment bias to the low-level ResNet features of the image views. Combined with the findings on spatial localities, we suggest that environments (i.e., houses) and regions (i.e., rooms) usually have their own ‘style’. Thus the same semantic label (captured by ImageNet-1000 features) has different visual appearances (captured by ResNet features) in different environments or regions. As a result, ImageNet-1000 features, in spite of being noisy, are not distracted by low-level visual appearance and can generalize to unseen environments, while ResNet features cannot. Although these ImageNet-1000 features decrease the performance gap, they are not well matched to the VLN domain, so the validation unseen results on R4R and CVDN are slightly worse than the baseline (and not much better for R2R). This motivates us to find better semantic representations of environmental features that can both close the seen-unseen gap and achieve state-of-the-art results on unseen environments (which we discuss next). 6 HOW: METHODOLOGY TO FIX THE ENVIRONMENT BIAS In the previous section (Sec.
5), we found that the environment bias is related to the low-level visual features (i.e., the 2048-dim ResNet features). Following the findings in Sec. 5.1, we build our agent on features which are more correlated with the VLN environmental semantics than the ImageNet label features of Sec. 5.1. We first demonstrate our baseline results on three VLN datasets and then explore the advanced semantic feature replacements. As shown in Table 3, these advanced semantic features effectively reduce the performance gap between seen and unseen environments and improve the unseen results compared to our strong baselines. The effectiveness of these semantic features supports our explanation of the environment bias in Sec. 5 and also suggests that future work on VLN tasks should consider such generalization issues. 6.1 BASELINE In our baseline model, following previous works, we use the standard ResNet features as the representation of environments (Anderson et al., 2018b; Jain et al., 2019; Thomason et al., 2019b). These features come from the mean-pooled layer after the final convolutional layer of ResNet-152 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). As shown in the ‘Baseline’ rows of Table 3 (footnote 5: we use our baseline agent model (in Sec. 3.1) for R2R and R4R; for CVDN, we take the official baseline code at https://github.com/mmurray/cvdn), val-seen results are significantly higher than val-unseen results in all three datasets. Note that our baseline method uses the ‘feature dropout’ technique demonstrated in Tan et al. (2019) (without back translation): the ResNet features are randomly masked with zeros before being used as inputs to the agent. Without this ‘feature dropout’ (denoted as ‘ResNet NoDrop’ in Table 3), the gaps increase on R2R and R4R, which suggests that this ‘feature dropout’ technique also helps reduce over-fitting to low-level visual information, as discussed in Sec. 5. However, the performance gap is still large, which leads us to the following discussions of semantic features. 6.2 DETECTED OBJECTS AREAS During navigation, the objects in the environments are crucial, since matching them with the instruction often indicates the locations that can guide the agent; thus object detection results for the environments can provide relevant semantic information. In our work, we utilize the detection information generated by Faster R-CNN (Ren et al., 2015) to create the feature representations. Compared to the ImageNet-1000 features (Sec. 5.1), these detection features include more environmental information since the view images in VLN usually contain multiple objects. Instead of directly using the classification probabilities of the labels from ResNet, and differently from the approach in Hu et al. (2019), who utilized the embeddings of detected labels, we design our detection features f_DETECT for each image view as the sums of the areas of detected objects weighted by detection confidence: f_DETECT = [a_c1, a_c2, . . . , a_cn]; a_ci = Σ_{obj is c_i} Area(obj) · Conf(obj) (5), where c_i and a_ci are the label and feature value of each detected object class, and Area(·) and Conf(·) are the area and confidence of each detected object. For implementation details, we use the Faster R-CNN (Ren et al., 2015) trained on Visual Genome (Krishna et al., 2017) provided in Bottom-Up Attention (Anderson et al., 2018a).
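The following is a minimal sketch of how the detection features in Eq. (5) could be assembled from per-object detections for a single view; the label vocabulary, the detection fields, and the helper name are illustrative assumptions rather than the authors’ code.

```python
import numpy as np

def detection_features(detections, label_vocab):
    """Eq. (5): sum of (area * confidence) per object label, for one image view.

    detections: iterable of dicts like {"label": str, "area": float, "conf": float}
    label_vocab: ordered list of kept labels (e.g., the 152 labels retained in the paper)
    """
    index = {label: i for i, label in enumerate(label_vocab)}
    feats = np.zeros(len(label_vocab), dtype=np.float32)
    for det in detections:
        i = index.get(det["label"])
        if i is not None:                      # ignore labels outside the kept vocabulary
            feats[i] += det["area"] * det["conf"]
    return feats

# Example usage with made-up detections for one view
vocab = ["sofa", "door", "stairs"]
dets = [{"label": "sofa", "area": 0.12, "conf": 0.9},
        {"label": "sofa", "area": 0.05, "conf": 0.6},
        {"label": "door", "area": 0.20, "conf": 0.8}]
print(detection_features(dets, vocab))        # [0.138, 0.16, 0.0]
```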
To eliminate the labels irrelevant to VLN task, we calculate the total areas of each detection object among all environments and pick the labels that take up a relatively large proportion of the environments, creating features of dimension 152.6 Denoted as ‘Detection’ in Table 3, the performance gap is diminished with these detection features compared to baselines in all three datasets, indicating that changing the features to a higher semantic level has a positive effect on alleviating the environment bias. Meanwhile, the improvement of unseen validation results on R2R an R4R datasets suggests the better efficiency in the VLN task than the ImageNet labels. 6.3 SEMANTIC SEGMENTATION Although the detection features can provide adequate semantic information for the agent to achieve comparable results as the baseline model, they do not fully utilize the visual information where the content left over from detection may contain useful knowledge for navigation. A better semantic representation is the semantic segmentation, which segments each view image on the pixel level and gives the label to each segment region, allowing us to utilize the semantics from the entire environment. Matterport3D (Chang et al., 2017) dataset provides the labeled semantic segmentation information of every scene and we take the rendered images from Tan et al. (2019)7. A comparison example of RGB images and semantic views is available in the Appendix. Since the semantic segmentation images are fine-grained and blurry in boundaries, we follow the design of detection features, using the areas of semantic classes in each image view as the semantic features (confidence is excluded since semantic segmentation does not provide this value). The areas are normalized to [0, 1] by dividing the area of the whole image region. We first assume that the semantic information is provided as additional environmental information and the results of the model using the ground truth semantic areas are shown in the ‘ground truth’ rows in Table. 3. We next study the situation where the semantic information is not available in testing environments thus the information needs to be learned from training environments. Thus we train a separate multi-layer perceptron to predict the areas of these semantic classes (details in Appendix), and the results of the model with these predicted semantics as features are shown in ‘learned’. As shown in Table. 3, both ‘ground truth’ and ‘learned’ semantic representations bring the performance of seen and unseen closer comparing to the baseline model, and the smallest performance gaps come from learned semantic segmentation features in all three datasets. The highest validation unseen success rates among all the proposed feature representations are also produced by semantic segmentation features, ‘learned’ semantic for R4R and ‘ground truth’ semantic for R2R and CVDN. Overall, among all the semantic representations we have explored, the semantic segmentation features are most effective in eliminating the environment bias. 7 CONCLUSION In this paper, we focus on studying the performance gap between seen and unseen environments widely observed in vision-and-language navigation (VLN) tasks, trying to find where and why this environment bias exists and provide possible initial solutions. 
By designing the diagnosis experiments of environment re-splitting and feature replacement, we locate the environment bias to be in the low-level visual appearance; and we discuss semantic features that decrease the performance gap in three VLN datasets and achieve state-of-the-art results. 6Note that this detection feature dimension 152 coincidentally is the same as the number of layers in ResNet, but there is no correlation. 7The rendered semantic views are downloaded from https://github.com/airsplay/R2R-EnvDrop. A APPENDIX A.1 EXAMPLES OF RGB IMAGES AND SEMANTIC VIEWS In Fig. 5, we show a rendered semantic view from Tan et al. (2019) and its original RGB image. Different colors indicate different semantic segmentation areas and 40 semantic labels are considered in the Matterport3D dataset Chang et al. (2017). A.2 DETAILS OF ‘LEARNED’ SEMANTIC TRAINING We use a multi-layer perceptron over the ResNet features to generate the ‘learned’ semantic features. The multi-layer perceptron includes three fully-connected layers with ReLU activation on the outputs of the first two layers. The input is the 2048-dim ResNet feature f of each image view. The hidden sizes of the first two layers are 512 and 128. The final layer will output the 42-dim semantic feature y that represents the areas of each semantic class. After the linear layers, we use the sigmoid function σ to convert the output to the ratio of areas. x1 = ReLU(A1f + b1) (6) x2 = ReLU(A2x1 + b2) (7) y = σ(A3x2 + b3) (8) The model is trained with ground truth semantic areas yAREA (normalized to [0, 1]) and only the views in training environments are used in training. We minimize the binary cross-entropy loss between the ground truth areas {y∗i } and the predicted areas {yi}, where i indicate the i-th semantic class. L = − ∑ i (y∗i log yi + (1− y∗i ) log (1− yi)) (9) Dropout layers with a probability of 0.5 are added between fully-connected layers while training. The sigmoid function σ and the cross-entropy loss are combined to improve numerical stability. After the model is fitted, we freeze the weight and use it to predict the semantic features of all seen and unseen environments (i.e., environments for training, val-seen, and val-unseen data). The predicted features are then used as the input of our neural agent model for different datasets (i.e., R2R, R4R, and CVDN), and the neural agent models are the same except we change the input dimension from 2048 (the dimension of ResNet features) to 42 (the number of semantic classes).
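As a concrete illustration of the predictor described in A.2 (Eqs. (6)–(9)), here is a minimal PyTorch sketch; the layer sizes follow the text, while the module name and training scaffolding are assumptions for illustration, not the authors’ code.

```python
import torch
import torch.nn as nn

class SemanticAreaPredictor(nn.Module):
    """Predicts 42 semantic-class area ratios from a 2048-dim ResNet feature (Appendix A.2)."""
    def __init__(self, in_dim=2048, hid1=512, hid2=128, n_classes=42, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid1), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hid1, hid2), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hid2, n_classes),   # logits; the sigmoid is folded into the loss below
        )

    def forward(self, resnet_feat):
        return self.net(resnet_feat)

model = SemanticAreaPredictor()
# BCEWithLogitsLoss combines the sigmoid and the binary cross-entropy of Eq. (9)
# in a numerically stable way, matching the description in A.2.
criterion = nn.BCEWithLogitsLoss()
feats = torch.randn(8, 2048)              # a batch of ResNet features
target_areas = torch.rand(8, 42)          # ground-truth area ratios in [0, 1]
loss = criterion(model(feats), target_areas)
loss.backward()
# At prediction time, torch.sigmoid(model(feats)) would give the area ratios.
```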
1. What are the main contributions and findings of the paper regarding vision-language navigation (VLN) models? 2. What are the potential sources of failure in VLN models when transferred to unseen environments, according to the authors? 3. How do the authors propose to improve the generalization of VLN models without significantly affecting their absolute performance? 4. What is the methodology used by the authors to compute BLEU in Section 4.1, and how does it differ from the usual corpus BLEU computation? 5. What is the reasoning behind the authors' conclusion that the environment bias is attributed to low-level visual information carried by the ResNet features? 6. Are there any grammatical errors or confusing phrasing in the review that could be clarified or rephrased?
Review
Review Summary: This paper provides a thorough analysis of why vision-language navigation (VLN) models fail when transferred to unseen environments. The authors enumerate potential sources of the failure--namely, the language, the semantic map, and the visual features--and show that the visual features are most clearly to blame for the failures. Specifically, they show that by removing the low-level visual features (e.g. the fc17 or similar) and replacing with various higher-level representations (e.g. the softmax layer of the pretrained CNN, or the output of a semantic segmentation system) dramatically improves generalization without a meaningful drop in absolute performance. Evaluation: The paper is easy to follow and interesting. Some results presented have been show previously (e.g. that removing visual features doesn't drastically hurt performance of VLN models) but overall, the paper presents the results in a clear and thorough manner that will be beneficial to the community. A few small questions/comments below. * I am confused by how you compute BLEU in Section 4.1. You say you compute corpus BLEU but Eq. 2 suggests you compute the BLEU for a single instruction against a set of training instructions. I think corpus BLEU is usually corpus vs. corpus (e.g. all generated sentences vs. all reference sentences) not one generated sentence against all reference sentences. Is this right? It also seems odd that your BLEU scores are distributed the way they are (Fig. 2). Can you explain why you did this the way you did? * nit: Sec. 5 heading. Your grammar is backwards. The question you are trying to express is "bias is attributed to what" not "what is attributed to bias". So heading should be "to what inside the environments is bias attributed" (which is admittedly a clunky title) * another nit: "suggest a surprising conclusion: the environment bias is attributed to low-level visual information carried by the ResNet features." --> idk that this is that surprising, it was kind of natural given the result that removing visual features entirely doesn't hurt performance and helps generalization. So maybe rephrase this sentence.
ICLR
Title Efficient, probabilistic analysis of combinatorial neural codes Abstract Artificial and biological neural networks (ANNs and BNNs) can encode inputs in the form of combinations of individual neurons’ activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity – from factorial to quadratic time – of direct algebraic methods previously applied to small examples and apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing algebraic, geometric, and topological characteristics of combinatorial neural codes and provide insights into how such characteristics are related to learning and experience in neural networks. We introduce a procedure to perform hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to, without observing any inputs or outputs, estimate the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning. 1 INTRODUCTION To understand the world around them, organisms’ biological neural networks (BNNs) encode information about their environment in the dynamics of spikes varying over time and space. Artificial neural networks (ANNs) use similar principles, except instead of transmitting spikes they usually transmit a real-valued number in the range of [0, 1] and their dynamics are typically advanced in a step-wise, discrete manner. Both BNNs and ANNs adjust their internal structures, e.g., connection strengths between neurons, to improve their performance in learned tasks. This leads to encoding input data into internal representations, which they then transform into task-relevant outputs, e.g., motor commands. Combinatorial neural coding schemes, i.e., encoding information in the collective activity of neurons (also called ‘population coding’), is widespread in BNNs (Averbeck et al., 2006; Osborne et al., 2008; Schneidman et al., 2011; Froudarakis et al., 2014; Bush et al., 2015; Stevens, 2018; Beyeler et al., 2019; Villafranca-Faus et al., 2021; Burns et al., 2022; Hannagan et al., 2021) and long-utilized in ANNs, e.g., in associative memory networks (Little, 1974; Hopfield, 1982; Tsodyks & Feigel'man, 1988; Adachi & Aihara, 1997; Krotov & Hopfield, 2016). Advances in mathematical neuroscience (Curto & Itskov, 2008; Curto et al., 2019) has led to the development of analyses designed to understand the combinatorial properties of neural codes and their mapping to the stimulus space. Such analyses were initially inspired by the combinatorial coding seen in place cells (Moser et al., 2008), where neurons represent physical space in the form of ensemble and individual activity (Brown & Alex, 2006; Fenton et al., 2008). Place fields, the physical spatial areas encoded by place cells, can be arranged such that they span multiple spatial dimensions, e.g., 3D navigation space in bats (Yartsev & Ulanovsky, 2013). They can also encode for ‘social place’ (Omer et al., 2018), the location of conspecifics. 
Just as these spatial and social dimensions of place (external stimuli) may be represented by combinatorial coding, so too may other dimensions in external stimuli, such as in vision (Fujii & Ito, 1996; Panzeri & Schultz, 2001; Averbeck et al., 2006; Froudarakis et al., 2014; Fetz, 1997). In place cells, the term receptive field (RF) or place field may intuitively be thought of as a physical place. In the context of vision, for example, we may think of RFs less spatially and more abstractly as representing stimuli features or dimensions along which neurons may respond more or less strongly, e.g., features such as orientation, spatial frequency, or motion (Niell & Stryker, 2008; Juavinett & Callaway, 2015). Two neurons which become activated simultaneously upon visual stimuli moving to the right of the visual field may be said to share the RF of general rightward motion, for example. We may also think of RFs even more abstractly as dimensions in general conceptual spaces, such as the reward–action space of a task (Constantinescu et al., 2016), visual attributes of characters or icons (Aronov et al., 2017), olfactory space (Bao et al., 2019), the relative positions people occupy in a social hierarchy (Park et al., 2021), and even cognition and behaviour more generally (Bellmund et al., 2018). In the method described in Curto et al. (2019), tools from algebra are used to extract the combinatorial structure of neural codes. The types of neural codes under study are sets of binary vectors C ⊂ Fn2 , where there are n neurons in states 0 (off) and 1 (on). The ultimate structure of this method is the canonical form of a neural code CF (C). The canonical form may be analysed topologically, geometrically, and algebraically to infer features such as the potential convexity of the receptive fields (RFs) which gave rise to the code, or the minimum number of dimensions those RFs must span in real space. Such analyses are possible because CF (C) captures the minimal essential set of combinatorial descriptions which describe all existing RF relationships implied by C. RF relationships (whether and how RFs intersect or are contained by one-another in stimulus space) are considered to be implied by C by assuming that if two neurons become activated or spike simultaneously, they likely receive common external input in the form of common stimulus features or common RFs. Given sufficient exploration of the stimulus space, it is possible to infer topological features of the global stimulus space by only observing C (Curto & Itskov, 2008; Mulas & Tran, 2020). To the best of our knowledge, these methods have only been developed and used for small examples of BNNs. Here we apply them to larger BNNs and to ANNs (by considering the co-activation of neurons during single stimulus trials). Despite the power and broad applicability of these methods (Curto & Itskov, 2008; Curto et al., 2019; Mulas & Tran, 2020), two major problems impede their usefulness: (1) the computational time complexity of the algorithms to generate CF (C) is factorial in the number of codewords O(nm!)1, limiting their use in large, real-world datasets; and (2) there is no tolerance for noise in C, nor consideration given towards the stochastic or probabilistic natures of neural firing. 
We address these problems by: (1) introducing a novel method for improving the time complexity to quadratic in the number of neurons O(n2) by computing the generators of CF (C) and using these to answer the same questions; and (2) using information geometry (Nakahara & Amari, 2002; Amari, 2016) to perform hypothesis testing on the presence/absence of inferred geometric or topological properties of the stimulus or task space. As a proof of concept, we apply these new methods to data from a simulated BNN for spatial navigation and a simple ANN for visual classification, both of which may contain thousands of codewords. 2 PRELIMINARIES Before describing our own technical developments and improvements, we first outline some of the key mathematical concepts and objects which we use and expand upon in later sections. For more detailed information, we recommend referring to Curto & Itskov (2008); Curto et al. (2019). 2.1 COMBINATORIAL NEURAL CODES Let F2 = {0, 1}, [n] = {1, 2, . . . , n}, and Fn2 = {a1a2 · · · an|ai ∈ F2, for all i}. A codeword is an element of Fn2 . For a given codeword c = c1c2 · · · cn,, we define its support as supp(c) = {i ∈ [n]|ci ̸= 0}, which can be interpreted as the unique set of active neurons in a discrete time bin which correspond to that codeword. A combinatorial neural code, or a code, is a subset of Fn2 . The support of a code C is defined as supp(C) = {S ⊆ [n]|S = supp(c) for some c ∈ C}, which can be interpreted as all sets of active neurons represented by all corresponding codewords in C. Let ∆ be a subset of 2[n]. The subset ∆ is an abstract simplicial complex if for any S ∈ ∆, the condition S′ ⊆ S gives S′ ∈ ∆, for any S′ ⊆ S. In other words, ∆ ⊆ 2[n] is an abstract simplicial 1n is the number of neurons and m is the number of codewords. In most datasets of interest n ≪ m. complex if it is closed under inclusion. So, the simplicial complex for a code C can be defined as ∆(C) = {S ⊆ [n]|S ⊆ supp(c), for some c ∈ C} . A set S in a simplicial complex ∆ is referred to as an (|S| − 1)-simplex. For instance, a set with cardinality 1 is called 0-simplex (geometrically, a point), a set with cardinality 2 is called a 1-simplex (geometrically, an edge), and so on. Let S be an m-simplex in ∆. Any S′ ⊆ S is called a face of S. 2.2 SIMPLICIAL COMPLEXES AND TOPOLOGY Let C ⊆ Fn2 be a code and ∆(C) be the corresponding simplicial complex of C. From now on, we will use ∆ to denote the corresponding simplicial complex of a code C. Define ∆m as a set of m-simplices in ∆. Define Cm = { ∑ S∈∆m αSS | αS ∈ F2,∀S ∈ ∆m } . The setCm forms a vector space over F2 whose basis elements are all them-simplicies in ∆m.Now, define the chain complex C∗(∆,F2) to be the sequence {Cm}m≥0 . For any m ≥ 1, define a linear transformation ∂m : Cm → Cm−1, where for any σ ∈ ∆m, ∂m(σ) = ∑m i=0 σ i, with σi ∈ ∆m−1 as a face of σ, for all i = 0, . . . ,m. Moreover, the map ∂m can be extended linearly to all elements in Cm as follows ∂m ( ∑ S∈∆m αSS ) = ∑ S∈∆m αS∂m(S). Define the m-th mod-2 homology group of ∆ as Hm(∆,F2) = Ker (∂m) Im (∂m+1) for all m ≥ 1 and H0(∆,F2) = C0 Im (∂1) . Note thatHm(∆,F2) is also a vector space over F2, for allm ≥ 0. So, the mod-2m-th Betti number βm(∆) of a simplicial complex ∆ is the dimension ofHm(∆,F2). The βm(∆,F2) gives the number of m-dimensional holes in the geometric realisation of ∆. 2.3 CANONICAL FORM Let σ and τ be subsets of [n],where σ∩τ = ∅. The polynomial of the form ∏ i∈σ xi ∏ j∈τ (1−xj) ∈ F2[x1m. . . , xn] is called a pseudo-monomial. 
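As a small illustration of these definitions (an assumption-laden sketch, not code from the paper), the snippet below computes the support of a codeword and evaluates a pseudo-monomial ∏_{i∈σ} x_i ∏_{j∈τ}(1 − x_j) on binary codewords.

```python
def support(codeword):
    """supp(c): the (1-based) indices of active neurons in a 0/1 codeword."""
    return frozenset(i + 1 for i, bit in enumerate(codeword) if bit)

def pseudo_monomial(sigma, tau):
    """Return a function evaluating prod_{i in sigma} x_i * prod_{j in tau} (1 - x_j) over F2."""
    assert not (set(sigma) & set(tau)), "sigma and tau must be disjoint"
    def evaluate(codeword):
        value = 1
        for i in sigma:
            value *= codeword[i - 1]
        for j in tau:
            value *= 1 - codeword[j - 1]
        return value % 2
    return evaluate

# Tiny example: a code on 3 neurons
C = [(1, 1, 0), (0, 1, 1), (1, 0, 0)]
print([sorted(support(c)) for c in C])   # [[1, 2], [2, 3], [1]]
f = pseudo_monomial(sigma={1}, tau={3})  # x1 * (1 - x3)
print([f(c) for c in C])                 # [1, 0, 1]
```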
In a given ideal J ⊆ F2[x1, . . . , xn], a pseudomonomial f in J is said to be minimal if there is no pseudo-monomial g in J with deg(g) < deg(f) such that f = gh for some h ∈ F2[x1, . . . , xn]. For a given code C ⊆ Fn2 , we can define a neural ideal related to C as JC = ⟨ρc′ |c′Fn2 − C⟩, where ρc′ is a pseudo-monomial of the form∏ i∈supp(c′) xi ∏ j ̸∈supp(c′) (1− xj) . A set of all minimal pseudo-monomials in JC , denoted by CF (JC) or simply CF (C), is called the canonical form of JC . Moreover, it can be shown that JC = ⟨CF (C)⟩. Therefore, the canonical form CF (C) gives a simple way to infer the RF relationships implied by all codewords in C. One way to calculate the CF (C) is by using a recursive algorithm described in Curto et al. (2019). For a code C = {c1, . . . , c|C|}, the aforementioned algorithm works by constructing canonical forms CF (∅), CF ({c1}) , CF ({c1, c2}) , . . . , CF (C) , respectively. In each stage, the algorithm evaluates polynomials, checks divisibility conditions, and adds or removes polynomials from a related canonical form. 3 METHODS Our main methodological contributions are: (1) improving the computational complexity of the analyses relying on computing CF (C) (see Algorithm 1); and (2) using information geometry to identify whether identified algebraic or topological features are statistically significant. 3.1 COMPUTING AND ANALYSING THE CANONICAL FORM’S GENERATORS We may perform the same analyses as in Curto et al. (2019) in quadratic time by using Algorithm 1 to construct the generators of CF (C) rather than constructing CF (C) itself (as in Algorithm 2 of Curto et al. (2019)). Illustrative of this efficiency, representative experimental data with 25 neurons and 46 codewords took < 1 second to analyse on a high-end desktop PC (Intel i7 CPU and 64GB of memory), compared to 2 minutes 57 seconds using Algorithm 2 from Curto et al. (2019). Algorithm 1 Algorithm for computing generators of CF (C) Input: M = C ⊂ Fn2 as a patterns× neurons matrix Initialize: D ← empty list ▷ Stores the monomials. P ← empty list ▷ Stores the mixed monomial constructor tuples (σ,τ ). B ← empty list ▷ Stores the mixed monomial constructor tuples (τ ,σ). for each column i of M do for each column j of M do s← ∑ k(i · j)k if s < 1 then ▷ The pair i, j have disjoint receptive fields. append {i, j} to D else j′ ← j − 1 b← ∑ k=1(i · j′)k if b = 0 then ▷ The receptive field of j is a subset of receptive field of i. append (i, j) to P append (j, i) to B end if end if end for end for Generating desired elements of JC is then straightforward: monomials are supersets of disjoint pairs (from D) where each pair set has one element shared with at least one other disjoint pair set in the superset; mixed monomials are all possible combinations of first (σ set) and second (τ set) elements in the tuples of P (or vice-versa for B) – we do not allocate all of these elements but instead store the set constructors; and the negative monomial appears if and only if the all 1s codeword exists (which involves a simple summing check on columns of M ). 3.2 INFORMATION GEOMETRY FOR COMBINATORIAL NEURAL CODES Let N be the finite number of time bins for data of the neural activity patterns on n neurons. For any S ⊆ [n], let v(S) ∈ Fn2 , where supp(v(S)) = S and Pv(S) = #{v(S)} N . We would like to find the parameters θ = ( θS1 , θS2 , . . . 
, θS2n−1 ) , where Si ⊆ [n], Si ̸= ∅, and S2n−1 = [n], such that the following exponential function P(x, θ) = exp ∑ S⊆[n],S ̸=∅ θSxS − ψ , where xS = ∏ i∈S xi and ψ = − log(Pv(∅)), describes a neural activity pattern from the given neural activity data. We can calculate θS using the following formula for any S ⊆ [n], where S ̸= ∅, θS = log ( Pv(S) Pv(∅) ∏ S′⊊S,S′ ̸=∅ exp (θS′) ) , ηW = ∑ S⊆[n],S⊇W Pv(S). Given a θ-coordinate, we can calculate the associated G(θ) = ( gθA,B ) A,B⊆[n] matrix using the following formula, gθA,B = Eθ (XAXB)− ηAηB = ∑ W⊇A∪B e−ψ ∏ W ′⊆W W ′ ̸=∅ eθW ′ − ηAηB , = ∑ W⊇A∪B e−ψe ∑ W ′⊆W W ′ ̸=∅ θW ′ − ηAηB , where ψ = − log(Pv(∅)). Example 3.1. Let n = 4, A = {1, 2}, and B = {2, 4}, then gθA,B = ∑ W⊇{1,2,4} e−ψ ∏ W ′⊆W W ′ ̸=∅ eθW ′ = e−ψ ∏ W ′⊆{1,2,4} W ′ ̸=∅ eθW ′ + e−ψ ∏ W ′⊆{1,2,3,4} W ′ ̸=∅ eθW ′ = ( eθ{1}+θ{2}+θ{4}+θ{1,2}+θ{1,4}+θ{2,4}+θ{1,2,4}−ψ +eθ{1}+θ{2}+θ{3}+θ{4}+θ{1,2}+θ{1,3}+θ{1,4}+θ{2,3}+θ{2,4}+θ{3,4}+θ{1,2,3}+θ{1,2,4}+θ{1,2,3,4}−ψ ) −η{1,2}η{2,4} 3.3 HYPOTHESIS TESTING FOR ALGEBRAIC AND TOPOLOGICAL FEATURES Using the previous sections, we can now perform hypothesis testing on specific RF relationships or topological features such as holes. Given Pv(S) for all S ⊆ [n] as in the previous subsection, we can calculate ηW , for all W ⊆ [n], where ηW is equal to E (∏ i∈W xi ) = Prob{xi = 1,∀i ∈W}, using the following formula. ηW = ∑ S⊆[n],S⊇W Pv(S) Given a set of neurons A ⊆ [n], where |A| = k, we want to test whether there is a k-th order interaction between neurons in A or not. We can do this by hypothesis testing as follows. 1. Calculate θS and ηW , for all S,W ⊆ [n]. 2. Specify a coordinate for P(x; η, θ) based on A as ζAk = ( ηAk−; θ A k ) , where ηAk− = (ηH)H⊆[n],|H|≤k and θ A k = (θH)H⊆[n],|H|>k . 3. Set the corresponding null hypothesis coordinate as ζ0k = ( ηAk−; θ 0 k ) , where ηA = 0, ηH is equal to the previous step except for H = A, and θ0k is equal to the one in the previous step. 4. Determine the corresponding G(θ) = ( gθA,B ) A,B⊆[n] matrix related to θ-coordinate using equation 3.2. Arrange the rows and columns of G(θ) such that G(θ) = ( Aθ Bθ BTθ Dθ ) , where Aθ is the submatrix of G(θ) with row and column indices from all H ⊆ [n] with |H| ≤ k and Dθ is the submatrix of G(θ) with row and column indices from all H ⊆ [n] with |H| > k. 5. Determine the corresponding G(η) = ( gηA,B ) A,B⊆[n] matrix related to η-coordinate using the equation G(η) = G(θ)−1. We can write G(η) in the form G(η) = ( Aη Bη BTη Dη ) , where Aη is the submatrix of G(η) with row and column indices from all H ⊆ [n] with |H| ≤ k and Dη is the submatrix of G(η) with row and column indices from all H ⊆ [n] with |H| > k. 6. Determine the corresponding G(ζAk ) matrix related to the mixed coordinate ζ A k with G(ζAk ) = ( AζAK O O DζAK ) , where AζAK = A −1 θ and DζAK = D −1 η . 7. Calculate the test statistic as follows λ = 2 N∑ i=1 log ( P(xi; η A k−, θ 0 k) P(xi; ηAk−, θ A k ) ) ≈ 2NẼ ( log ( P(x; ηAk−, θ 0 k) P(x; ηAk−, θ A k ) )) ≈ 2ND [ P(x; ηAk−, θ 0 k);P(x; η A k−, θ A k ) ] ≈ NgζAA(η 0 A − ηA) ≈ Ngζ A k AA(η 0 A − ηA) where gζ A k AA is the entry of the G(ζ A k ) matrix. 8. Fix a level of significance α and find the value χ2α(1) (chi-square value with significance level α and degree of freedom 1) from the χ2 look-up table. 9. 
Compare λ and χ2 = max{χ2_α(1), 1 − χ2_α(1)} • If λ ≥ χ2, there is a significant interaction between neurons in A (reject the null hypothesis) • Otherwise, there is no significant interaction between neurons in A (accept the null hypothesis) Since G scales as 2^n, we use a subset M of all neurons, where A ⊂ M and |M| = 10. We pick a set A relevant to the feature we want to test the significance of, and choose random neurons (without replacement) not already in A for the remaining elements of M, repeating the test until we exhaust all neurons. We then correct for multiple comparisons and use α = 0.05 to detect whether there is a significant interaction in A. The choice of A depends on which feature we wish to analyse. When analysing whether or not two neurons are disjoint in their RFs (a monomial relationship in CF(C)), we set A as those two neurons. When analysing whether the RF of i is contained within the RF of j (a mixed monomial relationship in CF(C)), we first set A as those two neurons, and then set A as i with a random set of neurons (repeating this at least 5 times, with different random sets, and correcting for multiple comparisons). For every dimension m where βm(∆,F2) > 0, when analysing whether a hole is significant, we test all possible sets A ⊂ M ⊂ [n] which close the hole. If any test closes the hole, there is no hole, whereas if no test closes the hole, there is a hole. 4 APPLICATIONS 4.1 SPATIAL NAVIGATION IN BNNS Using the RatInABox simulation package (George et al., 2022), we created simple 2D navigation environments with 0, 1, 2, or 3 holes in the first dimension. We used a random cover of 40 place cells modelled using Gaussians for the probability of firing and geodesic receptive field geometries. Starting at a random position, we then simulated random walks governed by Ornstein-Uhlenbeck processes for 30 minutes, with parameters based on rat locomotion data in Sargolini et al. (2006). We constructed a combinatorial neural code C using a window size of 10 ms, allowing for up to 3,000 unique codewords. We constructed ∆(C) up to dimension 2 and calculated β1(∆,F2), with the hypothesis that β1 would be equal to the respective number of holes in the environment. Figure 1 shows an example of a single place cell and part of a simulated trajectory for an environment with β1 = 1, and a geometric realisation of ∆(C) constructed after a 30 minute random walk. Table 1 shows the number of statistically significant holes found after different durations of the trajectories for environments with different topologies. Although after 10 minutes of a random walk some holes were occasionally detected, in all cases after 20 minutes all holes in the environment were detected consistently. There were a large number of monomials across all conditions (all simulations had > 1000) due to the covering nature of the RF arrangements. There were also a small number (all simulations had < 5) of mixed monomials (RFs found to be subsets, significantly so, of other RFs). 4.2 VISUAL CLASSIFICATION IN ANNS We trained a multi-layer perceptron (MLP) to classify handwritten digits from the MNIST dataset (LeCun et al., 2010) (see Figure 2, top, for examples). The model consisted of an input layer with 784 neurons (the digit pixel values), followed by two hidden layers, each with 50 neurons using the rectified linear unit activation function and 20% dropout. The final output layer consisted of 10 neurons (corresponding to the 10 digit class labels) and used a softmax activation function.
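For concreteness, the following is a minimal PyTorch sketch of the MLP just described; the hyperparameters follow the text, while the class name and scaffolding are illustrative assumptions rather than the authors’ code.

```python
import torch
import torch.nn as nn

class MNISTClassifier(nn.Module):
    """784 -> 50 -> 50 -> 10 MLP with ReLU and 20% dropout, as described in Sec. 4.2."""
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Sequential(nn.Linear(784, 50), nn.ReLU(), nn.Dropout(0.2))
        self.hidden2 = nn.Sequential(nn.Linear(50, 50), nn.ReLU(), nn.Dropout(0.2))
        self.out = nn.Linear(50, 10)  # logits; the softmax is applied inside nn.CrossEntropyLoss

    def forward(self, x, return_hidden=False):
        h1 = self.hidden1(x.view(x.size(0), -1))  # flatten the 28x28 pixels to 784
        h2 = self.hidden2(h1)
        logits = self.out(h2)
        # h1 and h2 are the hidden activations that are later recorded and binarized
        # into the per-layer combinatorial codes analysed below.
        return (logits, h1, h2) if return_hidden else logits

model = MNISTClassifier()
logits, h1, h2 = model(torch.randn(32, 1, 28, 28), return_hidden=True)
```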
The data was split into 50,000 digits for training, 10,000 for validation, and 10,000 for testing, allowing for up to 10,000 unique codewords in our analysis. The network was trained over 10 epochs with a batch size of 32 samples. The optimiser was stochastic gradient descent (with learning rate 0.01 and momentum 0.5) and the criterion was the cross-entropy loss between the one-hot vector of the true class labels and the output layer’s activation for each sample. The MLP achieved > 96% accuracy after 10 epochs (Figure 2, middle). After each epoch, test samples which the network did not see during training were fed through the network and the activity of all neurons in both hidden layers was recorded. The recorded activities for each hidden layer corresponding to each sample were then binarized about their means (calculated over all samples) to create a code C of size 10, 000× 50 for each layer, which we denote C1 for layer one and C2 for layer two. The codes C1 and C2 showed differences in their algebraic and geometric structures across training epochs, and also differed between themselves (Table 2). In general, C1 had more overlapping RFs and spanned a larger number of real dimensions (assuming convexity) than C2. However, during training, we find both codes lower their dimensionality and gradually spread out their RFs to cover more of the space. This is also shown by the leftward shift between epoch 1 and 10 in the histograms of the number of co-active neurons in C2 (Figure 2, bottom). 5 DISCUSSION We have shown it is possible to analyse the intrinsic geometry and topology of combinatorial neural codes from biological and artificial networks in an efficient and probabilistic manner. With these improved methods, we can now comfortably study codes with tens and even hundreds of thousands of codewords. We have shown how these methods can be used to better understand (with some statistically surety) how the internal representations of external inputs within these networks can change through learning, experience, and network depth. Neuroscientists have shown combinatorial neural codes can occupy low-dimensional subspaces called neural manifolds in the covariance of their neural activities (Gallego et al., 2017; Feulner & Clopath, 2021). Trajectories and regions in these subspaces can correspond to task cognition, perceptual classification, and movement (Cohen et al., 2020; Chung & Abbott, 2021). For example, Gardner et al. (2022) show the activity of populations of hundreds of grid cells within single modules of medial entorhinal cortex (a brain area partly responsible for navigation) occupy positions on a toroidal manifold. Positions on this manifold correspond to positions in the 2D space which the animal is navigating in. These findings might lead us to believe combinatorial neural codes are intrinsically low-dimensional despite being embedded in the high-dimensional combinatorial space of neural activity. However, theoretical (Bartolo et al., 2020) and experimental (Rigotti et al., 2013) studies have shown that the dimensionality of these neural manifolds is influenced and often directly corresponds to the dimensionality of the task or learning under study. Indeed, the low-dimensional embeddings found by Gardner et al. (2022) are predicted by the two-dimensionality of the navigation (the underlying cause of the neural activity). 
Mathematically-optimal combinatorial neural codes and their RFs are also related to the dimensionality of the inputs those codes are attempting to represent (Wang et al., 2013). In more naturalistic and complex tasks, maintaining high-dimensional representations in the neural code may allow for increased expressibility but lower generalisability, whereas reducing to low-dimensional representations may allow for less expressibility but higher generalisability (Fusi et al., 2016; Badre et al., 2021). High-dimensional codes are often found in recordings from BNNs and are are often found when individual neurons encode for multiple input features, allowing linear read-out of a large number of complex or simple features (Fusi et al., 2016). Such neurons, for example in macaque inferotemporal cortex (Higgins et al., 2021), can also encode for very specific and independent higher-dimensional features. This implies combinatorial neural codes can include mixtures of coding strategies which are simultaneously low- and high-dimensional. One of the key advantages of the techniques developed and applied in this study is that we can consider these different dimensionalities of coding at the same time. We don’t reduce the embedding dimensionality to perform our analysis (which would be equivalent to assuming a low-dimensional code). We also don’t try to map individual neuron responses to experimenter-known but network-unknown external, high-dimensional variables (which would be equivalent to assuming a high-dimensional code). Instead, we keep the full, original dimensionality of the data and can identify low- or high-dimensional features and response relationships at local and global levels simultaneously, all without reference to information external to the neural network. We also provide a method for testing the statistical significance of these features and relationships, again while maintaining the original high embedding dimension of the data. This allows us to avoid making any strong assumptions about dimensionality of the task, stimuli, or the corresponding neural code – instead, we let the data speak for themselves. We do carry over some limitations from prior work, most prominently: (a) we assume joint-activity of neurons corresponds to common inputs or selectivity thereof; and (b) we binarize neural signals into ‘on’ and ‘off’ states. We suggest future work now focus on mitigating these limitations by: (a) performing causal inference tests on neural co-activations; and (b) considering polynomials over larger finite fields, e.g., F4, or extending these methods to more ‘continuous’ structures, e.g., manifolds.
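To make the pairwise receptive-field checks underlying Algorithm 1 (Sec. 3.1) concrete, here is a minimal NumPy sketch; the function name is illustrative, and the subset test is one straightforward reading of the pseudocode under the stated co-activity assumption, not the authors’ exact implementation.

```python
import numpy as np

def rf_pair_relations(M):
    """Pairwise receptive-field checks over the columns (neurons) of a 0/1 pattern matrix M.

    Returns (disjoint_pairs, subset_pairs), where subset_pairs contains (i, j)
    when neuron j is only ever active together with neuron i, i.e. one reading of
    'RF(j) is contained in RF(i)' under the co-activity assumption.
    """
    M = np.asarray(M, dtype=int)
    n = M.shape[1]
    disjoint_pairs, subset_pairs = [], []
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            overlap = int(M[:, i] @ M[:, j])
            if overlap == 0:
                disjoint_pairs.append((i, j))        # never co-active: candidate disjoint RFs
            elif np.all(M[:, j] <= M[:, i]):
                subset_pairs.append((i, j))          # j active only when i is active
    return disjoint_pairs, subset_pairs

# Toy code on 3 neurons (rows = codewords, columns = neurons)
M = [[1, 1, 0],
     [1, 0, 0],
     [0, 0, 1]]
print(rf_pair_relations(M))
```

The double loop performs O(n^2) pairwise checks over the n neuron columns, matching the quadratic scaling discussed in Sec. 3.1.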
1. What is the main contribution of the paper regarding combinatorial neural codes? 2. What are the strengths and weaknesses of the proposed algorithm and procedure? 3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? 4. Are there any questions regarding the paper's topic, significance, and relevance to the community? 5. Does the reviewer have suggestions for improving the paper's clarity, quality, and novelty?
Summary Of The Paper Strengths And Weaknesses Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper The paper aims to improve the analysis of combinatorial neural codes by accelerating the existing algebraic algorithm from Curto et al. and applying the information-geometric procedure from Nakahara & Amari to estimate the statistical significance of various topological properties of the stimulus space. The proposed algorithm and procedure are evaluated on synthetic place-cell BNNs (for hole detection) and an MLP trained on MNIST (for comparing various properties of neurons at different layers and epochs). Strengths And Weaknesses Strengths The topic is important, interesting and relevant to the community. Weaknesses (Clarity) The paper itself is unfortunately unreadable in my opinion, specifically the preliminaries (Sec 2.2 & 2.3) and proposed methods (Sec 3.2 & 3.3). Many of the variables, operators and functions are left undefined and/or unexplained in the paper, e.g. x, θ, G(θ), G(η), ⟨·⟩, Ker(·), Im(·), etc., as well as terms e.g. negative/mixed monomials, lower bound of dimension, local obstruction to convexity, intersection complete, etc. More critically, the meanings and roles of the key tools (e.g. CF and its generator) in the paper are largely unprovided and can only be guessed. Experimentally, how exactly the proposed algorithm (Algorithm 1) and statistical testing procedure (Sec 3.2 & 3.3) are used e.g. to compute quantities in Table 1 & 2 is not clearly explained either. As such, there’s really no way to accurately evaluate the correctness and novelty of the paper to recommend its acceptance. (Novelty) The overall novelty of the paper is unclear and questionable. It’s unclear why Algorithm 1 is novel and superior compared to Curto et al. (Algorithm 2) since they are computing quite different things (CF generator vs CF itself) and the exact usage and comparative efficiency of Algorithm 1 in the experiments are also unclear. Moreover, it’s also unclear how Algorithm 1 compares to the “manageable” alternative algorithm (primary decomposition, or its corresponding subpart) described in Curto et al. The procedure using the information geometry to identify statistically significant topological properties (Sec 3.2 & 3.3) seems entirely borrowed from Nakahara & Amari. It’s unclear what novel changes have been made to the procedure, or why applying it in this work is a novel contribution (considering the popularity of Nakahara & Amari, 2002 and Amari, 2016). (Quality) The paper’s quality is unsatisfactory also due to inadequate evaluation (in addition to unclear correctness). Specifically, Algorithm 1 and the proposed procedure are not evaluated against existing methods at all in the experiments to properly prove their superiority in overall speed (not just Algorithm 1 vs Algorithm 2 of Curto et al.), resulting accuracy (in estimating the topological properties), etc. Also, simply binarizing conventional NNs for Sec 4.2 is inadequate due to inevitable distortions. Please consider using fully binary NNs such as [1] or newer algorithms [2]. (Significance) Due to the aforementioned issues in the paper’s clarity, novelty and quality, it’s hard to conclude that this work is significant. [1] Binarized Neural Networks, NeurIPS, 2016. [2] A comprehensive review of Binary Neural Network, https://arxiv.org/abs/2110.06804 Clarity, Quality, Novelty And Reproducibility Please see above for my evaluation of the clarity, quality and novelty of the paper. All aspects of the paper need to be substantially improved.
Regarding reproducibility, the submission didn’t include anonymized source code and given the overall unclarity of the paper, I don’t find the paper reproducible.
ICLR
Title Efficient, probabilistic analysis of combinatorial neural codes Abstract Artificial and biological neural networks (ANNs and BNNs) can encode inputs in the form of combinations of individual neurons’ activities. These combinatorial neural codes present a computational challenge for direct and efficient analysis due to their high dimensionality and often large volumes of data. Here we improve the computational complexity – from factorial to quadratic time – of direct algebraic methods previously applied to small examples and apply them to large neural codes generated by experiments. These methods provide a novel and efficient way of probing algebraic, geometric, and topological characteristics of combinatorial neural codes and provide insights into how such characteristics are related to learning and experience in neural networks. We introduce a procedure to perform hypothesis testing on the intrinsic features of neural codes using information geometry. We then apply these methods to neural activities from an ANN for image classification and a BNN for 2D navigation to, without observing any inputs or outputs, estimate the structure and dimensionality of the stimulus or task space. Additionally, we demonstrate how an ANN varies its internal representations across network depth and during learning. 1 INTRODUCTION To understand the world around them, organisms’ biological neural networks (BNNs) encode information about their environment in the dynamics of spikes varying over time and space. Artificial neural networks (ANNs) use similar principles, except instead of transmitting spikes they usually transmit a real-valued number in the range of [0, 1] and their dynamics are typically advanced in a step-wise, discrete manner. Both BNNs and ANNs adjust their internal structures, e.g., connection strengths between neurons, to improve their performance in learned tasks. This leads to encoding input data into internal representations, which they then transform into task-relevant outputs, e.g., motor commands. Combinatorial neural coding schemes, i.e., encoding information in the collective activity of neurons (also called ‘population coding’), is widespread in BNNs (Averbeck et al., 2006; Osborne et al., 2008; Schneidman et al., 2011; Froudarakis et al., 2014; Bush et al., 2015; Stevens, 2018; Beyeler et al., 2019; Villafranca-Faus et al., 2021; Burns et al., 2022; Hannagan et al., 2021) and long-utilized in ANNs, e.g., in associative memory networks (Little, 1974; Hopfield, 1982; Tsodyks & Feigel'man, 1988; Adachi & Aihara, 1997; Krotov & Hopfield, 2016). Advances in mathematical neuroscience (Curto & Itskov, 2008; Curto et al., 2019) has led to the development of analyses designed to understand the combinatorial properties of neural codes and their mapping to the stimulus space. Such analyses were initially inspired by the combinatorial coding seen in place cells (Moser et al., 2008), where neurons represent physical space in the form of ensemble and individual activity (Brown & Alex, 2006; Fenton et al., 2008). Place fields, the physical spatial areas encoded by place cells, can be arranged such that they span multiple spatial dimensions, e.g., 3D navigation space in bats (Yartsev & Ulanovsky, 2013). They can also encode for ‘social place’ (Omer et al., 2018), the location of conspecifics. 
Just as these spatial and social dimensions of place (external stimuli) may be represented by combinatorial coding, so too may other dimensions in external stimuli, such as in vision (Fujii & Ito, 1996; Panzeri & Schultz, 2001; Averbeck et al., 2006; Froudarakis et al., 2014; Fetz, 1997). In place cells, the term receptive field (RF) or place field may intuitively be thought of as a physical place. In the context of vision, for example, we may think of RFs less spatially and more abstractly as representing stimuli features or dimensions along which neurons may respond more or less strongly, e.g., features such as orientation, spatial frequency, or motion (Niell & Stryker, 2008; Juavinett & Callaway, 2015). Two neurons which become activated simultaneously upon visual stimuli moving to the right of the visual field may be said to share the RF of general rightward motion, for example. We may also think of RFs even more abstractly as dimensions in general conceptual spaces, such as the reward–action space of a task (Constantinescu et al., 2016), visual attributes of characters or icons (Aronov et al., 2017), olfactory space (Bao et al., 2019), the relative positions people occupy in a social hierarchy (Park et al., 2021), and even cognition and behaviour more generally (Bellmund et al., 2018). In the method described in Curto et al. (2019), tools from algebra are used to extract the combinatorial structure of neural codes. The types of neural codes under study are sets of binary vectors C ⊂ Fn2 , where there are n neurons in states 0 (off) and 1 (on). The ultimate structure of this method is the canonical form of a neural code CF (C). The canonical form may be analysed topologically, geometrically, and algebraically to infer features such as the potential convexity of the receptive fields (RFs) which gave rise to the code, or the minimum number of dimensions those RFs must span in real space. Such analyses are possible because CF (C) captures the minimal essential set of combinatorial descriptions which describe all existing RF relationships implied by C. RF relationships (whether and how RFs intersect or are contained by one-another in stimulus space) are considered to be implied by C by assuming that if two neurons become activated or spike simultaneously, they likely receive common external input in the form of common stimulus features or common RFs. Given sufficient exploration of the stimulus space, it is possible to infer topological features of the global stimulus space by only observing C (Curto & Itskov, 2008; Mulas & Tran, 2020). To the best of our knowledge, these methods have only been developed and used for small examples of BNNs. Here we apply them to larger BNNs and to ANNs (by considering the co-activation of neurons during single stimulus trials). Despite the power and broad applicability of these methods (Curto & Itskov, 2008; Curto et al., 2019; Mulas & Tran, 2020), two major problems impede their usefulness: (1) the computational time complexity of the algorithms to generate CF (C) is factorial in the number of codewords O(nm!)1, limiting their use in large, real-world datasets; and (2) there is no tolerance for noise in C, nor consideration given towards the stochastic or probabilistic natures of neural firing. 
We address these problems by: (1) introducing a novel method for improving the time complexity to quadratic in the number of neurons O(n2) by computing the generators of CF (C) and using these to answer the same questions; and (2) using information geometry (Nakahara & Amari, 2002; Amari, 2016) to perform hypothesis testing on the presence/absence of inferred geometric or topological properties of the stimulus or task space. As a proof of concept, we apply these new methods to data from a simulated BNN for spatial navigation and a simple ANN for visual classification, both of which may contain thousands of codewords. 2 PRELIMINARIES Before describing our own technical developments and improvements, we first outline some of the key mathematical concepts and objects which we use and expand upon in later sections. For more detailed information, we recommend referring to Curto & Itskov (2008); Curto et al. (2019). 2.1 COMBINATORIAL NEURAL CODES Let F2 = {0, 1}, [n] = {1, 2, . . . , n}, and Fn2 = {a1a2 · · · an|ai ∈ F2, for all i}. A codeword is an element of Fn2 . For a given codeword c = c1c2 · · · cn,, we define its support as supp(c) = {i ∈ [n]|ci ̸= 0}, which can be interpreted as the unique set of active neurons in a discrete time bin which correspond to that codeword. A combinatorial neural code, or a code, is a subset of Fn2 . The support of a code C is defined as supp(C) = {S ⊆ [n]|S = supp(c) for some c ∈ C}, which can be interpreted as all sets of active neurons represented by all corresponding codewords in C. Let ∆ be a subset of 2[n]. The subset ∆ is an abstract simplicial complex if for any S ∈ ∆, the condition S′ ⊆ S gives S′ ∈ ∆, for any S′ ⊆ S. In other words, ∆ ⊆ 2[n] is an abstract simplicial 1n is the number of neurons and m is the number of codewords. In most datasets of interest n ≪ m. complex if it is closed under inclusion. So, the simplicial complex for a code C can be defined as ∆(C) = {S ⊆ [n]|S ⊆ supp(c), for some c ∈ C} . A set S in a simplicial complex ∆ is referred to as an (|S| − 1)-simplex. For instance, a set with cardinality 1 is called 0-simplex (geometrically, a point), a set with cardinality 2 is called a 1-simplex (geometrically, an edge), and so on. Let S be an m-simplex in ∆. Any S′ ⊆ S is called a face of S. 2.2 SIMPLICIAL COMPLEXES AND TOPOLOGY Let C ⊆ Fn2 be a code and ∆(C) be the corresponding simplicial complex of C. From now on, we will use ∆ to denote the corresponding simplicial complex of a code C. Define ∆m as a set of m-simplices in ∆. Define Cm = { ∑ S∈∆m αSS | αS ∈ F2,∀S ∈ ∆m } . The setCm forms a vector space over F2 whose basis elements are all them-simplicies in ∆m.Now, define the chain complex C∗(∆,F2) to be the sequence {Cm}m≥0 . For any m ≥ 1, define a linear transformation ∂m : Cm → Cm−1, where for any σ ∈ ∆m, ∂m(σ) = ∑m i=0 σ i, with σi ∈ ∆m−1 as a face of σ, for all i = 0, . . . ,m. Moreover, the map ∂m can be extended linearly to all elements in Cm as follows ∂m ( ∑ S∈∆m αSS ) = ∑ S∈∆m αS∂m(S). Define the m-th mod-2 homology group of ∆ as Hm(∆,F2) = Ker (∂m) Im (∂m+1) for all m ≥ 1 and H0(∆,F2) = C0 Im (∂1) . Note thatHm(∆,F2) is also a vector space over F2, for allm ≥ 0. So, the mod-2m-th Betti number βm(∆) of a simplicial complex ∆ is the dimension ofHm(∆,F2). The βm(∆,F2) gives the number of m-dimensional holes in the geometric realisation of ∆. 2.3 CANONICAL FORM Let σ and τ be subsets of [n],where σ∩τ = ∅. The polynomial of the form ∏ i∈σ xi ∏ j∈τ (1−xj) ∈ F2[x1m. . . , xn] is called a pseudo-monomial. 
In a given ideal J ⊆ F2[x1, . . . , xn], a pseudomonomial f in J is said to be minimal if there is no pseudo-monomial g in J with deg(g) < deg(f) such that f = gh for some h ∈ F2[x1, . . . , xn]. For a given code C ⊆ Fn2 , we can define a neural ideal related to C as JC = ⟨ρc′ |c′Fn2 − C⟩, where ρc′ is a pseudo-monomial of the form∏ i∈supp(c′) xi ∏ j ̸∈supp(c′) (1− xj) . A set of all minimal pseudo-monomials in JC , denoted by CF (JC) or simply CF (C), is called the canonical form of JC . Moreover, it can be shown that JC = ⟨CF (C)⟩. Therefore, the canonical form CF (C) gives a simple way to infer the RF relationships implied by all codewords in C. One way to calculate the CF (C) is by using a recursive algorithm described in Curto et al. (2019). For a code C = {c1, . . . , c|C|}, the aforementioned algorithm works by constructing canonical forms CF (∅), CF ({c1}) , CF ({c1, c2}) , . . . , CF (C) , respectively. In each stage, the algorithm evaluates polynomials, checks divisibility conditions, and adds or removes polynomials from a related canonical form. 3 METHODS Our main methodological contributions are: (1) improving the computational complexity of the analyses relying on computing CF (C) (see Algorithm 1); and (2) using information geometry to identify whether identified algebraic or topological features are statistically significant. 3.1 COMPUTING AND ANALYSING THE CANONICAL FORM’S GENERATORS We may perform the same analyses as in Curto et al. (2019) in quadratic time by using Algorithm 1 to construct the generators of CF (C) rather than constructing CF (C) itself (as in Algorithm 2 of Curto et al. (2019)). Illustrative of this efficiency, representative experimental data with 25 neurons and 46 codewords took < 1 second to analyse on a high-end desktop PC (Intel i7 CPU and 64GB of memory), compared to 2 minutes 57 seconds using Algorithm 2 from Curto et al. (2019). Algorithm 1 Algorithm for computing generators of CF (C) Input: M = C ⊂ Fn2 as a patterns× neurons matrix Initialize: D ← empty list ▷ Stores the monomials. P ← empty list ▷ Stores the mixed monomial constructor tuples (σ,τ ). B ← empty list ▷ Stores the mixed monomial constructor tuples (τ ,σ). for each column i of M do for each column j of M do s← ∑ k(i · j)k if s < 1 then ▷ The pair i, j have disjoint receptive fields. append {i, j} to D else j′ ← j − 1 b← ∑ k=1(i · j′)k if b = 0 then ▷ The receptive field of j is a subset of receptive field of i. append (i, j) to P append (j, i) to B end if end if end for end for Generating desired elements of JC is then straightforward: monomials are supersets of disjoint pairs (from D) where each pair set has one element shared with at least one other disjoint pair set in the superset; mixed monomials are all possible combinations of first (σ set) and second (τ set) elements in the tuples of P (or vice-versa for B) – we do not allocate all of these elements but instead store the set constructors; and the negative monomial appears if and only if the all 1s codeword exists (which involves a simple summing check on columns of M ). 3.2 INFORMATION GEOMETRY FOR COMBINATORIAL NEURAL CODES Let N be the finite number of time bins for data of the neural activity patterns on n neurons. For any S ⊆ [n], let v(S) ∈ Fn2 , where supp(v(S)) = S and Pv(S) = #{v(S)} N . We would like to find the parameters θ = ( θS1 , θS2 , . . . 
3.2 INFORMATION GEOMETRY FOR COMBINATORIAL NEURAL CODES

Let N be the finite number of time bins for data of the neural activity patterns on n neurons. For any S ⊆ [n], let v(S) ∈ Fn2 with supp(v(S)) = S, and let Pv(S) = #{v(S)} / N be the empirical probability of observing the pattern v(S). We would like to find the parameters θ = (θS1, θS2, . . . , θS2n−1), where Si ⊆ [n], Si ≠ ∅, and S2n−1 = [n], such that the exponential family

P(x, θ) = exp( ∑S⊆[n], S≠∅ θS xS − ψ ),

where xS = ∏i∈S xi and ψ = − log(Pv(∅)), describes a neural activity pattern from the given neural activity data. We can calculate θS recursively for any non-empty S ⊆ [n] as

θS = log( Pv(S) / ( Pv(∅) ∏S′⊊S, S′≠∅ exp(θS′) ) ),

and the corresponding η-coordinates are given by ηW = ∑S⊆[n], S⊇W Pv(S).

Given a θ-coordinate, we can calculate the associated matrix G(θ) = (gθA,B)A,B⊆[n] using

gθA,B = Eθ(XA XB) − ηA ηB = ∑W⊇A∪B e−ψ ∏W′⊆W, W′≠∅ eθW′ − ηA ηB = ∑W⊇A∪B e−ψ e∑W′⊆W, W′≠∅ θW′ − ηA ηB,

where ψ = − log(Pv(∅)).

Example 3.1. Let n = 4, A = {1, 2}, and B = {2, 4}. Then the only sets W ⊇ {1, 2, 4} are {1, 2, 4} and {1, 2, 3, 4}, so

gθA,B = ∑W⊇{1,2,4} e−ψ ∏W′⊆W, W′≠∅ eθW′ − η{1,2} η{2,4}
= e θ{1}+θ{2}+θ{4}+θ{1,2}+θ{1,4}+θ{2,4}+θ{1,2,4}−ψ
+ e θ{1}+θ{2}+θ{3}+θ{4}+θ{1,2}+θ{1,3}+θ{1,4}+θ{2,3}+θ{2,4}+θ{3,4}+θ{1,2,3}+θ{1,2,4}+θ{1,3,4}+θ{2,3,4}+θ{1,2,3,4}−ψ
− η{1,2} η{2,4}.
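As an illustration of these coordinates, the sketch below estimates Pv(S) from binarized activity of a small set of neurons and then computes the η- and θ-coordinates from the formulas above. The function names are ours, the small constant eps for unseen patterns is an assumption for numerical convenience, and the computation is only practical for small n since the number of coordinates grows as 2^n.

```python
import numpy as np
from itertools import combinations
from math import log

def pattern_probabilities(M):
    """Empirical P_{v(S)} for every observed pattern S, from an (N x n) binary matrix."""
    N, n = M.shape
    probs = {}
    for row in M:
        S = frozenset(np.flatnonzero(row))
        probs[S] = probs.get(S, 0) + 1.0 / N
    return probs, n

def eta_theta_coordinates(M, eps=1e-12):
    probs, n = pattern_probabilities(M)
    p = lambda S: probs.get(frozenset(S), eps)        # unseen patterns get a tiny mass
    subsets = [set(c) for k in range(1, n + 1) for c in combinations(range(n), k)]
    # eta_W = Prob{x_i = 1 for all i in W} = sum of P_{v(S)} over S containing W
    eta = {frozenset(W): sum(pr for S, pr in probs.items() if W <= S) for W in subsets}
    # theta_S computed recursively, smallest sets first
    theta = {}
    for S in sorted(subsets, key=len):
        lower = sum(theta[frozenset(T)] for k in range(1, len(S))
                    for T in combinations(sorted(S), k))
        theta[frozenset(S)] = log(p(S)) - log(p(set())) - lower
    return eta, theta

# Tiny example: 3 neurons, 8 time bins.
M = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [1, 1, 1], [0, 0, 1], [1, 0, 1], [0, 0, 0]])
eta, theta = eta_theta_coordinates(M)
print(theta[frozenset({0, 1})])   # pairwise interaction coordinate for neurons 0 and 1
```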
3.3 HYPOTHESIS TESTING FOR ALGEBRAIC AND TOPOLOGICAL FEATURES

Using the previous sections, we can now perform hypothesis testing on specific RF relationships or topological features such as holes. Given Pv(S) for all S ⊆ [n] as in the previous subsection, we can calculate ηW for all W ⊆ [n], where ηW is equal to E(∏i∈W xi) = Prob{xi = 1, ∀i ∈ W}, using the formula ηW = ∑S⊆[n], S⊇W Pv(S).

Given a set of neurons A ⊆ [n] with |A| = k, we want to test whether there is a k-th order interaction between the neurons in A. We can do this by hypothesis testing as follows.

1. Calculate θS and ηW for all S, W ⊆ [n].
2. Specify a mixed coordinate for P(x; η, θ) based on A as ζAk = (ηAk−; θAk), where ηAk− = (ηH)H⊆[n], |H|≤k and θAk = (θH)H⊆[n], |H|>k.
3. Set the corresponding null-hypothesis coordinate as ζ0k = (η0k−; θ0k), where η0k− equals ηAk− except that ηA is set to 0, and θ0k equals θAk from the previous step.
4. Determine the matrix G(θ) = (gθA,B)A,B⊆[n] related to the θ-coordinate using the formula for gθA,B in Section 3.2. Arrange the rows and columns of G(θ) in block form as G(θ) = ( Aθ Bθ ; BTθ Dθ ), where Aθ is the submatrix of G(θ) with row and column indices from all H ⊆ [n] with |H| ≤ k and Dθ is the submatrix with indices from all H ⊆ [n] with |H| > k.
5. Determine the matrix G(η) = (gηA,B)A,B⊆[n] related to the η-coordinate via G(η) = G(θ)−1. We can write G(η) in the same block form, G(η) = ( Aη Bη ; BTη Dη ), where Aη and Dη are the submatrices indexed by |H| ≤ k and |H| > k respectively.
6. Determine the matrix G(ζAk) related to the mixed coordinate ζAk, with G(ζAk) = ( AζAk 0 ; 0 DζAk ), where AζAk = (Aθ)−1 and DζAk = (Dη)−1.
7. Calculate the test statistic
λ = 2 ∑Ni=1 log( P(xi; ηAk−, θ0k) / P(xi; ηAk−, θAk) ) ≈ 2N Ẽ[ log( P(x; ηAk−, θ0k) / P(x; ηAk−, θAk) ) ] ≈ 2N D[ P(x; ηAk−, θ0k); P(x; ηAk−, θAk) ] ≈ N gζAk,AA (η0A − ηA),
where gζAk,AA is the corresponding entry of the matrix G(ζAk).
8. Fix a level of significance α and find the value χ2α(1) (the chi-square value with significance level α and one degree of freedom) from the χ2 look-up table.
9. Compare λ and χ2 = max{χ2α(1), 1 − χ2α(1)}:
• If λ ≥ χ2, there is a significant interaction between the neurons in A (reject the null hypothesis).
• Otherwise, there is no significant interaction between the neurons in A (accept the null hypothesis).

Since the size of G scales as 2n, we use a subset M of all neurons, where A ⊂ M and |M| = 10. We pick a set A relevant to the feature we want to test the significance of, and choose random neurons (without replacement) not already in A for the remaining elements of M, repeating the test until we exhaust all neurons. We then correct for multiple comparisons and use α = 0.05 to detect whether there is a significant interaction in A.

The choice of A depends on which feature we wish to analyse. When analysing whether two neurons are disjoint in their RFs (a monomial relationship in CF(C)), we set A as those two neurons. When analysing whether the RF of i is contained within the RF of j (a mixed-monomial relationship in CF(C)), we first set A as those two neurons, and then set A as i with a random set of neurons (repeating this at least 5 times, with different random sets, and correcting for multiple comparisons). For every dimension m where βm(∆, F2) > 0, when analysing whether a hole is significant, we test all possible sets A ⊂ M ⊂ [n] which could close the hole. If any test closes the hole, there is no hole, whereas if no test closes the hole, there is a hole.

4 APPLICATIONS

4.1 SPATIAL NAVIGATION IN BNNS

Using the RatInABox simulation package (George et al., 2022), we created simple 2D navigation environments with 0, 1, 2, or 3 holes in the first dimension. We used a random cover of 40 place cells modelled using Gaussians for the probability of firing and geodesic receptive field geometries. Starting at a random position, we then simulated random walks governed by Ornstein–Uhlenbeck processes for 30 minutes, with parameters based on rat locomotion data in Sargolini et al. (2006). We constructed a combinatorial neural code C using a window size of 10 ms, allowing for up to 3,000 unique codewords. We constructed ∆(C) up to dimension 2 and calculated β1(∆, F2), with the hypothesis that β1 would equal the number of holes in the respective environment.

Figure 1 shows an example of a single place cell and part of a simulated trajectory for an environment with β1 = 1, together with a geometric realisation of ∆(C) constructed after a 30-minute random walk. Table 1 shows the number of statistically significant holes found after different durations of the trajectories for environments with different topologies. Although some holes were occasionally detected after 10 minutes of a random walk, in all cases all holes in the environment were detected consistently after 20 minutes. There were a large number of monomials across all conditions (all simulations had > 1,000) due to the covering nature of the RF arrangements. There were also a small number (all simulations had < 5) of mixed monomials (RFs found to be subsets, significantly so, of other RFs).

4.2 VISUAL CLASSIFICATION IN ANNS

We trained a multi-layer perceptron (MLP) to classify handwritten digits from the MNIST dataset (LeCun et al., 2010) (see Figure 2, top, for examples). The model consisted of an input layer with 784 neurons (the digit pixel values), followed by two hidden layers, each with 50 neurons using the rectified linear unit activation function and 20% dropout. The final output layer consisted of 10 neurons (corresponding to the 10 digit class labels) and used a softmax activation function.
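A minimal PyTorch sketch of this architecture is given below, together with a helper that records hidden-layer activity and binarizes it about each unit's mean activation, the construction used for the codes C1 and C2 described next. The class and function names are ours, the data loaders are assumed to exist, and the commented training skeleton follows the hyperparameters reported in the text that follows; it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """784 -> 50 -> 50 -> 10 classifier with ReLU and 20% dropout, as described above."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 50)
        self.fc2 = nn.Linear(50, 50)
        self.out = nn.Linear(50, 10)
        self.drop = nn.Dropout(p=0.2)

    def forward(self, x):
        h1 = self.drop(torch.relu(self.fc1(x)))
        h2 = self.drop(torch.relu(self.fc2(h1)))
        return self.out(h2), h1, h2            # logits plus both hidden activations

@torch.no_grad()
def hidden_layer_codes(model, loader):
    """Collect hidden activations on held-out samples and binarize each unit about its mean,
    giving one binary code per hidden layer (samples x 50 matrices)."""
    model.eval()
    acts1, acts2 = [], []
    for x, _ in loader:
        _, h1, h2 = model(x.view(x.size(0), -1))
        acts1.append(h1)
        acts2.append(h2)
    acts1, acts2 = torch.cat(acts1), torch.cat(acts2)
    C1 = (acts1 > acts1.mean(dim=0)).to(torch.uint8)
    C2 = (acts2 > acts2.mean(dim=0)).to(torch.uint8)
    return C1, C2

# Training skeleton matching the reported hyperparameters (sketch only; train_loader assumed).
model = MLP()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
criterion = nn.CrossEntropyLoss()   # takes logits; the softmax is applied implicitly
# for epoch in range(10):
#     for x, y in train_loader:     # batches of 32 MNIST digits, flattened to 784
#         optimiser.zero_grad()
#         logits, _, _ = model(x.view(x.size(0), -1))
#         loss = criterion(logits, y)
#         loss.backward()
#         optimiser.step()
```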
The data was split into 50,000 digits for training, 10,000 for validation, and 10,000 for testing, allowing for up to 10,000 unique codewords in our analysis. The network was trained over 10 epochs with a batch size of 32 samples. The optimiser was stochastic gradient descent (with learning rate 0.01 and momentum 0.5) and the criterion was the cross-entropy loss between the one-hot vector of the true class labels and the output layer's activation for each sample. The MLP achieved > 96% accuracy after 10 epochs (Figure 2, middle). After each epoch, test samples which the network did not see during training were fed through the network and the activity of all neurons in both hidden layers was recorded. The recorded activities for each hidden layer corresponding to each sample were then binarized about their means (calculated over all samples) to create a code C of size 10,000 × 50 for each layer; we denote these C1 for layer one and C2 for layer two.

The codes C1 and C2 showed differences in their algebraic and geometric structures across training epochs, and also differed between themselves (Table 2). In general, C1 had more overlapping RFs and spanned a larger number of real dimensions (assuming convexity) than C2. However, during training, we find that both codes lower their dimensionality and gradually spread out their RFs to cover more of the space. This is also shown by the leftward shift between epoch 1 and 10 in the histograms of the number of co-active neurons in C2 (Figure 2, bottom).

5 DISCUSSION

We have shown it is possible to analyse the intrinsic geometry and topology of combinatorial neural codes from biological and artificial networks in an efficient and probabilistic manner. With these improved methods, we can now comfortably study codes with tens and even hundreds of thousands of codewords. We have shown how these methods can be used to better understand (with some statistical surety) how the internal representations of external inputs within these networks can change through learning, experience, and network depth.

Neuroscientists have shown that combinatorial neural codes can occupy low-dimensional subspaces, called neural manifolds, in the covariance of their neural activities (Gallego et al., 2017; Feulner & Clopath, 2021). Trajectories and regions in these subspaces can correspond to task cognition, perceptual classification, and movement (Cohen et al., 2020; Chung & Abbott, 2021). For example, Gardner et al. (2022) show that the activity of populations of hundreds of grid cells within single modules of medial entorhinal cortex (a brain area partly responsible for navigation) occupies positions on a toroidal manifold. Positions on this manifold correspond to positions in the 2D space in which the animal is navigating. These findings might lead us to believe that combinatorial neural codes are intrinsically low-dimensional despite being embedded in the high-dimensional combinatorial space of neural activity. However, theoretical (Bartolo et al., 2020) and experimental (Rigotti et al., 2013) studies have shown that the dimensionality of these neural manifolds is influenced by, and often directly corresponds to, the dimensionality of the task or learning under study. Indeed, the low-dimensional embeddings found by Gardner et al. (2022) are predicted by the two-dimensionality of the navigation (the underlying cause of the neural activity).
Mathematically-optimal combinatorial neural codes and their RFs are also related to the dimensionality of the inputs those codes are attempting to represent (Wang et al., 2013). In more naturalistic and complex tasks, maintaining high-dimensional representations in the neural code may allow for increased expressibility but lower generalisability, whereas reducing to low-dimensional representations may allow for less expressibility but higher generalisability (Fusi et al., 2016; Badre et al., 2021). High-dimensional codes are often found in recordings from BNNs, particularly when individual neurons encode multiple input features, allowing linear read-out of a large number of complex or simple features (Fusi et al., 2016). Such neurons, for example in macaque inferotemporal cortex (Higgins et al., 2021), can also encode very specific and independent higher-dimensional features. This implies that combinatorial neural codes can include mixtures of coding strategies which are simultaneously low- and high-dimensional.

One of the key advantages of the techniques developed and applied in this study is that we can consider these different dimensionalities of coding at the same time. We do not reduce the embedding dimensionality to perform our analysis (which would be equivalent to assuming a low-dimensional code). Nor do we try to map individual neuron responses to experimenter-known but network-unknown external, high-dimensional variables (which would be equivalent to assuming a high-dimensional code). Instead, we keep the full, original dimensionality of the data and can identify low- or high-dimensional features and response relationships at local and global levels simultaneously, all without reference to information external to the neural network. We also provide a method for testing the statistical significance of these features and relationships, again while maintaining the original high embedding dimension of the data. This allows us to avoid making any strong assumptions about the dimensionality of the task, stimuli, or the corresponding neural code – instead, we let the data speak for themselves.

We do carry over some limitations from prior work, most prominently: (a) we assume that the joint activity of neurons corresponds to common inputs or selectivity thereof; and (b) we binarize neural signals into 'on' and 'off' states. We suggest future work now focus on mitigating these limitations by: (a) performing causal inference tests on neural co-activations; and (b) considering polynomials over larger finite fields, e.g., F4, or extending these methods to more 'continuous' structures, e.g., manifolds.
1. What is the focus and contribution of the paper on neural codes?
2. What are the strengths of the proposed approach, particularly in terms of time complexity improvement and information geometry usage?
3. What are the weaknesses of the paper, especially regarding the figure quality and potential lack of familiarity with related work?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The work is about extracting the combinatorial (co-activation) structure of neural codes. It introduces a new method for improving the time complexity to quadratic in the number of neurons, and uses information geometry for hypothesis testing on the presence of geometric or topological properties.

Strengths And Weaknesses
Strong but dense paper. Potentially this should be a journal paper. Important and interesting approach that can help explain the workings of BNNs and ANNs. Smaller point: Figure 2 has low resolution.

Clarity, Quality, Novelty And Reproducibility
Could benefit from more space to explain the concepts introduced, as they draw from many different parts of mathematics. Otherwise clear and high quality. The novelty is a little hard to assess as I'm not too familiar with related work.